
THE FRONTIERS COLLECTION

Series Editors
Avshalom C. Elitzur
Unit of Interdisciplinary Studies, Bar-Ilan University, 52900, Ramat-Gan, Israel
e-mail: avshalom.elitzur@weizmann.ac.il

Laura Mersini-Houghton
Department of Physics, University of North Carolina, Chapel Hill, NC 27599-3255
USA
e-mail: mersini@physics.unc.edu

Maximilian Schlosshauer
Department of Physics, University of Portland,
5000 North Willamette Boulevard Portland, OR 97203, USA
e-mail: schlossh@up.edu

Mark P. Silverman
Department of Physics, Trinity College, Hartford, CT 06106, USA
e-mail: mark.silverman@trincoll.edu

Jack A. Tuszynski
Department of Physics, University of Alberta, Edmonton, AB T6G 1Z2, Canada
e-mail: jtus@phys.ualberta.ca

Rudy Vaas
Center for Philosophy and Foundations of Science, University of Giessen, 35394,
Giessen, Germany
e-mail: ruediger.vaas@t-online.de

H. Dieter Zeh
Gaiberger Straße 38, 69151, Waldhilsbach, Germany
e-mail: zeh@uni-heidelberg.de

For further volumes:


http://www.springer.com/series/5342
THE FRONTIERS COLLECTION

Series Editors
A. C. Elitzur L. Mersini-Houghton M. Schlosshauer
M. P. Silverman J. A. Tuszynski R. Vaas H. D. Zeh

The books in this collection are devoted to challenging and open problems at the
forefront of modern science, including related philosophical debates. In contrast to
typical research monographs, however, they strive to present their topics in a
manner accessible also to scientifically literate non-specialists wishing to gain
insight into the deeper implications and fascinating questions involved. Taken as a
whole, the series reflects the need for a fundamental and interdisciplinary approach
to modern science. Furthermore, it is intended to encourage active scientists in all
areas to ponder over important and perhaps controversial issues beyond their own
speciality. Extending from quantum physics and relativity to entropy, consciousness
and complex systems, the Frontiers Collection will inspire readers to push
back the frontiers of their own knowledge.

For a full list of published titles, please see back of book or springer.com/series/5342
William Seager

NATURAL
FABRICATIONS
Science, Emergence and Consciousness

William Seager
University of Toronto Scarborough
Scarborough, ON
Canada

ISSN 1612-3018
ISBN 978-3-642-29598-0 ISBN 978-3-642-29599-7 (eBook)
DOI 10.1007/978-3-642-29599-7
Springer Heidelberg New York Dordrecht London

Library of Congress Control Number: 2012939636

© Springer-Verlag Berlin Heidelberg 2012


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Exempted from this legal reservation are brief
excerpts in connection with reviews or scholarly analysis or material supplied specifically for the
purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the
work. Duplication of this publication or parts thereof is permitted only under the provisions of
the Copyright Law of the Publisher's location, in its current version, and permission for use must always
be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright
Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Acknowledgments

It is impossible to thank by name all the people who have helped me on this project
in one way or another, but I am very grateful for all the comments and criticism. I
do want to acknowledge the extraordinary efforts of my two very able research
assistants, Matt Habermehl and Adrienne Prettyman, whose help was invaluable,
as well as the aid and encouragement of Angela Lahee of Springer-Verlag.
Much of this work has been presented at conferences devoted to consciousness
studies, most especially the famous biennially alternating conferences which, under
the optimistic banner of "Towards a Science of Consciousness", are held respectively
in Tucson, Arizona, and various interesting places around the world. All of
the organizers, commentators, discussants, and attendees of the TSC and other
conferences, where philosophy and science come alive, get my thanks. This
project was materially aided by generous grants from the Centre for Consciousness
Studies at the University of Arizona and the Social Sciences and Humanities
Research Council of Canada as well as travel grants from my own institution, the
University of Toronto Scarborough. Finally, I would like to thank my wife,
Christine McKinnon, for her unflagging intellectual and emotional support, not to
mention supreme patience.

Toronto, March 2012

Contents

1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Part I The Scientific Picture of the World

2 Looking Out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

3 Looking In. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

4 Consciousness in the Brain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

Part II Emergence and Consciousness

5 Emergence and Cellular Automata . . . . . . . . . . . . . . . . . . . . . . . . 65

6 Against Radical Emergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

7 Emergence and Supervenience . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

8 Generalized Epiphenomenalism . . . . . . . . . . . . . . . . . . . . . . . . . . 155

9 The Paradox of Consciousness . . . . . . . . . . . . . . . . . . . . . . . . . . . 175

10 Embracing the Mystery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189


Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243

Titles in this Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259


Chapter 1
Overview

On the face of it, emergence is everywhere. At least, if by emergence we mean
the ubiquitous fact that most things possess properties which their constituents lack,
then emergence is everywhere. The simplicity of this conception masks a great deal
of complexity however. There are many forms of emergence, some of which are
most certainly present in our world while others are doubtful and clash with our
best developed picture of the world. This book is about the nature of emergence,
the scientific picture of the world and the ultimate question of whether and how
consciousness fits into that picture.
The overall arc of argumentation of the book is quite straightforward. Part I aims
to sketch the scientific picture of the world against which the entire argument is
constructed. To some readers, the material in Part I will not be new. Readers familiar
with this material or willing to grant the range, power and comprehensiveness of the
scientific picture of the world should thus feel free to skim the first three chapters.
But the point of Part I is not mainly to impart scientific information. It is rather to
emphasize the fact that science has been astonishingly successful in constructing
an account of the world which is comprehensive, unified and yet hierarchical. It is
strange but little remarked that science has been so very successful in building deeply
connected explanatory theories that appear to reveal a breathtaking unity in nature.
These connections reach so deep and far that they are today beginning to encompass
the most mysterious feature of the universe: consciousness. In every area of study
we find anchor points linking emergent features to the fundamental constituents of
our world. Throughout science there are rich interconnections which underpin ever
more comprehensive explanatory projects. No permanent dead end has yet blocked
our progress and everything seems to hang together remarkably well.
All this suggests a view of the world, one I hope readers find to be virtually
commonsense, which I call the scientific picture. According to this picture there are
fundamental features of the world, which it is the province of physics to reveal, and
then a vast and convoluted hierarchy of higher level entities, properties, processes
and laws of nature. The scientific picture thus endorses emergence but of a particular
kind, which I label conservative emergence. This kind of emergence has gone by


many names: "benign", "weak", "epistemological". Its core claim is that all emergent fea-
tures are strictly and entirely determined by the nature and state of the fundamental
physical constituents of the world. That is, any two worlds that were indistinguish-
able with respect to all their fundamental physical features, would necessarily be
indistinguishable with respect to all their emergent features as well.
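
The determination claim can be put schematically. Writing P(w) for the complete
fundamental physical state of a world w and E(w) for its emergent features,
conservative emergence amounts to something like this (a sketch in first-order
style; the book's own formal treatment comes only in Chap. 7):

    \forall w_1 \forall w_2 \, [\, P(w_1) = P(w_2) \rightarrow E(w_1) = E(w_2) \,]

Radical emergence, discussed below, is simply the denial of this conditional.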
It is not obvious that the world's emergence is all and only conservative. Other
forms of emergence are perfectly conceivable. Many philosophers and scientists
have held that conservative emergence is inadequate to accommodate the diversity,
dynamics and organization of the world. I will call the main competitor to conser-
vative emergence radical emergence (also known as strong or ontological emer-
gence). Radical emergence denies that the state and nature of the physical funda-
mentals wholly and strictly determine, by their own nature, all the emergent features
of the world. According to radical emergence, the world has some leeway as to its
total structure despite its basic physical state. That is, any two worlds that
were indistinguishable with respect to all their fundamental physical features, could
nonetheless be distinguishable with respect to some of their radically emergent features.
The simplest way to conceive how radical emergence would work is to countenance
the presence of laws of emergence which are not included in the correct theory of
the physically fundamental. Such extra laws could then vary from world to world,
leading to different sorts of emergent features.
Despite the clear distinction between conservative and radical emergence it is sur-
prisingly easy to confuse them. The reason is that the conservative-radical distinction
interacts with a second distinction which is of much greater practical importance to
working scientists. This is the distinction between accessible, or intelligible, ver-
sus inaccessible explanatory structures. Roughly speaking, conservative emergence
coupled with accessible explanations of the emergents is reduction. Since reduction
is composed of two distinct components it is possible for reductionism to fail even
if conservative emergence exhausts all the emergence there is in the world. Many
examples of complex systems can be found where there is no doubt that the emergent
level is conservative, but there is no hope of explaining or predicting what is hap-
pening at that level in terms of lower levels. The simplest example is that of cellular
automata, which will be discussed in Chap. 5.
Of course, the big question is: is there any radical emergence in the world? Part I
of this book assembles a tiny fragment of the vast amount of indirect empirical
evidence we have that suggests not. If there is radical emergence, it seems to be very
reluctant to reveal itself.
Part II begins with more theoretical arguments for the conclusion that conserv-
ative emergence is all we need within the confines of the scientific picture of the
world. The general concept of emergence is sharpened by an examination of the
perfect imaginary playground: John Conway's game of life. Radical and conserva-
tive emergence can be precisely defined in this domain. The general application of
cellular automata to the problem of emergence opens up further vistas, all the way
to the speculative possibility of digital physics. If the world is digital we can derive
something close to a definitive proof that conservative emergence is the only kind of
emergence at work in the world.
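
The core of the game of life is startlingly small; the following minimal sketch
(the set-based grid representation and names are illustrative, not drawn from the
text) implements the entire fundamental physics of that world in a dozen lines:

    # One update step of Conway's game of life, acting on a set of live cells.
    # Rule: a live cell survives with 2 or 3 live neighbours; a dead cell
    # becomes live with exactly 3. Everything else in the game emerges from this.
    from itertools import product

    def step(live):
        counts = {}
        for (x, y) in live:
            for dx, dy in product((-1, 0, 1), repeat=2):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A "glider": after four steps the same shape reappears shifted by (1, 1),
    # a moving object at the emergent level, though the rule never mentions motion.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the original glider, translated diagonally

Patterns like the glider are exactly the kind of conservatively emergent feature at
issue: wholly fixed by the rule and the initial state, yet best described in a
vocabulary the rule itself lacks.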

Chapter 6 attempts to undercut arguments in favour of radical emergence more
directly, and via a more conventional route. Although some have argued that chaos in
dynamical systems supports the existence of radical emergence, I find insufficient sup-
port for this claim. Without doubt, classical dynamical systems exhibit emergence but
conservative emergence seems sufficient to account for it. An important point, which
recurs throughout the book, is that the claim that conservative emergence exhausts
emergence does not imply any strong reductionist conclusions. Strong reductionism
entails that the reduced theory can be jettisoned in favour of the reducing theory
in explanatory contexts. In fact, this is seldom if ever true and the discussion of
emergence and chaos shows why. Issues of complexity, limits of measurement accu-
racy and questions of system scale intelligibility all imply that reference to emergent
features is indispensable for successful explanation, prediction and understanding.
But this vitally important aspect of emergence is entirely compatible with conser-
vative emergence. Chapter 6 ends with a discussion of the issue of whether quantum
mechanics introduces such a deep change in our understanding of nature that it
mandates acceptance of some form of radical emergence. Again, I think not. QM
undeniably introduces new features, most especially the superposition principle and
entanglement, which force modifications to our conception of emergence. But I argue
that quantum emergence remains within the domain of conservative emergence.
Chapter 7 is the final chapter devoted to understanding emergence. Here, I develop
a detailed analysis of emergence based on the concept of supervenience. A powerful
philosophical tool, the basic idea of supervenience is that of a principled dependence
of one domain upon another (so obviously already intimately related to the topic of
emergence). The mantra of supervenience is this: domain A supervenes upon domain
B when there can be no change in A without a change in B. This simple formulation
disguises a complex notion which this chapter explores. Chapter 7 is the most tech-
nical part of the book, being somewhat, in the words of Thomas Hobbes, "covered
over with the scab of symbols". I think the precision and clarity of logical symbolism
is critical, and as deployed here requires only a basic understanding of first order
syntax. I have endeavoured in any case to provide a clear textual explication of every
formula and its role in the argumentation. This chapter develops a precise general
definition of emergence, the radical-conservative distinction and forges an interest-
ing link between the temporal dynamics of system evolution and the supervenience
based account of emergence.
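
The mantra admits a standard regimentation. For families of properties A and B,
one schematic version (the chapter's own definitions are considerably more refined)
is:

    \forall x \forall y \, [\, \forall B\,(Bx \leftrightarrow By) \rightarrow \forall A\,(Ax \leftrightarrow Ay) \,]

that is, any two things exactly alike in all B-respects must be exactly alike in
all A-respects.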
The book then turns from articulating our understanding of emergence to explor-
ing the consequences of emergence, particularly the consequences of the world being
restricted to conservative emergence in accord with the scientific picture of the world.
To this end Chap. 8 argues that all emergent features must be, in a sense, epiphenom-
enal. That is, such features cannot possess distinctive causal efficacy of their own.
Perhaps that is not so surprising given that conservative emergence implies that every-
thing about emergent features is derivative from the fundamental physical structure
of the world. It also may not seem very threatening given that we can discern a kind of
conservatively emergent causation, which derives from the fundamental level of cau-
sation in roughly the same way that conservatively emergent laws of nature appear.
Once again, it is important to stress that the epiphenomenal character of conservative

emergents does not mean they should be ignored or demoted from pride of explana-
tory place. But the metaphysically real features which drive the world from state to
state or which, in the words of C. Lloyd Morgan, provide the "go" of the world, all
reside at the fundamental level alone.
What is the upshot of this generalized epiphenomenalism? I argue in Chap. 9 that it
leads to a severely worrying consequence: the paradox of consciousness. In a nutshell,
the problem is this. By their nature, conservative emergents have no actual role in
the world, apart from being appreciated and deployed by those conscious entities
that find them, or concepts of them, helpful in categorizing and organizing their
experience. These emergents stand as mere epistemic potentials, awaiting take-up by
some appreciative mind. They are the "natural fabrications" of my title. The paradox
looms when we realize that consciousness itself, according to the scientific picture of
the world, must be merely a conservatively emergent feature. The scientific picture
strongly suggests that consciousness is the product of vastly complicated neural
systems which appeared in the universe long after its creation. But if consciousness
itself is a conservatively emergent feature then it too stands as epiphenomenal and a
mere epistemic resource which appears only to...consciousness. There is a worrying
whiff of circularity here which I argue is the fundamental and irresolvable problem
with the treatment of consciousness in the scientific picture of the world.
The paradox of consciousness leaves us in a quandary. The success of the scien-
tific picture cannot lightly be thrown away, but the paradox undercuts its most basic
presuppositions. In the absence of any settled solution, Chap. 10 offers a range of
possible escape routes which I will sketch here in anticipation of their full devel-
opment. I call these options: Watchful Waiting, Embracing Emergence, Favouring
Fundamentality and Modifying Metaphysics. The first involves simply waiting and
hoping that further advances within science will both continue to support the sci-
entific picture of the world and somehow integrate consciousness into that picture
without falling afoul of the paradox. Sure to be a popular option, it nonetheless leaves
our problem entirely unresolved. If the paradox of consciousness is an essential con-
sequence of the scientific picture then no amount of waiting and pursuing standard
research strategies will ever provide a resolution.
The other options are thus of necessity quite radical departures from the scientific
picture. As its name suggests, Embracing Emergence requires the introduction of
radical emergence into our view of the world. One difficulty with this option is the
worrying question of why, if radical emergence is present in the world, it has not
revealed itself heretofore. Those who embrace emergence will have to explain why
the world does such a good job of pretending that conservative emergence suffices.
That and other problems are explored in Chap. 10.
The traditional alternative to radical emergence in the face of problems of inte-
grating recalcitrant phenomena into some theoretical domain is to expand the range
of fundamental features. In this case, the option of Favouring Fundamentality will
require that consciousness itself, in some form or other, presumably of an extremely
primitive, simple and unstructured kind, be added to the roster of the universe's truly
basic or fundamental entities. One such view is panpsychism, but there are other
possibilities. Sheer incredulity is a frequent response to panpsychist views, but an

interesting case can be made that panpsychism is actually the minimal alteration of
the scientific picture of the world which can resolve the paradox of consciousness.
The final option, Modifying Metaphysics, will strike many as the most radical of
all. It stems from the observation that there is a tacit but vital premise underlying
the scientific picture of the world: scientific realism. This is the position holding that
science provides the best, perhaps only, route towards knowledge of the ultimate
nature of reality. It is the core belief supporting the idea that the scientific picture is
and should be our metaphysics of nature. But alternative views of science and nature
have been advanced with considerable philosophical sophistication and from a base
of excellent scientific knowledge. The rejection of scientific realism forces a radical
alteration in the dialectic which leads to the paradox of consciousness. The result is
highly interesting and perhaps encouraging, especially insofar as the denial need not
undercut the explanatory power of science within its proper sphere.
In basic terms, these options appear to exhaust the alternatives. I wish I could
prove that one of them offers the one true path to enlightenment. But I fear that the
still immature state of our understanding of science, emergence, consciousness and
their relationships compels us to leave the issue unresolved. While the problem of
consciousness is endlessly fascinating, it will, I think, become ever more pressing
and intellectually visible as our knowledge of the anchor points between brain and
consciousness grows over the next century.
Part I
The Scientific Picture of the World
Chapter 2
Looking Out

2.1 Lonely Earth

Try to imagine a Lonely Earth in a universe very different from ours, where the
night sky is almost empty. To the naked eye, there appear only seven celestial objects
which slowly move around the night sky in bizarre and complex ways. Some rare
nights would be utterly black, others, equally rare, would reveal all five visible
planets and the moon. In such a world, the history and practical use of astronomy
would have followed a course very different from ours, even far back into ancient
times; just think of the effect on early navigators of there being no orienting stars,
such as the conveniently located north star.1 Without the field of fixed stars to
provide a background frame of reference, it would have been very difficult for proto-
astronomers to wrestle order out of the apparently unruly planetary behaviour. And
very likely more than astronomy would have suffered. Conceivably, mathematics
itself would never have really gotten off the ground and perhaps Plato's vision of
an eternal, unchanging realm of perfect truth and beauty would have been almost
unthinkable.
But for the sake of this thought experiment, let us instead suppose that at least
science and technology advanced in our imaginary world pretty much as they did in
ours, so that Galileo's twin could, in the realm of observation, eventually turn his
crude telescope onto the few available celestial objects and, in the realm of theory,
could apply Euclid and read the language of mathematics in the great book of the
natural world. This imagined world would present no impediment to Galileos earth
bound experiments on motion and gravitation. And it seems that this observer could
more or less duplicate many of our own Galileo's astronomical discoveries, such as
the mountains of the moon and the system of Jovian satellites.
It is true that without the fixed backdrop of stars it would be a lot more difficult
to measure the position of the planets accurately, but it would not be altogether
impossible (we can envisage using the horizon and time of year as the basis of
a system of celestial measurement). So we might be permitted to go on to imagine
figures akin to Brahe and Kepler, who led the way to the mathematical understanding


of planetary motion via universal gravitation which was perfected by Newton (it is
hard to see, though, how, lacking the supreme regularity of the stellar sphere as
an anchor, the Ptolemaic system would ever be devised in our imaginary world).
Now let the people of our imagined lonely Earth continue their scientific devel-
opment in parallel to actual progress, up to modern times. Visual access to a starry
sky does not seem crucial for this development. But as their science moves towards
the equivalent of our twenty-first century physics, severe strains would appear with
respect to the science of cosmology. Even in the actual world, this is a strange sci-
ence, with no prospects for direct experimentation and reliant on disturbingly long
and complex chains of argumentation linking its hypotheses with the relatively few
observations to which they are relevant. Nonetheless, our cosmologists have man-
aged to produce a grand and beautiful account of the overall structure of the universe,
from its inception (minus a few picoseconds) to its possible end states.
But how could any kind of a scientific cosmology get off the ground on Lonely
Earth? We can assume that as their telescopes increased in quality and power, they
would discover Uranus, Neptune and Pluto; in fact, they would have the advantage in
spotting new planets and dwarf planets (if they thought to look and had the patience)
for there would be no field of stars with which to confuse a planet. But even measuring
the size of the solar system would be difficult. We did it by using trigonometry and
parallax, the apparent displacement of a nearby object against a distant background
caused by the motion of an observer (you can readily observe parallax by holding
up a finger at arm's length and viewing it through each eye in turn). Without the
fixed stars, our imaginary astronomers would have had to devise a notional celestial
coordinate system, and instruments of sufficient accuracy that reliable assignments
of location could be made without the guide of any visible background (notably, it
would require exceptionally accurate clocks, which took a long time to develop in
our world). At the very least, this would have made their distance measurements far
more uncertain than ours were.2
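
The geometry behind such measurements is worth a single formula. Given a baseline
b (for us, the radius of the Earth's orbit) and a measured parallax angle p, the
distance to the target follows from the standard relation (not spelled out in the
text):

    d = \frac{b}{\tan p} \approx \frac{b}{p} \qquad (p \text{ small, in radians})

With b equal to one astronomical unit, a parallax of one second of arc corresponds
to about 3.26 light years, which is the definition of the parsec.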
In any case, once they had their solar system mapped out, no further advance
in cosmology would seem to be possible. On the theoretical side, the development
of relativity theory and quantum mechanics could proceed (on Earth, astrophysical
data had very little to do with the creation of either special or general relativity, and
nothing to do with the creation of quantum mechanics). As theoretical curiosities
(even in our world they were little more than that for some time) general relativists
could create the various cosmological models, such as Einstein's static universe or
the Friedmann/Lemaître expanding universe model. But these theoretical exercises
would lack any contact with sources of evidence relevant to cosmology. One of
the very few (and highly influential) early tests of general relativity was the 1919
measurement by Eddington of how starlight is bent by the gravitational field of the
Sun. In our imagined world, such a test would be impossible (the other early test
however, the relativistic explication of the behaviour of Mercury's orbit, could have
been accomplished, at least in principle, given that they could measure the position
of Mercury with sufficient accuracy).
What could the scientists of that world say about the creation and evolution of the
universe? Next to nothing. I would venture to guess the best account they could come

up with would barely go beyond Laplace's nebular hypothesis.3 In the absence of
any of the stellar nebulosities which so piqued the curiosity of our astronomers and
set the stage for crucial debates about the extent of the Milky Way galaxy and the
overall size of the universe, even that would take real creative genius. The nebular
theory of solar system formation states that in the beginning was a slowly revolving
(at least in the sense that it had some net angular momentum) amorphous cloud of gas
and dust which gradually collapsed due to its own internal gravitational attraction.
The increase in spin caused by the collapse would produce a disk of material, out
of which would form the planets as the sun formed at the disks centre. Powerful
basic evidence for some such process lies in the two simple facts that all the planets
revolve around the Sun in the same direction and their orbits lie pretty much in the
same plane.
Being in possession of quantum mechanics, our imagined scientists could discover
why the sun shines and they would then, perhaps, have a reasonable account of the
formation of their solar system from the assumption of a primordial gas cloud. Their
chemistry and biology could be as advanced as ours; they would have as much reason
as we do to believe that life is a complex chemical process driven by evolution so
they might well conjecture that life emerged from an earlier lifeless state.
But the whole system would remain deeply puzzling. The initial cloud of gas and
dust from which their solar system formed would have to be simply postulated as,
in the first place, existing, and then as containing just the mix of elements which
they would observe in the current form of their solar system. But then, assuming
our imaginary scientists discover how the sun can forge new elements through a
host of possible nucleosynthetic pathways, they would have an elegant solution to a
non-existent problem. For there is nothing in their world that could generate these
elements.
Yet I don't think our Lonely Earth scientists could reject the nebular hypothesis in
favour of the conclusion that their universe was eternal and essentially unchanging.
Several lines of thought and evidence would lead them to the opposite conclusion.
In the first place, they could calculate how long the sun should be able to generate
energy via nuclear fusion, given its composition and mass. This would tell them that
sun could only be a few billion years old. Second, from the rate of radioactive decay
of various elements present within the bodies of the solar system, they would find
that none were more than 4 billion or so years old (this would be in concordance with
the age limit of the Sun). They could also note that the face of the moon, for exam-
ple, reveals a violent history of impacts which no longer occur at high frequency,
obviously suggesting that the solar system is not in some kind of equilibrium state as
would be expected if it were eternal. Finally, even if they allowed for some unknown
physics which permitted the Sun to shine indefinitely and which somehow eliminated
the evidently finite age of the material in the solar system, very fundamental dynam-
ical considerations would reveal that the system itself was unstable over extremely
long periods of time (see Lecar et al. 2001).
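
The radioactive decay argument is simple enough to spell out. If every decayed
parent atom is banked as a daughter atom, the measured daughter-to-parent ratio
fixes the age; a minimal sketch (the function and the illustrative isotope values
are mine, not the book's):

    import math

    # Radiometric age from a daughter/parent isotope ratio.
    # N(t) = N0 * exp(-lam * t); daughters D = N0 - N, so t = ln(1 + D/N) / lam.
    def radiometric_age(daughter_per_parent, half_life_gyr):
        lam = math.log(2) / half_life_gyr      # decay constant, per Gyr
        return math.log(1 + daughter_per_parent) / lam

    # Illustrative numbers: uranium-238 decays (ultimately) to lead-206 with a
    # half-life of about 4.47 Gyr; equal parts daughter and parent = one half-life.
    print(radiometric_age(1.0, 4.47))  # ~4.47 Gyr, the order found for solar system rocks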
Thus our imaginary scientists would have good reasons to believe that their solar
system had a finite age of 5–10 billion years. This would put them in a difficult
position, for the evidently finite age of their solar system would be coupled with no

evidence whatsoever of any kind of initial state from which the solar system could
have evolved. Ultra-speculative theory sketches might be invoked to ease their minds.
Theorists might argue that the universe is really a system of causally disjoint realms in
which every possible configuration exists somewhere (there are stronger and weaker
versions of every possible configuration; for examples of the ultra extreme end of
the scale see Tegmark 2007b; Lewis 1986). Such a view is a radical extension of
theories that many of our cosmologists hold, but it is quite a stretch to believe in
arbitrary creation events which generate isolated nebulae, already properly stocked
with the right mixture of light and heavy elements so as to generate a life creating
solar system. Nor would the so-called anthropic principle be of much comfort, for our
imaginary thinkers find themselves not just in a universe suitable for life, they find
themselves in an exceptionally bizarre world that, among its many other oddities,
also happens to support life. It is not a world that observers would expect to find
themselves in, if one takes into account only the necessity for that world to support
intelligent observers, which is the most the anthropic principle could license.
It might well be that the most reasonable response would be to accept that the
particular world configuration in which they discover themselves was somehow an
explicit target of a mysterious creation event.
Thus, while at first it does not seem difficult to pretend that the universe might
consist of nothing but our solar system and indeed such a world does not in itself seem
to violate physical law, its existence would be exceedingly puzzling to its inhabitants.

2.2 Galactic Cosmology

What a contrast to the situation that we find ourselves in! As we'll see, in our world, at
every turn Nature has provided more clues which lead to a grand and unified account
of the structure, and possibly, creation of our universe. So far, we have worked
out a still incomplete but nonetheless remarkably detailed and coherent picture of
the cosmos. In complete contrast to our imaginary scientists of Lonely Earth, no
sudden complete explanatory roadblocks are ever thrown up against our theorists.
How curious.
No one knows when our ancestors started to think seriously and reflectively about
the nature and structure of the world, but it must have been a very long time ago. Evi-
dence of sophisticated astronomical knowledge goes back thousands of years; for
example, the Egyptian pyramids are exquisitely constructed to line up with com-
pass directions, with interior shafts that point to particular stars (as they would have
appeared at the time of construction). Such knowledge must itself have descended
from a long history of observation and thought. Possibly some (literally) hard evi-
dence of this early time comes from various peculiarly carved bones and antlers;
for example, the approximately 30,000 year old Blanchard bone appears to repre-
sent the lunar cycle (see Kelley and Milone 2005, pp. 95–96). These ancient efforts
to wring order out of the evident complexity of the night sky were immeasurably
magnified through the self-conscious application of mathematical representations of

planetary motion and the heavenly spheres. The codification of Ptolemy, the culmination
of a long history of Greek geometrical astronomy, managed to encompass
the entire visible universe in one rational system, albeit based upon a fundamental
geocentric misconception, and set us on the path to modern scientific cosmology.
The result, some 2,000 years later or more, is a spectacularly comprehensive and
beautiful account of the universe. In parts it remains tentative and more or less hypo-
thetical, but that befits a cosmology wishing to call itself scientific. Both its grand
scope and fully detailed intimate links with all the physical sciences are breathtaking.
If I could, I would have every child learn all their science via the study of this cosmol-
ogy, which could easily be retold with increasing depth at every level of education
and would naturally include everything from basic physics to biology. Of course, our
cosmology butts up against its limits at various places, and mystery there intrudes.
Why does anything exist at all? Why are there laws of nature? Why does mathematics
do such an amazing job of describing our world? But, arguably, these aren't even
parts of cosmology as a scientific discipline, although they are of course questions
that need consideration. Perhaps they are unanswerable, purely philosophical in the
good sense of being ultimate questions. The status of another question, which is the
topic of this book, remains unclear. Why and how does consciousness appear in the
universe? Is this question ultimate and philosophical or empirically answerable and
destined to be the crowning achievement of our scientific picture of the universe?
What seems so striking is how far off mystery can be pushed, especially as com-
pared to the imaginary situation of Lonely Earth discussed above. Let's work back-
wards from the situation in which we find ourselves. We have worked out the structure
of the Solar System by observation and already quite a bit of impressive direct physi-
cal investigation (every planet save Pluto has been visited by at least one robot probe,
and the New Horizons spacecraft is scheduled to arrive at Pluto in 2015).4 We find
that the Solar System cannot be much more than 5 billion years old. But that's no
great mystery. Our Solar System is embedded in a giant stellar system in which we
can now observe the kind of dust and gas clouds surrounding budding proto-stars
upon which the nebular hypothesis of solar system formation depends.5 We have
discovered a large number6 of extra-solar planets; solar systems are not rare. Ours
may be especially well behaved and well endowed to support life, but it is hardly
surprising that life will only appear in those solar systems conducive to it, and so,
out of the myriad of diverse solar systems in the galaxy, we must obviously expect
to find ourselves in one that is suited to our kind of life.
Our galaxythe Milky Wayis also a dynamic and evolving system. While one
might have conjectured that it was eternal, it could not be considered as unchanging.
If eternal, it must be following some kind of grand cycle of star death and rebirth.
For we know that the stars cannot shine foreverthey depend on a finite supply of
nuclear fuel, mostly hydrogen, which they burn through nuclear fusion. Paradox-
ically, the more fuel a star begins with, the shorter its lifetime. The big star's own
gravitational force compresses the hydrogen fuel, forcing it to burn more vigorously.
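
This inverse relation can be made roughly quantitative. Main-sequence luminosity
climbs steeply with mass, roughly as L ∝ M^3.5, while the fuel supply grows only
linearly, so lifetime scales as fuel over burn rate; a textbook rule of thumb (an
approximation I am supplying, not a figure from the text) is

    t \approx 10\,\mathrm{Gyr} \times \left( \frac{M}{M_{\odot}} \right)^{-2.5}

so a star of ten solar masses burns out in a few tens of millions of years, while
a small red dwarf can outlast the present age of the universe.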
What happens to a big star that runs out of fuel? Before burnout, it is the pressure
from the energy released via atomic fusion that prevents a star from collapsing under
its own weight. When its main fuel source is expended, a star shrinks and heats

by compression, initiating a new wave of atomic fusion, this time not of hydrogen
but of many heavier elements, from helium through to iron. Eventually there are no
new fusion routes left and the star suffers a catastrophic collapse. The compression
and shock wave from the collapse generates a truly immense amount of energy in a
very small amount of time. Such supernovas can briefly produce more energy than
entire galaxies. But more important from the cosmogenetical point of view is that the
exploding star spews forth almost the whole periodic table of elements into the inter-
stellar environment. The twin discoveries of stellar nucleosynthesis and the seeding
mechanism of supernova are two of the most significant discoveries of twentieth
century astronomy and astrophysics.
These discoveries answer one of the questions that would simply stump the
astronomers on Lonely Earth. They show how the complex mix of light and heavy
elements found in the initial nebula which seeded the Solar System was both formed
and put into our galactic interstellar space. Once we have the solar nebula in place
and properly stocked, we can look in two directions: forward towards the creation of
the planets, or backwards.
Looking back, we face the awkward question of the existence of the Milky Way
galaxy itself. For the supernova-condensation-supernova recycling of stellar material
would eventually lead to most stars being too small to explode. Material would thus
be locked away in low mass, cool ancient stars and white and brown dwarfs. A
certain percentage of higher mass stars would also lead to such lockup, since they
will evolve into neutron stars or black holes. But the galaxy we observe is not like
that. It has an interesting and varied mix of stars of all kinds with ages that seemingly
do not exceed about 10–15 billion years. A curious feature of the galaxy is that the
oldest stars are concentrated in a set of vast spherical collections of stars, called
globular clusters, that themselves orbit around the periphery of the galaxy. These
stars present two very striking properties: they contain extremely small amounts of
heavy elements and appear to be very old.7 The obvious reason for this is that they
are largely stars that formed early, out of primitive material that had not yet been
recycled through many generations of star formation and explosion. If the galaxy
was truly old and somehow could regenerate star formation, we would expect to
see the heavy elements spread more or less evenly throughout the stellar population.
Since we don't, it is hard to escape the conclusion that the galaxy itself was created
somewhat more than 10 billion years ago.
After its formation, the evolution of the galaxy proceeded as outlined, with succes-
sive waves of star formation (which may be linked in complex ways to the rotation
of the galaxy and its internal, spiral arm structure), leading to supernovas which
enriched the interstellar environment with heavier elements, which then condensed
into later generation stars, one of which is our Sun. But how was the galaxy itself
formed? Our best theories are, in essence, upscaled versions of the nebular hypothesis
of solar system formation; early stars and other condensations of mass in a univer-
sal gas cloud presumably condensed into proto-galactic structures from which the
hugely diverse range of galaxies and types of galaxies we now observe were created.
There now exist computer simulations of galaxy formation that do a pretty fair job

of recovering the kind of galactic forms we actually observe from primordial clouds
of gas and dust.8

2.3 Universal Cosmology

Our knowledge of galaxy formation is far from complete (the nature, role and even
the existence of so-called dark matter is just one area of intriguing ignorance), and
creation events are hard to observe since they happened so long ago (which is to say,
they are very far away and thus faint or obscured). But we can pursue our general
question nonetheless. The universe we can observe is a giant system of galaxies,
perhaps a quarter trillion or so in total (so roughly speaking there are about as
many galaxies in the observable universe as there are stars in our galaxy). Is this
system eternal? If so, then there must be some mechanism of galactic recycling
else all the galaxies should be populated only with very old stars. We observe no
such thing. Much more significant than this however, is the fundamental observation
that the fainter a galaxy is, that is, the farther away from us it is, the more rapidly
it is receding from us. Now, in fact we cannot directly measure the speed at which
a galaxy is moving away from us. What is observed is that the light from certain
galaxies is reduced in energy or "red shifted", a phenomenon somewhat analogous
to the Doppler shifting of the pitch of a sound source that is moving away from us.
And there is a systematic correlation between the amount of red shifting and various
indications of distance.9 This correlation between distance and recession speed was
discovered by Edwin Hubble and Milton Humason back in 1931, though at the time it
was viewed with suspicion, not least because Einstein had devised a model in which
the universe was static.
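
In standard notation (a sketch of the textbook relations, not the book's own
formalism), the red shift z is defined from the emitted and observed wavelengths,
approximates a velocity for nearby sources, and obeys Hubble's law:

    z = \frac{\lambda_{obs} - \lambda_{emit}}{\lambda_{emit}}, \qquad v \approx cz \;\; (z \ll 1), \qquad v = H_0 d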
But other theorists, Friedmann and Lemaître, had already developed models
of an expanding universe which could fit the increasingly convincing and accurate
determinations of the expansion rate (codified in a number somewhat peculiarly
named the "Hubble constant", though it is neither fundamental nor constant). Since
the 1930s evidence has mounted in support of the expanding universe hypothesis and
against the idea that the galactic structure of the universe is eternal. For example, if
we survey very distant galaxies (that is, galaxies that exhibit a very high red shift)
we find that the statistics of galactic morphology are notably different from those
of closer, older galaxies. In particular, there seem to be very few spiral galaxies in
the survey which itself suggests that perhaps the typical spiral arm structure evolves
from the evidently less structured elliptical forms.
If we take the Hubble constant seriously it is natural to think that all the galaxies
were closer together in the past and that if we trace back far enough, all the matter
in the visible universe must have been confined in a very small volume. It is then
not difficult to conceive of the universe as coming into existence from a very small
but, to understate it rather severely, highly energetic event. This hypothetical event
came to be called the "big bang" (originally a pejorative term coined by Fred Hoyle,
who championed a theory in which the universe was infinitely old). In fact, it is a
striking fact that the form of the relation between increasing velocity and distance is

linear, which has two interesting consequences: no matter where you are it will look
as if everything is moving away from you as the centre, and there must have been
a time in the past when everything was close together (for a nice discussion of this
and other aspects of Hubble's work see Sandage 1989).
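
The linearity also licenses a famous back-of-envelope estimate: since every galaxy
satisfies d = v/H0, the time d/v since everything was together is the same for all
of them, namely 1/H0. A quick sketch of the arithmetic, using an assumed modern
value of H0 (the 10–15 billion year figures quoted in this chapter reflect the
larger uncertainties of earlier decades):

    # "Hubble time": 1/H0, the age of the universe if the expansion rate
    # had always been its present value. The H0 value here is illustrative.
    KM_PER_MPC = 3.086e19       # kilometres in one megaparsec
    SECONDS_PER_GYR = 3.156e16  # seconds in a billion years

    H0 = 70.0 / KM_PER_MPC                     # 70 km/s/Mpc, converted to 1/s
    hubble_time = 1.0 / H0 / SECONDS_PER_GYR
    print(f"{hubble_time:.1f} Gyr")            # ~14 Gyr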
However, the expansion of the universe as evidenced by galactic recession is far
from proof that the big bang occurred or that the universe is not eternal. Hoyle and
others accepted the expanding universe but denied the existence of any creation event
(for Hoyle at least, this was in part motivated by an anti-religious sentiment which
decried the possibility of something like a creation event). The cost was to posit
the continuous low level creation of matter and some sort of cosmic repulsion that
drove expansion. The amount of fresh matter needed to maintain the universe in a
Steady State (as the theory of Hoyle, Hermann Bondi and Thomas Gold came to be
called) is actually very small, about 10⁻⁴⁰ g/cm³/s, and if uniformly spread out
would be completely undetectable. You would certainly not expect to see tangible
lumps of matter just appearing out of nowhere every so often. From whence derives
the energy which through all eternity drives the matter creation field remained an
open question. Any genuine scientific theory must posit observable consequences,
and the steady state theory implies that any uniform sample of galaxies visible to us
will have the same mix of ages. It is the Hubble law which allows us to take such a
sample. A large collection of galaxies with the same red shift will form such a uniform
sample, and thus should exhibit the same mix of characteristics as a large collection of
nearby galaxies. But, as noted, they do not; rather they exhibit characteristics which
are highly suggestive of a prior state of galactic evolution. There is a fair amount of
other evidence leading in the same direction.
The most decisive evidence against an eternal universe turned up before the kind
of galactic evolutionary observations we have been discussing were very numerous
or robust. This is the famous Cosmic Microwave Background Radiation or CMB.10
The CMB was accidentally discovered (although it had been predicted by some astro-
physicists) by Arno Penzias and Robert Wilson of Bell Laboratories in 1964 as they
worked on a radio astronomy project with an antenna released from corporate needs.
The CMB is a bath of electromagnetic radiation which can be observed emanating
from everywhere in the universeno matter where you point your antenna you will
observe it (some of the snow on an empty television channel is caused by the CMB).
The CMB is a direct prediction of the big bang model; what is more the prediction
is that the structure of the CMB should be identical to the electromagnetic emis-
sions of an object in thermal equilibrium at a temperature of about three degrees
above absolute zero. This very low temperature represents the remnant of the big
bang which has been cooling down for the last 10–15 billion years. The cooling is
the result of the billions of years of expansion of the universe in a process somewhat
analogous to the cooling of an expanding gas. The thermal nature of the CMB is also
one of the most spectacularly accurate predictions ever made. In 1989 the Cosmic
Background Explorer (COBE) spacecraft was launched with the aim of measuring
the CMB much more precisely than ever before. Figure 2.1 shows the spectacular
result of one of the COBE instruments which provides the overall spectrum of the
CMB.

Fig. 2.1 Cosmic background radiation spectrum

Actually, the curve portrayed in the figure is the theoretical spectrum for a black
body (an object that absorbs all radiation perfectly, emitting only thermal radiation).
The COBE data consisted of many data points strung out along this curve, as illus-
trated. The data points themselves should by rights be invisible since the error in the
measurements is far less than the width of the curve at this scale!11
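
The shape and peak of that curve can be sanity-checked with one line of arithmetic.
Wien's displacement law locates the peak wavelength of a black body at λ = b/T; a
sketch using standard constants (not values read off the figure):

    # Wien's displacement law: peak wavelength of a black body spectrum.
    WIEN_B = 2.898e-3   # metre-kelvins, the Wien displacement constant
    T_CMB = 2.725       # kelvins, the measured CMB temperature

    peak = WIEN_B / T_CMB
    print(f"{peak * 1000:.2f} mm")  # ~1.06 mm: squarely in the microwave band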
Of course, there are no crucial or decisive experiments in science. Although the
CMB is exactly what a hot big bang would lead us to expect, versions of the steady
state theory can always be maintained in the face of this evidence. Theorists can
postulate that not only matter but radiation is continuously being created and that
it is, somehow, being "thermalized" so as to take on the characteristics of black
body radiation. For example, a universe in which tiny iron whiskers are uniformly
scattered about would serve to thermalize background radiation (see for example
Narlikar et al. 2003).
Such a model seems rather ad hoc, but there remains some observational evidence
against the standard big bang model, of which the most interesting (both scientifically
and from a science studies point of view) is the problem of discordant red shifts.
Halton Arp is infamous in astronomy for persistently discovering what he claims
to be physical associations between objects that the big bang model asserts cannot
be associated, typically a low red shift galaxy seemingly coupled to a high red
shift quasar.12 Arp has cataloged a host of discordant red shifts (see Arp 2003) of
various kinds and personally believes that some kind of new physics is necessary
to explain them, and that the standard big bang model must be rejected. He, and a
few other mavericks, remains adamant, but nowadays receives little attention from
mainstream astronomers.
For the big bang model has many other fundamental pieces of evidence in its
favour. A rather bizarre one which nicely reveals how seemingly disparate aspects of
physical theory interact is the supernova decay dilation effect. Supernovae have been
studied in sufficient numbers that we have mapped out certain characteristic patterns

of light emission that the various types tend to produce. There are two main kinds of
supernova (types I and II) and they each have their own light curves. In crude terms,
a schematic example of the comparison of a nearby type I curve with that of two
high redshift supernovae would reveal that the latter curve is stretched out.13
Why should this effect occur? Are the laws of physics different a long way away,
or a long time ago? That's always possible, but there is no other evidence for revising
physics in a spatial and temporal way. On the other hand, if the universe is expanding,
so that distant supernova have a very high relative velocity to us, then we would expect
to see the effects of special relativity. Famously, one of these is time dilation: the
faster an object moves relative to us, the slower its clocks will run relative to ours.
By "clocks" here is meant any physical process by which time could be measured
(including ordinary clocks, the human body as it ages, hence the "twin paradox",
or any other process of change). The expanding universe hypothesis provides a very
elegant explanation of the otherwise anomalous light curves of distant supernova
(see Leibundgut 1996).
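
In the standard cosmological treatment the stretching factor follows directly from
the red shift (a schematic statement; the cited paper gives the detailed analysis):

    \Delta t_{obs} = (1 + z)\, \Delta t_{emit}

so a light curve segment lasting 20 days in the supernova's own frame should be
observed to last about 30 days at z = 0.5, and stretching of that kind is what the
surveys report.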
Yet another crucial piece of evidence is the big bang models prediction of the
ratios of the light elements, especially the hydrogen to helium ratio (see Peebles,
Chap. 6). Stars can and do produce helium but the same ratio of hydrogen to helium is
found both in galaxies with lots of heavy elements (hence galaxies which have a long
history of nucleosynthesis) and those with very small amounts of heavy elements. So
the amount of helium observed does not seem to be a product of the gradual creation
and dissemination of helium from successive stellar generations. However, under the
hot big bang model, we can deduce that there was a period (about 1 s after the big
bang event) when neutrons and protons were no longer transforming into each other
under various high energy processes, but were instead free particles, though still
highly energetic, mixing in an extremely hot gas.
The initially almost exactly equal numbers of neutrons and protons would begin
to tip in favour of the proton as neutrons naturally decay (into a troika of a proton,
electron and neutrino), but as soon as the temperature dropped enough there would
also be rapid neutron–proton fusion into deuterons, which would in turn fuse with
protons into helium three and, by gaining another proton, into helium four (the stable
form). The prevalent conditions ensured there was little or no fusion of any heavier
elements through incremental processes then at play, though traces of lithium and a
few other elements were formed. From the rate of neutron decay and the temperature
bounds permitting fusion to begin, the ratio of protons to neutrons in place when
fusion begins can be calculated. The ratio turns out to be about 13 protons for every
2 neutrons, which means that once all the neutrons are safely ensconced in helium
nuclei (where they are safe from decay), for each helium nucleus, which contains
two protons, there will be around eleven hydrogen nuclei (each a lone proton). By
mass, helium will end up forming just under 25 % of the total, hydrogen around 75 %
with a few light elements and deuterium (the isotope of hydrogen which contains one
neutron in addition to one proton) filling up the rounding errors. It is a spectacular
confirmation of the big bang model that this does indeed seem to be the primordial
ratio of hydrogen to helium (the observed ratio is a little higher because of the
contribution of extra helium from fusion occurring in stars since the big bang).
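
The bookkeeping in this paragraph is easy to check; a minimal sketch (neglecting
the small neutron-proton mass difference, with function and numbers of my own
choosing):

    # Primordial helium mass fraction, assuming every neutron ends up
    # locked in a helium-4 nucleus (2 protons + 2 neutrons each).
    def helium_mass_fraction(protons, neutrons):
        helium_nuclei = neutrons / 2.0
        free_protons = protons - 2 * helium_nuclei
        helium_mass = 4 * helium_nuclei        # in units of the nucleon mass
        return helium_mass / (helium_mass + free_protons)

    print(helium_mass_fraction(13, 2))  # ~0.27 for the 13:2 ratio quoted above
    print(helium_mass_fraction(14, 2))  # exactly 0.25 for 7 protons per neutron

Depending on the precise ratio fed in, the answer lands at or near the
quarter-by-mass figure, which is the point of the calculation.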

The present amount of deuterium in the universe is also significant for the big
bang model. It is a highly curious fact that deuterium exists in the universe at all.
Deuterium is easily ignited, which is why it is a favoured fuel for our perennially
prospective fusion reactor programs (as the old saw has it, fusion is the energy source
of the future, and always will be), and hence is rapidly consumed by the fusion
processes within the stars. If the universe was eternal, there would have to be some
process of deuterium creation (the random creation of neutrons and protons would
not suffice, since they would not likely be able to combine and free neutrons decay in
about 15 min). But if the conditions of the big bang are appropriate, a small amount
of deuterium can remain after the helium has frozen out. There doesn't seem to be
any other recognized process by which deuterium could be created. Observations of
inter-galactic dust clouds, which hopefully represent the primordial distribution of
matter, suggest that the early universe ratio of deuterium to hydrogen was about 1 to
10,000.14 That deuterium had to come from somewhere, and it is hard to see where
except for a period of nucleosynthesis of exactly the sort predicted by the big bang
hypothesis.
Compare once again the situation of our astronomers and astrophysicists with their
counterparts on Lonely Earth. They were driven by the data to embrace a picture of
the universe in which the conditions for the development of the solar system simply
appeared out of nowhere within a universe in which that appearance made no sense
at all. The discussion above highlights how the imaginary scientists would find a
kind of ultimate mystery almost everywhere they looked, once they looked deeply
enough at any rate. They would find, since we supposed that their Earth and ours
were qualitatively identical, that the water in their oceans contains not insignificant
amounts of deuterium (about one water molecule per 5,000 would be heavy water). They would have no explanation for where this deuterium came from; it was just
salted in the initial solar nebula. Perhaps they would infer that a beneficent god must
have stocked the oceans in preparation for the shift towards fusion power.
In spectacular contrast to this, our scientists have found a rather grand, deeply
complex and inter-related account of the creation of the solar system as part of an
entire cosmos, in which a great many lines of evidence all point in the direction
of the hot big bang hypothesis. Mystery remains, of course, as well as any number
of scientific puzzles. But it appears that the mystery is being driven into that small
corner of the room where rather abstract and philosophical issues get discussed. Why
is there something rather than nothing? Why do we have laws of nature, and why did
we get just these laws of nature? These mysteries face the Lonely Earth scientists
too; it is just that they also have an elephant in the room.
As our scientists close in on the big bang via current remnants of the early stages
of the universe another feature becomes apparent. I will call this the simplicity of the
early universe. All science except for physics ceases to apply to the universe as we
move back towards the big bang event. Thus between the creation of the universe
and the present time the objects, processes and, perhaps, the laws of all the sciences
beyond physics must have emerged out of conditions to which those sciences do
not apply. This is a significant fact to which all the lines of evidence we have been
considering point.¹⁵
The extremely early stages of cosmic evolution are not well understood because
they require physics not yet in our possession. Thus there is one cosmological epoch
which remains completely mysterious; it takes up the time from 0 to 10⁻⁴³ s, a duration called the Planck time, and an associated size of about 10⁻³⁵ m called the
Planck length. The density of the universe during this time is so high that gravity
would be expected to have significant effects even at the ultra microscopic level of
particle physics. But we have no theory which integrates gravitation with quantum
mechanics, though there are several candidate approaches (see Smolin 2001; Greene
1999 for popular expositions).
However quantum gravity works, it takes almost no time at all for the universe
to expand (and thus for its density to fall) sufficiently that gravity no longer has any
microscopic effects, as its strength is swamped by the strength of other forces (of
which, after a peculiar cascade of emergence we will consider, there are eventually
three: the strong nuclear, the weak nuclear and the electromagnetic). We then enter
what is sometimes called the GUT epoch. GUT stands for Grand Unified Theory,
which sounds impressive. Unfortunately, there is no such theory which has been
empirically verified. But it is thought that we know quite a bit about how such a theory
should look. Supposedly, this is the period when there are but two forces, gravitation
and a single GUT force which incorporates the nuclear strong force, the nuclear weak
force and the electromagnetic force. This period lasts for only a very short time, until
about 10⁻³⁵ s after time 0. The GUT ought to be some kind of extension of a theory
we do have in place, the electroweak unified theory independently developed by
Sheldon Glashow, Abdus Salam and Steven Weinberg (for which they won the 1979
Nobel prize). The most disquieting prediction of GU-theories is that the proton is
not stable, though possessed of a half-life trillions of times longer than the age of
the universe. Current observations show no sign of proton decay, however,¹⁶ and the
traditional GUT construction is no longer seen as very promising, but there are other
candidate theoretical approaches that promise the same kind of unification.
As the GUT epoch began another rather bizarre and mysterious process seems
to have taken place: cosmic inflation (see Guth 1997 for a first hand account of
the development of this idea). This was an almost instantaneous increase in the
scale of the universe by many orders of magnitude. The main evidence for inflation
stems from both the overall and the detailed structure of the CMB. Overall, the
CMB is almost perfectly isotropic, as if the system from which it emerged was
pretty much in thermal equilibrium. Without inflation this is hard to fathom. We sit
at the notional center of a universe which has been expanding for, say, 12 billion
years. So, ignoring some relativistic niceties, photons from the CMB from regions
on opposite sides of the universe which we capture today are separated by 24 billion
light years. Since no causal influence propagates faster than the speed of light it is
impossible for these photons to have been in causal contact since their creation. No
matter how far back in time you go, if the universe has expanded uniformly (as the Hubble relation suggests) there could never have been a time when these regions
were causally connected. Yet they have almost identical characteristics (to about
one part in 100,000). This is called the horizon problem. Inflation solves it on the
assumption that a huge expansion occurred after an initial thermal mixing. But this
early state did not, and could not, reach perfect equilibrium and thus whatever residual
inhomogeneities remained would themselves have been stretched out during cosmic
inflation. As Andrew Liddle rather quaintly puts it: "inflation is trying as hard as it can to make the Universe perfectly homogeneous, but it cannot defeat the Uncertainty Principle which ensures that there are always some irregularities left over" (Liddle 1999). Inflation predicts that, roughly speaking, the fluctuations in the CMB should
be very nearly the same at all frequencies. So far, our observing instruments have
verified the isotropy of the CMB to a high degree, and have more recently turned
to examining the minute fluctuations (notably via the densely acronymic COBE and
WMAP spacecraft and DASI instruments as well as the BOOMERANG balloon
laden instruments in Antarctica). What we see thus far is quite in line with the
predictions of cosmic inflation.
Inflation neatly answers another question: why is the universe flat (that is, why
is the current geometry of the universe very near, if not exactly, Euclidean)? The
anthropic principle gives one answer: a typical non-flat universe would either col-
lapse too quickly for life to form, or have such a sparse matter density that the
preconditions for life (such as stars and solar systems) would not be able to arise.
Therefore, it is to be expected that we shall observe ourselves to be in a flat cosmos.
Like all anthropic explanation, this does not in fact explain why the universe is
flat. Cosmic inflation does. Just as blowing up a balloon makes regions on its surface
more nearly flat, so too inflation's blowing up of the cosmos makes its regions more
nearly Euclidean. Without inflation the density of the universe must be postulated
(it is not fixed by the big bang itself so far as anyone knows) as being between
0.999999999999999 and 1.000000000000001 times the critical density at the age of
1 s (see Guth 2002).
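
To see why so delicate a balance is required, consider the back-of-the-envelope sketch below. It relies on the standard textbook scalings for how a departure from the critical density grows over time (|Ω − 1| growing in proportion to t while radiation dominates, and to t^(2/3) thereafter); those scalings and the epoch boundaries are assumptions I am supplying, not figures from the text.

```python
# A rough sketch of the flatness problem, using the standard scalings
# |Omega - 1| ∝ t (radiation era) and ∝ t**(2/3) (matter era). The
# scalings and epoch times below are textbook assumptions of mine,
# not figures quoted in the passage above.

t_start = 1.0         # s: where the 10^-15 tuning is quoted
t_equality = 1.6e12   # s: rough radiation/matter equality (~50,000 years)
t_now = 4.3e17        # s: rough present age of the universe

deviation_start = 1e-15   # |Omega - 1| permitted at t = 1 s

growth = (t_equality / t_start) * (t_now / t_equality) ** (2 / 3)
print(f"growth factor since t = 1 s: {growth:.1e}")          # ~7e15
print(f"|Omega - 1| today: {deviation_start * growth:.1f}")  # of order 1
# Any coarser initial tuning would have grown into a universe today that
# was nowhere near flat; inflation removes the need for the tuning.
```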
Inflation blew up the scale of the universe by a huge amount, perhaps 70–100 orders of magnitude. One might wonder why the universe was not left a virtually empty and cold vacuum after blowing up some 10¹⁰⁰ times in size. Inflation itself deposited huge amounts of energy into the universe,¹⁷ leaving it a fairly dense, roaring hot stew of elementary particles, just as in the basic big bang model (it has thus been said that inflation is "a bolt-on accessory" (Liddle 1999) to the hot big bang theory). The inflation happened so fast that it did not extend across any significant physical changes. Inflation began in the GUT epoch and ended as the quark epoch
began. This marks the time when the temperature fell low enough to permit the
appearance of the most elementary of particles, the quarks and their gluons which
mediate the strong nuclear force. The quark-gluon plasma characteristic of this epoch
has been recreated on Earth (in ultra small measure for a fleeting instant), initially at
Brookhaven Laboratories in New York state¹⁸ and now at the Large Hadron Collider
in Europe. Further cooling reduced the energy of particle interaction and the universe
passed through various epochs (cosmologists differ somewhat on the number of these
worth distinguishing).
First, at about a millionth of a second after the big bang, comes the hadron epoch, in which quarks became bound into the class of composite particles called hadrons, such as the familiar protons and neutrons, as well as a host of mesons. While the laws of
nature appear to be almost perfectly symmetrical with respect to the production of
matter and anti-matter, there is a very slight, and not well understood, asymmetry
which led to the overproduction of matter baryons (by convention, what we are
made of is called matter rather than anti-matter). Next, approximately 1 s after the big bang, comes the lepton epoch, in which electrons, neutrinos and muons take
on an independent rather than the merely ephemeral existence they enjoyed before
(during which they were being continuously created and destroyed). At this point
neutrinos more or less stop interacting with other forms of matter. The electrons are
now free to combine with protons to begin the formation of atoms. This leads to the
epoch of nucleosynthesis, as discussed above.

2.4 Cosmology and Emergence

Of special interest is a sequence of phase transitions that occur through these epochs, each one marking a loss of symmetry and the separation of forces that were
heretofore unified. During the initial, almost entirely mysterious Planck epoch at
the very beginning of things, all the forces were unified as described by the elusive
theory of everything. The decoupling of the forces occurred via a complex process
of spontaneous symmetry breaking and mass creation. This is best understood in
the case of the decoupling of the electromagnetic and weak forces. When unified,
according to the electroweak theory of Glashow, Salam and Weinberg, the weak nuclear
and electromagnetic forces have the same strength, and form a single electroweak
force. The phase transition occurs via a transformation of a field, called the Higgs
field after its inventor,¹⁹ which imbues the particles (three kinds of bosons) mediating
the weak nuclear force with mass. The range of a force is inversely proportional to
the mass of its exchange bosons, and one distinguishing characteristic of the weak
nuclear versus electromagnetic force is that the latter has infinite range whereas
the former is effective only over very short ranges. The exact mass acquired via
the Higgs mechanism is not directly fixed by the theory but depends upon a free
parameter (the Weinberg angle) whose value can be measured empirically. But the
electroweak theory does tell us how to relate this parameter to the ratio of the expected
masses. It was a great triumph of the electroweak theory when the predicted bosons
were found, with the proper masses, in 1983/1984 (a first-hand account of the
complexity and difficulties of this experimental episode can be found in Watkins
1986).
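
The inverse relation between boson mass and force range mentioned above can be made quantitative with the standard Yukawa estimate, range ≈ ħ/(mc). The calculation below is a sketch of mine using that estimate; the W boson mass is the modern measured value, which is not given in the text.

```python
# A minimal sketch of the mass-range relation: range ≈ ħ/(m·c), the
# standard Yukawa estimate. The W boson mass is a measured value I am
# supplying here; it does not appear in the text above.

hbar_c = 197.327           # ħc in MeV·fm (a standard physical constant)
m_W = 80_400.0             # W boson rest energy in MeV (~80.4 GeV)

range_fm = hbar_c / m_W    # range in femtometers
print(f"weak force range ≈ {range_fm:.1e} fm = {range_fm * 1e-15:.1e} m")
# ~2.5e-18 m, roughly a thousandth of a proton's radius. A massless
# exchange boson (the photon) gives an infinite range by the same
# estimate, matching the contrast drawn in the text.
```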
It is conjectured that via similar processes all the forces decoupled sequentially
as the universe cooled from an initial unified state. Two features of this cascade are
particularly important to us. The first is that it seems there will be no way to predict
from knowledge of the unified state the exact values of the relevant parameters of
the post-unified state. The process of spontaneous symmetry breaking involves
a random element. By way of analogy, consider the properties of a bar magnet.
An iron bar magnet will lose its magnetism if heated above 770 °C (what is called iron's Curie temperature). The atomic/electronic spins are disordered by thermal
agitation above this temperature. As the bar cools magnetism returns and the magnetic
field of the iron will be oriented in a definite direction, though there is nothing
which forces any particular magnetic orientation. Certain patches of order appear
and begin to crystallize, forcing order upon nearby regions, until eventually a single
magnetic orientation emerges. That is, there is a spatial symmetry which is broken by the
imposition of a definite magnetic orientation. The laws of nature do not distinguish
any spatial direction as special even after the magnetic orientation is fixed by
accident (or design).
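
The bar-magnet analogy can even be played out on a computer. The toy simulation below is my own sketch, using the textbook two-dimensional Ising model rather than anything in the text: it cools a small grid of spins below its critical temperature, and each run settles into a net "up" or net "down" magnetization, with which of the two fixed only by random fluctuations.

```python
# A toy 2D Ising model (my own sketch, not from the text) illustrating
# spontaneous symmetry breaking: below the critical temperature a net
# magnetization emerges, but its direction is settled by chance.

import math
import random

N = 20            # N x N grid of spins, each +1 or -1
T = 1.5           # temperature in units of the coupling (critical T ~ 2.27)
STEPS = 200_000   # single-spin Metropolis updates

spins = [[random.choice((1, -1)) for _ in range(N)] for _ in range(N)]

def neighbor_sum(i, j):
    # sum of the four nearest neighbors, with periodic boundaries
    return (spins[(i + 1) % N][j] + spins[(i - 1) % N][j]
            + spins[i][(j + 1) % N] + spins[i][(j - 1) % N])

for _ in range(STEPS):
    i, j = random.randrange(N), random.randrange(N)
    dE = 2 * spins[i][j] * neighbor_sum(i, j)   # energy cost of a flip
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] *= -1                       # accept the flip

m = sum(map(sum, spins)) / N ** 2
print(f"net magnetization per spin: {m:+.2f}")
# Repeated runs land near +1 or -1 with equal frequency: the ordered
# direction emerges rather than being dictated by the laws in play.
```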
The creation of the magnetic field of a bar of iron by spontaneous symmetry
breaking is thus a case of emergence which is unpredictable in principle. Similarly,
the relative strengths of the four forces and the masses of the force-carrying particles may not be set by nature but may instead emerge through a random process. No matter
how much one knew about the symmetric phase, it would be impossible to predict
these values. Of course, we do not really know that these values are randomly fixed by
the free evolution of the early universe. It is possible that deeper theory will show how
they are dictated by physical law (for example, typical GUTs fix the free parameter
of the electroweak theory mentioned above, while leaving other parameters unfixed).
It is far too early to have any confidence about what the final theory of everything
will look like or even whether we shall be able to construct one. The mechanism
of spontaneous symmetry breaking discussed here serves to illustrate one possible
kind of emergence. More important, the discussion illustrates the status of the post-
lepton epoch physics. No matter what form the theory of everything takes, a strong
constraint on it will be to generate the state of the universe a few seconds after the big
bang. Short of a truly revolutionary shift in astronomy and astrophysics, the big bang
model of the later universe (after 1 s that is) is now a part of standard knowledge.
And we have seen how our astronomers, in such stark contrast to the imaginary
scientists of the lonely Earth thought experiment, have at every stage found both
satisfying answers to current questions about the nature and structure of the universe
and Earths place within it as well as answers that fit together into an elegant total
picture.
Picture then the state of the universe when approximately 1 s old. Is there any-
thing missing from our cosmological picture? Considered from the standpoint of the
recognized scientific disciplines, almost everything. There is no chemistry (organic
or even inorganic), no biology, no psychology and no sociology. Is this surprising?
From a purely physical point of view, not at all. Conditions of the universe at this
time are, by Earthly standards, very extreme. The temperature was something like
one billion degrees and it was quite impossible for atoms to form. The formation of
stable atomic nuclei was just becoming possible as the temperature dropped below one billion degrees. There was then a small window of opportunity where density and temper-
ature permitted the synthesis of hydrogen and helium nuclei (as described above).
Thus it is a natural fact that the only science which applies to the universe at the age
of 1 s is physics. As we move back in time closer to the big bang itself, we encounter
more exotic, eventually frankly speculative, physics, but it's physics all the way back.
However, as we pursue time away from the big bang towards the present, we
obviously enter the domain of application of all the other sciences. Thus, for example,
chemical and biological entities and processes (such as molecules, bonding, living
organisms and natural selection) must be in some sense emergent phenomena. A little more precisely, any phenomenon which appears in a system which heretofore
did not exhibit it can be labeled diachronically emergent. The cosmological tale as
we now conceive it must be a tale of diachronic emergence.
My labeling naturally suggests a second form: synchronic emergence, which can
be roughly defined in terms of properties possessed by a system which are exemplified
by none of its components. The cosmological story told here supports synchronic
emergence no less than the diachronic form. It seems quite evident that the physical
entities which exist at 1 s after the big bang have no biological properties. Yet some
12 billion years later we know that at least a few systems possess such properties.
The extent of biology in the universe is one of the most interesting unanswered
questions facing science today. People have wondered about exobiology for a long
time, but before long we will be mounting serious efforts to grapple with this issue. It
is on the edge of technical feasibility for us to launch space-based observatories that will be able to analyze the atmospheres of the planets of nearby stars. As James Love-
lock pointed out some time ago (Lovelock 1965), the presence of life, at least life as
we know it, on a planet has profound effects on its atmosphere, maintaining it in a
non-equilibrium state that is a signpost of biological activity. As I write this, the pres-
ence of methane in the atmosphere of Mars has been detected. Because of inevitable
chemical reactions, methane cannot remain for long in a planet's atmosphere unless
it is being replenished. Since Mars shows no signs of active volcanism (one of the few
possible alternative sources of methane), it is possible that we have already indirectly
detected life on Mars.
In any case, the obvious diachronic emergence of biology is coupled with its
synchronic emergence: we do not think that the physical components of living things
are themselves carriers of biological properties. This assumes that the basic structures
of the world have not themselves acquired biological properties at some time after
the big bang. Does it seem plausible to suppose that although the protons that existed
1 s after the big bang did not exemplify any biological properties, these very same
protons now do carry biological properties? On the contrary, there is no reason at
all to think that the properties of protons have changed in the slightest across the
billions of years since the big bang.²⁰ There is no need to ascribe the properties
of any science save physics to the early state of the universe. It follows that all
scientific properties save those of physics are at least diachronically emergent and in
all probability synchronically emergent as well.
Emergence will be extensively discussed below. The lesson of this chapter is that
the overall scientific picture of the universe to which we have been led by abundant
evidence strongly supports emergence, in some sense yet to be fully clarified. But
before turning to the nature of emergence itself, I want to examine another line of
evidence that also supports emergence, but this time in the narrower domain of life
and mind rather than the whole universe.
Chapter 3
Looking In

3.1 Jelly Beings

Everyone can remember how the Scarecrow would time and again save Dorothy,
Toto, the Tin Man and the Lion with his quick wits and resourcefulness. And he did
it all despite the apparently serious impediment of entirely lacking a brain. Although
his head was stuffed with straw, he gave convincing evidence of a host of mental
attributes. Perhaps, as Wittgenstein said, it is enough simply to behave like a living
human being to merit the ascription of consciousness (see Wittgenstein 1953, §281).
Such a remark invites a variety of possible thought experiments analogous to that of
the lonely Earth in the previous chapter, albeit ones that are still more fanciful. It does
indeed seem in some sense possible that we could have discovered that human heads
were stuffed with straw, or, a little more plausibly, a completely undifferentiated jelly.
Or we could go further. Let's try to imagine a world where science progresses much
as it did on Earth, except for the bizarre changes necessary to support this extension
of the Wittgensteinian thought experiment. So far as anyone can tell, the insides of all animals are just a kind of undifferentiated jelly (except for the bones, say). Call
these creatures jelly beings.
So, try to imagine the slow advance of science proceeding pretty much as it did
here on Earth, with the bizarre additional fact that all animal life forms are mere jelly
beings. To be sure, anatomy will never amount to much in our imaginary world, but
the physical sciences can progress normally. But as scientists tried to integrate their
knowledge into as complete and coherent a picture as possible they would face an
anomaly much more severe than the one which faced our imaginary astronomers of
Lonely Earth. Though apparently, in some attenuated sense, possible, jelly beings
make no sense at all. Very abstract theories, such as thermodynamics and evolution,
might be applied to them, but with the devastating addendum that there would be
no explanation at all of how these abstract conceptions were implemented within
the jelly beings. Perhaps a counterpart of Mendel could discover the statistics of
inheritance in this strange world, but how could there be any reasonable account of
the mechanism of heredity? It is all just jelly inside, amorphous and, by hypothesis,
not analyzable in the chemical terms suitable for non-animal nature.
Biological explanation would thus run up against a solid brick wall. The cognitive
and neurosciences would suffer the same fate. We have stipulated that, within the
thought experimental situation, animals, including humans, behave just as they do
in the actual world, but unfortunately there would be no detectable causal structures
mediating perception or action. I won't dispute Wittgenstein's claim that such a discovery would not affect our attributions of mentality to others or ourselves (how could it affect self-attribution in the face of continued consciousness, after all?). But
there would be no theoretical purchase on how consciousness is implemented in
the head. Obviously this would give aid and comfort to dualists and other non-
naturalists about the mind. In this fantasy, humans and animals would really possess
the anatomical uniqueness so often sought for and claimed, but the price would have
been the loss of all hope for any scientific biology, psychology or cognitive science.
In the real world, things went very differently. From very ancient times, we have
known something about the complex and articulated internal structure of animals,
and doubtless there were any number of conjectured links between the internal organs
and the mental or spiritual characteristics of things. The role of the internal organs
was given at least a proto-scientific veneer with the humoural theory of the ancient
Greeks. This account saw the four humours (the bodily fluids of blood, phlegm,
black and yellow bile) as working together within a kind of homeostatic system, and
so, to a limited degree, caught an important truth. Of course, gross misconceptions
abounded. Aristotle famously held that the brain was a peripheral organ whose lowly
function was the cooling of the blood and heart, and while such a view would have
meshed with the jelly beings scenario, his outlook did not survive for long in the real
world. The essential role of the brain in mental functioning has been long recognized,
althoughunderstandablyincompletely and inaccurately. Perhaps the Presocratic
philosopher Alcmeon was, around 450 BCE, the first to state in at least a quasi-
scientific way the link between mental functioning and the brain. He is reputed
to have performed dissections of the eye and optic nerve which led him to posit
the general principle that all the senses are connected to the brain (see Barnes
1983, pp. 478 ff.). Somewhat later, around 400 BCE, Hippocrates made the stronger
claim that the brain is "the organ which enables us to think, see, and hear, and to distinguish the ugly and the beautiful, the bad and the good, pleasant and unpleasant" (Gross 1999, p. 13). The importance of the brain was codified by Galen
(circa 180 CE) whose medical insights presided over the Middle Ages (see Gross
1999). Theologically minded medieval thinkers (as they all were) largely agreed
that the seat of the soul was in the brain, and indeed they frequently localized it
more specifically, favoring the ventricles as likely places where mind (soul) and
body communed together (following on this tradition, it is perhaps little wonder that
Descartes took as one of his tasks the locating of the exact place in the brain where
the immaterial mind interacts with matter; see Zimmer 2004 for an extensive account
of the slow recognition of the importance of the brain for mental function).
Our jelly beings thought experiment is too crude and extreme to be worth pursu-
ing. But no less than in the astronomical thought experiment, the real world contrasts
starkly with the imaginary in the way multiple lines of evidence arise and come
together in aid of the construction of a unified account of the phenomena at issue.
From an abstract vantage point, the big bang hypothesis describes the way in which
physical law was implemented in one particular universe (or sub-universe: the region
we call our universe or the observable universe). On the side of biology, the overar-
ching theory is not physics, but evolution, and biology on Earth is the tale of how
evolution has been implemented here.

3.2 Cell Theory and Organic Molecules

From this standpoint, the nearest thing to a big bang hypothesis in biology is the
discovery of the genetic mechanisms of cells. In turn, we can break this down into
the cell theory of living organisms and the much later idea of an internal cellular
mechanism of heredity (still later found out to be DNA based). Of course, the cell
theory is directly at odds with the jelly beings thought experiment. Instead of a solid
brick wall through which explanation cannot pass, the cellular structure of living
things provides a highway into both the current functioning and evolutionary history
of life on Earth, and then joins an expressway of explanation which runs all the way
back to the big bang itself.
Just as the stellar nature of the Milky Way, the mountains of the moon and the
Jovian satellites were discovered as soon as Galileo turned a telescope upon them, so
cells were discovered as soon as the microscope was invented and turned upon living
things. Robert Hooke's amazingly detailed illustrations in Micrographia (published in 1665¹) revealed, among many other seemingly greater wonders, the structure of a piece of cork, which Hooke likened to the honeycomb of a beehive, borrowing the term "cell" to describe it. It was not until very much later that the idea arose that the cells within organisms were themselves biological units which collectively composed more complex entities. In 1839 Theodor Schwann and Matthias Schleiden
advanced the cell theory (under that name), although the idea that all living things
were composed of cells, whose individual capacities and organization gave rise to
the functions of the organs or properties of the organisms which they constituted,
was already quite widely held. The cell theory proposed that most organisms were,
in essence, vast colonies of more or less specialized microscopic animalcules.
The codification of the cell theory followed hard upon the heels of another decisive
event in biochemistry: the synthesis of organic compounds from purely physical
ingredients. In 1828 Friedrich Wöhler had produced urea in his laboratory from
inorganic materials. At the time it was widely held that organic compounds were
not reducible to the inorganic because living matter was supposed to contain or
exemplify a distinctive vital principle. Vitalists of the day responded, not very convincingly, that one of the essential ingredients in Wöhler's process (cyanate) had been derived from blood products and thus he had merely transferred the vital
spark from one compound to another. Other vitalists explored the remaining logical
option of denying that urea was really an organic compound, being just a breakdown
product of biological metabolism. But of course Wöhler was simply the vanguard of a host of synthesizers. One of his own students, Adolph Kolbe, managed to produce acetic acid from indisputably non-organic precursors by 1845, and during the 1850s Pierre Berthelot similarly synthesized a raft of other compounds (including methane and alcohol).
Of course, it is logically possible for things to have gone differently. Along the
lines of our thought experiments we can imagine a world in which it turned out to be
impossible ever to synthesize organic chemicals from purely inorganic precursors.
Scientists in such a world could invoke the word "vitalism" to label their problem,
but their explanatory project would have ground to a halt at the organic/inorganic
divide. A curious footnote to the history of emergence illustrates this fantasy; for a
short while it looked like it was coming true.
In 1824 Wöhler, working in the famous laboratory of J. J. Berzelius, had made the
astonishing discovery that cyanic acid and fulminic acid had quite distinct properties.
What was astonishing was that, as Wöhler meticulously worked out, the two acids
have identical chemical constituents. Chemical orthodoxy at the time demanded that
distinct properties of complex substances required a difference in their elemental
composition. Berzelius himself introduced the term "isomer" for compounds with
identical chemical constitutions but distinct properties. It was then hypothesized
that it was the structure of the compounds that differed, but the whole concept
of chemical structure was at the time extremely dubious since very few chemists
could bring themselves to believe in the reality of atoms. Often atomism meant little
more than a belief in uncompounded chemical elements, with no commitment to the
idea that such were grounded in discrete, quasi point-like units of matter that went
together like Tinkertoys and could exemplify internal geometric relationships (see
Brock 1993, Pullman 1998). Various structure theories were developed that could
handle isomerism (e.g. Berzelius's copulae theory or Benjamin Brodie's operational calculus).
But then came the discovery of the photo-isomers, whose crystals were optically
active. That is, polarized light shone through them has its plane of polarization rotated
either right or left. Otherwise, such crystals did not appear to differ physically or
chemically. This presented chemists of the day with the rather fundamental problem
that "the number of isomers exceeds the number of possible structures" (J. Wislicenus, as quoted in Brock 1993, p. 260). It is, once again, possible to imagine that no physical
explanation could have been found for this; that these organic compounds simply,
and as a matter of brute fact, differed in a way unrecognizable to physical science.
Perhaps it would have been said that they differed at the level of their vital principle
rather than in mundane physical or chemical properties.
Instead, in the real world, we have an explanatory triumph that deeply supported
the link between the organic and inorganic realms and the emergence of the former
from the latter. The first stereoisomer (as they would come to be called) to be noticed
was tartaric acid, which in 1847 Louis Pasteur tackled as a doctoral student. He grew
large crystals of the acid and noticed, by naked eye no less, that there were two
asymmetric forms, each the mirror image of the other.
Pasteur's discovery led Jacobus van't Hoff to propose that chemical stereoiso-
mers could be explained by assuming that each isomer possessed a distinct spatial
orientation of its constituents. This required that one adopt a quite realist attitude to
atoms (or at least the chemical constituents of these compounds) and especially
the bonds between them which had now to take up real angular positions rela-
tive to each other, thus permitting the mirror image asymmetry which provided the
extra degree of freedom to explain optical activity. Although brilliantly successful and fruitful, van't Hoff's proposal was not greeted with universal enthusiasm. Kolbe himself launched a vituperative attack against van't Hoff (for which Kolbe has unfortunately been branded as a stodgy and narrow-minded reactionary ever since), impugning van't Hoff's credentials as a scientist (for van't Hoff worked at the time in a veterinary college) and lampooning the metaphysical credulity of those chemists who sought refuge in supernatural explanations (see Brock 1993, pp. 262–263).
In fact, the discovery of chemical chirality was crucial in the advance of chemistry
(and atomism). Chirality has chemical consequences far beyond photoisomerism, as
shown by the examples of thalidomide (only one of the stereoisomers of the drug has
the infamously disastrous side effects) and penicillin (bacteria are susceptible to its
attack through their use of one of the isomers of alanine which our own cells lack).
Something of our fantasy world of blocked chemical explanation persists in cur-
rent creationist writings. It is a curious fact that virtually all life on Earth depends
upon amino acids with left-handed chirality (the use of right-handed alanine in bacteria being a lucky exception). The mechanism of selection is unknown and mysterious,
since production of amino acids in the laboratory generally yields so-called racemic
mixtures in which left and right handed forms are equally abundant. Even extrater-
restrial amino acids found in meteorites exhibit a preference for the left-handed form (see Cronin and Pizzarello 1997).² But if all physical processes of amino acid pro-
duction yield racemic mixtures, how can we explain the overwhelmingly left-handed amino acids within organisms? Creationists can, and do, pounce on this, declaring
that only a supernatural process could select out one isomeric form for use in biology.
Given the history of explanatory success in grappling with problems like this, such a declaration seems decidedly premature. The problem of homochirality, along with
a host of others, remains to be solved as we seek a scientific account of the origin
of life. I would be disinclined to bet that this is the place, finally, where scientific
explanation will fail.
Still, at a deeper level, the origin of chemical chirality remains something of
a mystery. The quantum physics that ought to lie behind chemistry, at first blush,
appears to obey the law of conservation of parity. So it seems reasonable to infer that
there should be no difference between the properties of the enantiomers (i.e. the mirror
image forms) of chiral molecules and no reason for nature to prefer one over the other
(once an environment with pronounced chirality forms, there is room, as we have seen
above, for this to have very powerful chemical effects). However, as was discovered
in 1956, parity is not conserved at the fundamental level in interactions involving
the weak force (see Pais 1986, pp. 530 ff.) and the electroweak theory (mentioned in
Chap. 2 above) codifies, if not yet explains, this breakdown. Small but measurable
differences in the energy levels of chiral molecules stem from weak interaction
effects. Some scientists (e.g. Abdus Salam, see Salam 1991) have proposed that
this fundamental asymmetry accounts for why just one form of organic molecule
is favored throughout biology. There is little evidence for this hypothesis at present
but it would help to explain the apparent universal preference nature has for certain
forms of chiral molecules and would, of course, reduce the number of seemingly
brute facts in the world while increasing the scope of emergence.
Whatever its ultimate origins, chirality presents an excellent example of emer-
gence, since obviously the atoms which ultimately make up, say, amino acids possess no spatial asymmetry, unlike the chemical compounds they constitute. The purely
geometric differences in stereoisomerism thus nicely illustrate some of the more
abstract sources of emergence.
Now, to return to the main story. The chemists' newly forged link between the
organic and inorganic aspects of nature provided some highly suggestive evidence
that cells themselves were made up of entirely normal matter fully obeying the
laws of inorganic chemistry. The major substances found in cells (lipids, polysaccharides, proteins, and nucleic acids) were discovered by 1869 (nucleic acids last)
and were found to be normal chemical compounds, although most details of their
structures and functions remained for twentieth century investigators to elucidate
(most famously of course the helical arrangement of nucleotides in DNA and its role
in inheritance, another interesting example of emergence via spatial orientation). At
roughly the same time, Darwin's theory of evolution appeared (Darwin 1859), imme-
diately reforming much of biology, and providing some inkling of how life might
have progressed from simple unicellular forms to multicellular plants and animals.

3.3 The Neuron Doctrine

We begin to approach the domain of cognition and consciousness when we apply the
cell theory to the brain. This specific application is called the neuron doctrine and
it is one of the linchpins of modern neuroscience (see Shepherd 1991 for a detailed
history). The first discovery of nerve cells, the large Purkinje cells of the cerebel-
lum, was made in 1837 by the eponymous investigator, actually prior to Schwann's enunciation of the cell theory. Thus, although it was clear that the brain contained cell-like structure, it was not at all clear that the cell theory really, or completely,
applied. The multiple tendrils of each nerve cell appeared to connect directly with
each other within the nervous system. Instead of a system of intercommunicating but
individual cells, it was thought that "the whole nervous system represents a protoplasmic continuum – a veritable rete mirabile" (Lewellys Barker, as quoted in Shepherd 1991, p. 65). This unitary network theory stood in opposition to the cell theory. It
was the long labor of many through the nineteenth century that established that nerve
cells were indeed individual cells and that these cells were the core functional unit
of the brain.
The two points were quite distinct. The famous Norwegian Arctic explorer Fridtjof Nansen was a brain scientist before he was a Greenland trekker; in his doctoral thesis of 1887
he brilliantly deployed the novel method of staining cells invented by Camillo Golgi
to defend the view that nerve cells were individual entities rather than concentrations
in a protoplasmic continuum. But having established that, he then went off the rails
with the additional claim that the nerve cells served a merely nutritive function in
support of the activity in the nerve fibers where the real action was. Nansen was not
a central figure in nineteenth century neuroscience, but in his demotion of the nerve
cells he was expressing a common sentiment; indeed the original network theory of
the nervous system remained dominant for some time.
The neuron doctrine was finalized and essentially demonstrated to be true by one
of the founders of modern neuroscience, Ramón y Cajal, who perfected the Golgi
staining method and devoted tremendous energy to the microscopic examination of
brain cells. Golgi had discovered in 1873 that nervous tissue, first hardened with potassium dichromate and then soaked in silver nitrate, was left with its individual nerve cells darkly stained (Golgi called this the reazione nera, or "black reaction"). For reasons no one
understands, this process selectively stains only a few nerve cells, making them stand
out starkly from the background tangle of cells and their processes.
At his own expense, Cajal set up an academic journal in 1888 to disseminate
his results. He wrote all the articles and sent copies, needless to say unsolicited, to
eminent scientists across Europe. To his disappointment (how naive is that?), Cajal's work was not immediately recognized. It was at his pilgrimage to a Berlin conference in 1889 that he made his breakthrough. The sheer quality of his slides, which delegates
could view through the several microscopes he set up at the conference, made it clear
that Cajal had taken a great leap forward in neuroscience. With surprising rapidity, a
number of eminences, such as Albert Kölliker, came around to the idea that nerve
cells were separate entities and that they were the core functional units in the nervous
system.
How neurons functioned remained controversial for some time, although it had
been known since Luigi Galvani's work at the end of the eighteenth century that the
long tendrils that sprang from the nerve cells (eventually categorized as axons and dendrites, for output and input signals respectively) could be stimulated by electricity. Galvani took the natural step of identifying the nerve impulse with an electrical
signal, but it was not at all clear that this was correct. One of the foremost physio-
logical researchers of the time, Johannes Müller (he was in fact the first professor of physiology as an independent field), remained highly skeptical of the electrical hypothesis (see Müller 2003; Müller's attitude towards Galvani's views is discussed in Nicholas Wade's introduction).
Some followers of Naturphilosophie, an influential philosophical movement
which maintained a number of mystical and vitalist tenets about living things,
regarded the nerve impulse as radically non-physical and possessed of an infinite
velocity. Quite dubious theoretical calculations of the nerve impulse's speed yielded wildly
varying estimates, some fixing it at more than ten million miles per second, far beyond
the speed of light. Müller himself made various attempts to measure the effects of a
neural electrical current, but without success (his instruments were simply not sen-
sitive enough). It took a very long time to work out the processes by which electrical
signals propagate along axon or dendrite, which do not work in anything like the manner
of simple conducting cables (for the details of this exceedingly complex but by now
quite well understood process, see Nicholls et al. 2001). Nonetheless, the basic fact
of the electrical nature of neural signaling was soon established. One of the famous
experiments in this field was Hermann Helmholtz's demonstration in 1850 of the
finite, and indeed very modest, speed of nerve impulses. By means of a conceptu-
ally simple procedure which was essentially no more than tickling a frog's leg and
waiting for the associated movement, Helmholtz measured the velocity of the nerve
impulse to be a mere thirty meters per second, hardly a supernatural accomplish-
ment (Helmholtz had dissected the frog to isolate a nerve-muscle preparation and
deployed precision instrumentation to mark fine temporal differences).
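
The logic of the measurement is worth making explicit. As usually reconstructed, Helmholtz stimulated the nerve at two points lying at different distances from the muscle and divided the difference in distance by the difference in delay, so that all the shared lags cancel. The sketch below uses purely illustrative numbers, not Helmholtz's actual data.

```python
# A sketch of the two-point logic usually attributed to Helmholtz's
# measurement. The distances and delays are invented for illustration;
# they are not Helmholtz's actual data.

distance_far = 0.060    # m: far stimulation point to muscle
distance_near = 0.030   # m: near stimulation point to muscle
delay_far = 0.0030      # s: stimulus-to-twitch delay, far point
delay_near = 0.0020     # s: stimulus-to-twitch delay, near point

# Shared delays (muscle contraction, recording lag) cancel in the difference.
velocity = (distance_far - distance_near) / (delay_far - delay_near)
print(f"nerve impulse velocity ≈ {velocity:.0f} m/s")   # 30 m/s here
```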
In the spirit of our thought experiments, here is yet another place where nature
could have thrown up a roadblock and placed us in a mysterious world unamenable
to scientific explanation. But it didn't. It is conceivable that the nerve impulse could have turned out to be unmeasurable or possessed of a seemingly infinite velocity and
that it never revealed itself as a kind of electrical signal at all. That would have
been powerful evidence for the mystical views of vitalists but would have left an
unbridgeable gap between brain processes and the rest of nature.
Instead, at every step nature and ingenious experimenters came into harmony
through nothing more than hard thinking and hard work. From our discussion, we
can isolate three crucial points where the nervous system, and especially the brain,
is anchored to the rest of the natural world. First, organic material, the makeup of
cells throughout living nature, is composed of exactly the same chemical and atomic
constituents as the inorganic world. Second, the nervous system is structured like the
rest of organic nature, composed of individual cells which are the core functional
units. Third, the nerve cells form a network in which communication is effected
by electrical signals. This last point has been elaborated in stunning detail over the
last 150 years (see Nicholls et al. 2001), culminating in the detailed knowledge we
now possess about synaptic transmission and the role of the myriad of different
neurotransmitters therein, including more than a glimmering of their psychological
import. Though the pioneers of the neuron doctrine could not have guessed at the
complex processes that underlie the nerve impulse, their basic idea has been entirely
vindicated. It would take us far afield to survey the elaboration of these anchor points
over the last century and a half. Suffice it to say here that the initial general hypothesis
that physiology would prove explicable in standard scientific terms, and the more specific neuron doctrine, have been and continue to be immensely fruitful.
The discovery that the neural network is constituted out of ordinary physical
material leads to another sort of advance critical to neuroscience's development, that
of effectively monitoring and measuring the brains activity. If the neurons are the
core functional units and they communicate via electrical signals then measurement
of external electrical and magnetic properties will reveal neural activity (albeit of a
collective nature). This is the basis of the electroencephalogram, or EEG, and the
magnetoencephalogram (MEG). It is also possible to measure the electrical proper-
ties of individual neurons directly via the insertion of a probe electrode into or very
near to the neuron. Measurement and manipulation are often two sides of a single
coin. In this case, it is also possible to influence the neuron by reversing the process
with an electrically charged electrode. The famous work of Wilder Penfield (see Pen-
field 1958) deployed this method and showed, serendipitously since he was actually
looking for focal points linked to epileptic seizures, how the electrical stimulation
of neurons could lead to the production of various sorts of conscious experiences. In
1954 an electrode placed in the septal area of a rat's brain revealed the now famous "pleasure center" (see Bozarth 1994 for a review). Robert Heath performed similar (ethically unrepeatable) experiments on human beings in the 1960s (see Heath 1963, Heath 1964) with roughly similar results. The idea of a pleasure center is probably overstated; pleasure is better seen as dependent on a complex and quite extensive
reward system within the brain which is modulated by a variety of neurotransmitters
and hormones.³ The point here is simply that this modulation ultimately works by changing individual neurons' propensities to produce their characteristic electrical
signal.

3.4 fMRI Anchor Points

Recently, sophisticated neural measurement technologies have transformed neuroscience, namely positron emission tomography and nuclear magnetic resonance (though politeness nowadays requires that we drop the black-magic word "nuclear").
These techniques, especially the latter, have opened a window on the working brain
and, in many ways, the mind as well.
Magnetic resonance imaging (MRI) exploits some intricate details of the first
anchor point mentioned above. The basic MRI signal is the radio emission of protons
whose spin axis is precessing at a characteristic frequency in a magnetic field (the
target protons are usually those in the hydrogen nuclei of water since it is so abundant
in living tissue). An MRI machine is, to a first approximation, just a huge magnet
designed to supply a gigantic and fixed magnetic field over a large volume of space with a strength of about 2–4 Tesla (which is 20,000–80,000 times stronger than the Earth's own magnetic field; this sounds impressive, but an ordinary fridge magnet might well attain 0.2 Tesla, albeit over a minuscule volume). The main magnet is what accounts for the size and a lot of the cost of an MRI machine.⁴ Because they possess
electric charge and the quantum mechanical property of spin, the protons in the MRI
subject act like little magnets themselves and will tend to align with the main magnet's
field. The alignment is, as a quantum mechanical phenomenon, only probabilistic
and the spin axes form a kind of ghostly cone around the direction of the main field. In
a process somewhat analogous to the way a top will wobble, these spins will tend to
precess around the field's direction at a frequency (called the Larmor frequency) dictated primarily by the strength of the field. These precessing spins generate a
constantly changing magnetic field and hence will produce an electromagnetic signal.
However, in the initial state, at equilibrium in the main magnet's field, the precessing
spins will be out of phase and effectively cancel each other out. The trick of the MRI
scanner is to get the target protons into a coherent state so that a rotating magnetic
field will be generated.
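
The Larmor relation itself is a one-line formula: the precession frequency is the field strength times the (measured) gyromagnetic ratio of the proton. The sketch below uses the standard value of that constant, which I am supplying; it is not quoted in the text.

```python
# A minimal sketch of the Larmor relation, f = (gamma / 2π) · B. The
# proton gyromagnetic ratio is a standard measured constant supplied by
# me; it is not quoted in the text above.

GAMMA_OVER_2PI = 42.577          # MHz per Tesla, for protons

for B in (1.5, 3.0, 4.0):        # field strengths typical of MRI magnets
    print(f"B = {B:.1f} T  ->  Larmor frequency ≈ {GAMMA_OVER_2PI * B:.1f} MHz")
# ~64 MHz at 1.5 T and ~128 MHz at 3 T: ordinary radio frequencies, which
# is why the resonance signal takes the form of a radio emission.
```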
Another complication is worth noting. Quantum spins in any direction can take
on two values, call them "up" and "down". Both up and down spins can be in align-
ment with the main magnetic field, either parallel or antiparallel with the field. We
might then expect that all effects will be washed out by the parallel and antiparallel
spins canceling each other. However, because there is a small energy advantage to
the parallel alignment there will be a small asymmetry or excess in the number of
protons which align parallel to the field compared to those aligning antiparallel (the
asymmetry, whose value depends on the strength of the main magnetic field, is just
a few parts per million in standard MRI machines). The MRI signal is dependent
on the unmatched protons which in the end do produce a net magnetic field in the
subject aligned parallel to the main magnetic field. It is bizarre to think that when
you go into the MRI scanner your body, or parts thereof, is literally turned into a
magnet.
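
Where the "few parts per million" figure comes from can be sketched with the Boltzmann distribution: the excess of parallel over antiparallel spins is set by the tiny spin-flip energy compared with thermal energy. The constants below are standard values that I am supplying; the text does not quote them.

```python
# A hedged sketch of the parallel/antiparallel spin excess via the
# Boltzmann distribution. All constants are standard values I am
# supplying; none are quoted in the text.

import math

h = 6.626e-34         # Planck constant, J·s
k = 1.381e-23         # Boltzmann constant, J/K
f_larmor = 127.7e6    # proton Larmor frequency at ~3 Tesla, Hz
T_body = 310.0        # body temperature, K

delta_E = h * f_larmor                         # spin-flip energy gap
excess = math.tanh(delta_E / (2 * k * T_body))
print(f"fractional spin excess ≈ {excess:.1e}")   # ~1e-5, i.e. ~10 ppm
# Only this tiny unmatched population survives the cancellation and
# supplies the net magnetization the scanner detects.
```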
The first step in acquiring an MRI signal is to focus radio energy at the exactly
appropriate, or resonant, frequency onto the subject. This energy is absorbed by
the target nuclei, causing a good number of them to undergo a spin flip transition
into the higher energy state and come into phase with each other. This effect, called
nuclear magnetic induction, was discovered in 1946 (the original article is Bloch
1946). There are various ways to regard what is happening, but we can conceive
of it as the net magnetic field tipping out of alignment and precessing around the
direction of the main magnetic field. This in turn generates an electromagnetic signal
identical to the induction signal. Now, when the induction pulse is turned off there is a short period of time (the relaxation time⁵) during which the generated signal continues on its own and can be detected, until the proton spins fall out of phase
and the system returns to equilibrium with the net magnetization once again aligned
with the main magnetic field. The complexity of the whole system is dizzying (and I
am only providing the merest hint of a sketch of how this all works), which matters
because it emphasizes what I called anchor points: the way that our fundamental physical understanding meshes with processes which reach all the way up to the
psychological.
Extracting information from this signal is just as complex as its generation. The
trick is to exploit the fact that the induced signal will subtly vary because the magnetic
field is not homogeneous and is differentially affected by the nature of the tissue at
any particular location. The signal would be undecipherable, however, unless it was restricted to or regimented within a spatial region. To achieve spatial localization,
the operators of the machines also carefully modulate the magnetic field in which
the subject is immersed. Thus, MRI machines have a set of secondary or gradient
electromagnets (of much less intensity than the main magnet) which can be manipu-
lated to alter the overall magnetic field with high spatial and temporal precision. The
result is that the received signal is composed of a large number of slightly different
components which reflect, in complex ways, the nature of the substance emitting
them and their location. These can be teased out by a mathematical analysis (Fourier
analysis) at which modern computers are very adept and which can then be used to
produce an image.
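
The Fourier step at the end of this chain can be shown in miniature. The toy below is my own construction, not the text's: it treats the measured signal as the two-dimensional Fourier transform of an image (the so-called k-space data) and recovers the image by inverting the transform. Real scanners face noise, field inhomogeneities and partial sampling, but this inversion is the core mathematical move.

```python
# A toy sketch (my construction, not the text's) of the Fourier step:
# treat the acquired data as the 2D Fourier transform of the image
# ("k-space") and recover the image by inverse transform.

import numpy as np

image = np.zeros((64, 64))       # pretend "anatomy": a bright square
image[24:40, 24:40] = 1.0

k_space = np.fft.fft2(image)     # what the scanner effectively records
reconstructed = np.fft.ifft2(k_space).real   # what the computer recovers

print("max reconstruction error:", np.abs(reconstructed - image).max())
# ~1e-15: essentially exact recovery in this idealized, noise-free case.
```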
The standard MRI we have been discussing is medically most useful for imaging
static structures (see for example Chap. 4, Fig. 4.2). MRI machines used in this way
are just a kind of jumped-up X-ray device. Starting in the early 1990s a new kind of imaging was developed that could track real-time metabolic changes;⁶ this was called
functional MRI (fMRI) and has been intensely applied to brain imaging over the last
decade. Metabolic processes within cells are sustained by a host of inputs, of which
perhaps the most crucial is oxygen. Neurons are prodigious energy consumers and
require a rapid and large supply of oxygenated blood (something approaching one
liter of blood is pumped through the brain every minute). The brain is also capable
of directing oxygenated blood to those regions which are most active (a nineteenth-century finding; see the classic article Roy and Sherrington 1890), and this haemo-
dynamic response is what can be measured via fMRI. Just as in ordinary MRI, the
extraction of an image depends upon variations in the magnetic field. It turns out
that oxygenated blood and deoxygenated blood have quite distinct magnetic proper-
ties (oxygenated and deoxygenated haemoglobin are diamagnetic and paramagnetic
respectively) which results in differential distortions in the local magnetic field as
blood is moved into the brain and its oxygen is taken up in cell metabolism. This
in turn causes changes in the relaxation times of brain tissue depending on whether
or not it is receiving extra oxygenated blood; that is, depending on how active the neurons within it are.⁷ While these differences are very small, they are detectable.
The primary use of fMRI, as well as all other brain scanning techniques, is of
course for practical medical diagnosis and research. That is not what we are interested
in here but luckily there are a number of researchers with access to scanners looking
at the neural substrates of mental states and processes. One extra difficulty that
intrudes here is that, by and large, the brain is a seething hive of activity in which it
is very difficult to trace neural activation that is related to specific cognitive states.
Put another way, the brain has a great many jobs to do and it is always on the job.
Every task involves many common functions, and the distinctive activity associated
with a given task is usually very small relative to the overall activity of the brain.
Thus most cognitive imaging involves taking averages over many trials and many
subjects, and also taking such averages during a control or rest state. The difference between the two averages is then taken to be an indication, at best, of the brain activity
distinctively associated with the cognitive state being investigated. We are a long way
away from real-time, single-subject mind-reading machines, although we are already
at the stage where a serious neuroscience article can begin with these words: "[a] challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity" (Kay et al. 2008). MRI machines are constantly improving in
sensitivity and it would be very rash to deny the possibility of mind-reading via fMRI
and successor technology, or to suggest that it won't happen in, say, the next fifty years.⁸
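
The averaging logic described above can be captured in a few lines. The sketch below uses invented numbers (real BOLD effects and noise levels vary) to show why many trials are needed: a task-related signal change of a fraction of a percent sits beneath noise an order of magnitude larger, and only the difference of means digs it out.

```python
# A minimal sketch of the subtraction logic: average many noisy "task"
# and "rest" trials and take the difference. All numbers are invented
# for illustration.

import random

def trial(active):
    # one voxel's signal: baseline + noise, plus a small task effect
    return 100.0 + random.gauss(0.0, 5.0) + (0.5 if active else 0.0)

n = 2000  # many trials: the effect (~0.5%) is far smaller than the noise
task_mean = sum(trial(True) for _ in range(n)) / n
rest_mean = sum(trial(False) for _ in range(n)) / n

print(f"task - rest ≈ {task_mean - rest_mean:.2f}")   # hovers near 0.5
# With only a handful of trials the noise (sd = 5) would swamp the effect.
```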
This is another place where nature could have refused to cooperate, but did not. It
is possible to imagine that all the experimental design and mathematical manipula-
tions applied to the fMRI signal simply could not generate any coherent data linking
haemodynamic activity to neural processes, still less to cognitive states. Such a pos-
sibility is far less radical than our other thought experiments and it is thus significant
that to the contrary, nature is, despite the primitive level of current instrumentation,
already providing us with robust results from MRI machines. We'll start to examine
some of these in a moment, but first a brief digression on the anchor points.
As we have seen, there are multiple places where the activity of the brain is
anchored to more basic features of the world. It is natural to ask about the relations
between the anchor points. These relations themselves ought to weave a web of
explanatory interconnections. There are also more practical scientific issues about
calibration of new measurement devices and about what it is exactly that is being
measured. One place where the philosophical and scientific considerations come
together is the relation between the haemodynamic response measured in fMRI and
the activity of neurons.
The neuron doctrine assigns to the individual neuron and its output signals the
central functional role in the brain. It is assumed that the haemodynamic response
matches increased neural activity. This crucial hypothesis is what links fMRI to the
basic functional units in the brain. It is only quite recently that this linkage was
tested by Logothetis et al. (2001), whose difficult experiment managed to show unequivocally that "a spatially localized increase in the BOLD (i.e. Blood Oxygen Level Dependent) contrast directly and monotonically reflects an increase in neural activity" (p. 154).9
Of course, the ultimate object of the exercise is to discover, in some detail befitting the ongoing exponential explosion in our knowledge, something about the
neural substrate of mental states, especially states of consciousness, and to this we
now turn, beginning with a fanciful tale.
Chapter 4
Consciousness in the Brain

4.1 From Phrenology to Brain Imaging

Jill is worried about the sincerity of Jack's affection. Young, beautiful and the founder of a NASDAQ-leading, multibillion-dollar biotechnology firm, she has developed a natural fear of predatory suitors. Too many times the semblance of true love has proven temporary sweet illusion. Enough is enough. But how can Jack's tempting and oft-professed love be tested? In the old days, a persistent knight could be set on
an impossible quest that only a true-hearted suitor, guided by the indisputable power
of love plus a considerable amount of luck, would have the ghost of a chance of
completing.
But this sits ill with the contemporary temper (and in any case dragons have
become distressingly rare). Jill's thoroughly modern instinct is to seek a technological quick fix: a mechanical gauge of Jack's affection. How about functional magnetic resonance imaging? Thus Jill requires that Jack's quest shall be to lie very quietly
within an fMRI chamber, patiently examining images of an assortment of lovely
women, among which are occasionally seeded those of Jill herself as well as some of
Jack's previous loves. Short of acute claustrophobia, this is not a quest to shrivel the
heart of a good knight, loyal and true. But dissemblers should fear it, for the fMRI
machine cannot be fooled by sweet words and heavy-lidded gazes.
And so Jack is slid into the long dark tube of the MRI machine, like a submarine's
torpedo readied for firing. Operators, and Jill, anxiously watch a bank of monitors
as a garishly coloured image of Jack's brain appears before them. Sure enough,
whenever a picture of Jill is presented to Jack there is a strong and distinctive response
from deep within his brain. A certain pattern of activation spreads over particular
small regions; a marked deactivation occurs in another region. After enough runs to
guarantee statistical significance, the sages gravely confer and agree: this is a sure
sign of true love. Jill is ecstatic. Jack may be beginning to have some doubts, despite
his now publicly certified feelings for Jill.
Let's look more closely at this, for it contains or leads to almost everything we need to think about when we ponder the problem of linking the fundamental functional units of the brain (the neurons) with mental states and, ultimately, consciousness.

Fig. 4.1 Basic brain anatomy
First, I'm not (quite) making this up. fMRI is a real step towards mind reading, which
at least in a crude and, so to speak, collective form may turn out to be easier than
anyone would have imagined. Recent work (Bartels and Zeki 2000; Bartels and Zeki
2004) foreshadows a scientific version of my modern fairy tale. Indeed it is the case
that people in love show an easily identifiable pattern of brain activity in certain
regions of the brain. The medial insula, parts of the anterior cingulate (more on these
brain regions below) and zones of the striatum (a part of the brain in which resides the so-called "pleasure centre") are all characteristically activated by the sight of the
loved one, while a part of the prefrontal cortex tends to deactivate. This pattern of
activation and deactivation is not idiosyncratic or confined to any one individual but
was observed over a number of subjects very consistently.
It is no accident that these are the parts of the brain implicated in our love-test. To
set these particular regions into focus, we should begin with an overview of the brain.
We can divide the brain into three basic components: the cerebrum, cerebellum and
brainstem. The cerebral cortices make up the cerebrum, the largest and evolutionarily
newest section of the brain which wraps fully around the brainstem and dwarfs the
cerebellum, which hangs below the cerebrum. The external appearance of the brain
reveals four of the cerebral lobes (named after the bones of the skull under which
they lie) and the cerebellum, along with a portion of the brainstem (see Fig. 4.1).1
Though mightily enhanced by modern imaging evidence, the idea that mental
functions might be closely associated with delimited or localized brain regions is
of course not new. Its first at least quasi-scientific appearance is with the phrenology of Franz Gall and Johann Spurzheim in the early nineteenth century. Although phrenology is nowadays a kind of paradigm case of pseudo-scientific quackery, Gall
himself was no quack and was perhaps only guilty of an overly optimistic hope that
the physiological correlates of mental function would be easy to find. He is hardly the
only one in science guilty of such optimism, which may be partly excused as simply
a methodologically virtuous love of simplicity in one's hypotheses. Gall, however, may have violated Einstein's dictum: theories should be as simple as possible, but
no simpler. But, after all, it is not absolutely inconceivable that variations of the
skull could correlate with mental functioning, and what else did Gall have to go on?
Scientists, if not philosophers, must match their aspirations to available technology.
And Gall wasn't completely off track. Researchers nowadays use cranial casts of our
hominid ancestors to look for the traces of brain structure left in fossilized skulls,
and have discovered, for example, some evidence that Homo erectus may have had
language, or proto-linguistic, abilities of some kind by finding a marked asymmetry
between left and right brain in the region of Broca's area (of which more below)
in an ancient, though not precisely dated, skull found in Indonesia (see Broadfield
et al. 2001). Phrenologists also deployed what seems to modern ears a curiously
quaint scheme of mental categories, including "amativeness", "conscientiousness" and "mirthfulness". But then, the current idea that there is a part of the brain devoted or
specialized for recognizing faces may come to seem no less preposterous, whether
or not some kind of modularity of cognitive function is maintained (maybe this is
already happening; see Gauthier et al. 2000).
The concept of functional localization admits of a more abstract definition than
the obvious one of correlating a brain function with activity in a particular part of
the brain. It seems reasonable to allow that an anatomically scattered set of neural
systems could form a functional unity devoted to a particular cognitive task. It
might be better to think in terms of a multi-dimensional categorization involving
more or less delimited regions of the brain and the degree to which these regions are
specifically devoted to a particular task. Maximum functional localization would then
be a matter of maximal specificity conjoined with minimal dispersion in the brain. It
is far too early to say how cognitive functions will map onto such a categorization.
And with respect to knowledge of the basic neuro-cognitive functions which underlie
the mind, we really aren't all that much better off than the phrenologists.
Another way to put this is to distinguish between the relatively low-level, brain
structural hypothesis of localization and the relatively high-level, cognitive functional
hypothesis of modularity (see the classic Fodor 1983 for the core ideas of modularity
theory2 ). Modules are functionally discrete but they do not have to be localized.
Furthermore, our mental functions might be the product of the interaction of many
modules devoted to sub-mental activitiesperhaps that is the most likely scenario.
A healthy skepticism about localization of mental function is thus warranted.
But of course that should not and will not stop efforts to correlate mental states
or processes with neural activity, though it favours caution in the interpretation of
such correlations. In general, there are no specific fine-grained functional correlates
to the gross anatomical divisions illustrated in Fig. 4.1. For example, the occipital
lobe is devoted to vision, but if one regards the recognition of objects by sight as
part of our visual abilities then vision extends far beyond the occipital region. The
brain is extremely densely cross connected throughout, and signals from any part of
the brain can be sent to any other part. Many of the bizarre neurological syndromes,
which traditionally offered the only window into brain-mind linkages, are as much
the result of loss of coordination between distinct parts of the brain as of damage to particular functional regions.

Fig. 4.2 Brain bisection (MRI scan)

4.2 Communication Breakdowns

The most spectacular example of these communication breakdowns is the result of severing the massive, high-bandwidth connection between the two cerebral hemi-
spheres called the corpus callosum, shown in cross section in Fig. 4.2.
There is an extremely rare congenital syndrome, agenesis of the corpus callosum
(ACC), in which the organ never develops at all. Although usually involving severe
behavioural, cognitive and developmental difficulties, ACC is occasionally almost
entirely asymptomatic. It seems likely that in the absence of the corpus callosum
pre-existing subcortical pathways, whose functions are replaced with callosal con-
nections as the brain matures, remain functional in victims of ACC (see Lassonde
et al. 1986). The separation of the hemispheres is, however, more often the result of
deliberate surgical intervention. Brain bisection, or commissurotomy, is done as a last resort to relieve dire and otherwise untreatable forms of epilepsy. A successful
procedure prevents a seizure from spreading from one hemisphere to the other. But
such operations produce some incidental symptoms that forcefully suggest the cre-
ation, or conceivably even the pre-existence, of two quite distinct consciousnesses
residing in one brain (for an account of the early work on this see Gazzaniga 1970).
A basic consequence of brain bisection, plus the curious fact that sensory and
motor connections are crosswired,3 is that speech mechanisms, which are usually located on the left side of the brain, are isolated from any visual information that
happens to be restricted to the brains right side. For example, if bisection patients
are briefly presented with a short and meaningful phrase like "key ring", where the little space between the "y" and the "r" is set as the fixation point at the centre of their visual field, they will be prepared to say that they saw only one word: "ring". Despite the direct, and fully conscious, report that they saw "ring", if asked to select
an object corresponding to the presented word, their left hand (under the control
of the right side of the brain) will select a key rather than either a ring or a key
ring. Nor is this behavioural divisiveness restricted to and only revealed in special
laboratory conditions. There are many reports of commissurotomy patients suffering
from frequent and distressing intermanual conflict in ordinary life, sometimes for an
extended period of time after the operation (Joseph Bogen 1998 writes that "[a]lmost all complete commissurotomy patients manifest some degree of intermanual conflict during the early postoperative period"), occasionally of a quite violent nature.
There are certainly two centres of behaviour control here; are there similarly
two centres of consciousness? I don't think anybody knows. Philosophers have of
course weighed in on the issue. Thomas Nagel provided an early, fascinating and
inconclusive discussion (see Nagel 1971). Charles Marks and Michael Tye both
defend the single consciousness view of bisection (see Marks 1980; Tye 2003). In
contrast, Roland Puccetti argued for the dual consciousness view, along with the
yet more audacious claim that even normal brains have two centres of consciousness
within them, in Puccetti (1973). A somewhat elusive and obscure alternative in which
split brain phenomena illustrate partial unity of consciousness (roughly defined in
terms of states A, B and C being such that A and B are co-conscious with C but
not with each other) has been explored by Susan Hurley (see Hurley 2003). A good
overview of the vast general topic of the unity of consciousness can be found in
Brook and Raymont (2010).
Be that as it may, the disruption in behavior which frequently at least appears to
involve dual consciousnesses is occasioned by a purely physical intervention which
can be straightforwardly investigated. Although nature has not been kind to those
who have been bisected, she has been once again surprisingly friendly to those who
seek to explain what is going on in the brain.

4.3 Blindsight

Another interesting oddity of consciousness that stems from the intricacies of the
brain's internal communication system is blindsight (for an authoritative overview,
as well as integration with other deficits, see Weiskrantz 1997), an affliction in which
people who sincerely declare themselves to be utterly blind in part of their visual
field (called the "blind field") and who generally act fully in accordance with these
assertions nonetheless are able to obtain and use visual information presented in the
blind field. Blindsight is the result of damage to a specific part of the occipital (or
visual) cortex, a part of the brain devoted to the processing of neural signals from the
eyes, in a region called V1 (see Fig. 4.3) which is the first area of the visual cortex
to receive input from the retina (the resulting blindness is called "cortical blindness" to distinguish it from much more common peripherally caused forms of impaired vision).

Fig. 4.3 More brain regions

Thus, more or less damage in V1 will cause a greater or lesser zone of
blindness, or scotoma, in the patient's visual field (on the side opposite to the damage
in accord with the twisted wiring of the brain). Seemingly, it is obviously true that
if the first link in the chain of neural processing necessary for seeing is broken, no
vision can result. Nonetheless, people with blindsight can respond to visual stimuli.
For example, if a bright light is flashed in the blind field, a blindsighted subject will
deny seeing anything but can nonetheless guess with extremely high accuracy the
location of the flash.
Much of what we know about the neural causes of blindsight comes from studies
on monkeys, who possess a visual system very similar to our own and can have
blindsight intentionally imposed upon them (see Stoerig and Cowey 1997). Monkeys
with one or both of their visual cortices excised give clear indications of blindsight.
It does seem however that monkeys do much better than humans at adapting to the
condition so as to use their blindsight in everyday activity. It is sometimes hard to tell
the behavior of a monkey fully blindsighted by removal of the visual cortices of both
hemispheres from the behavior of a normally sighted monkey (see Humphrey 1984,
Chapter 3; Stoerig and Cowey 1997, pp. 549 ff.). Perhaps this is only because of the
extremely extensive and rigorous training to which monkeys can be subjected (for
example, Nicholas Humphrey worked at getting a blindsighted monkey named Helen
to see again for 7 years), but I wonder whether human introspective consciousness
in a way interferes with such training in humans, since it is so evidently pointless to even "try to see" when one knows that one is blind.
What is more, it is possible for information presented to the blind field to influence
conscious thought. In an experiment somewhat reminiscent of the split brain research
noted above, a blindsighted subject hears (and typically also sees in their non-blind
visual field) an ambiguous word, say "bank", immediately following solely visual presentation of a disambiguating clue, say "fish", to their blind field. Such cueing
seems to affect the interpretation of the word that is consciously heard despite the
apparent (to the subject) complete unconsciousness of the cue (see Marcel 1998).4
Subjects are as perplexed about their own abilities as anyone else and, very curiously,
never begin to use their blindsight intentionally even though this would seem to
involve only forcing themselves to guess about what is around them. They strongly
report that they see nothing in their blind field; one subject, when asked how he
could explain his ability to guess correctly the existence or non-existence of a target
replied that perhaps he saw "a different kind of nothing" (Stoerig and Cowey 1997,
p. 551).5 The ability to access, in any way whatsoever, visual information without
consciousness seems very strange, probably because our richest and perhaps most
vividly conscious sensory experiences are those of colour, form and observed motion.
But, on the other hand, we know that many things influence our behaviour without
our being conscious of them and, in light of the many other connections in the brain
that deliver retinal information perhaps we should not be so surprised at blindsight.
One hypothesis about how this works is the dual path theory of David Milner and
Melvyn Goodale (2006), which asserts that conscious visual perception involves a set of ventral neural pathways but that another, dorsal, set of pathways provides additional, or at least other, visual information processing as well. Oversimplifying, blindsight is the result of degradation in the ventral (perceptual) path with preservation of the dorsal path. It
is hardly surprising that the working brain would have a host of interactive systems
which could be selectively lost or damaged without a total loss of function. Milner
and Goodale provide a fascinating account of the blindsight abilities of a human
victim of a tragic natural experiment in Goodale and Milner (2004).

4.4 Neuroeconomics

Despite the features of global communication characteristic of the brain, it is thus evident that we can, albeit crudely, mark out some gross functions associated with
the main anatomical regions. The frontal lobe (or lobes, since they come in pairs, one
for each hemisphere) subserves various high-level functions: thinking, deciding,
integration of memory, thought, emotion and value. Recently, much attention has
been devoted to this last topic because of its corrective perspective on the role of
emotions in rational thought. Far from being unsettling distractions interfering with
the dispassionate assessment of our plans and goals, emotional responses are crucial
elements of a coherent, and indeed rational, life. The famous, and terrible, story
of Phineas Gage well illustrates the absolute need for emotional engagement with
thought and planning. In 1848, a railway construction accident severely damaged
a large section of Gage's left frontal cortex (prefrontal in fact, just in front of the temporal/frontal junction): a tamping rod, almost four feet long and more than an inch in diameter at the wide end, was driven right through his brain by a prematurely
exploding blasting charge! Gage did not lose consciousness at the scene of the
accident and somehow survived the inevitable massive infection. But, although he
was left in apparently full possession of his cognitive faculties, his personality was
utterly changed, and most definitely for the worse. A previously steadfast, depend-
able, staid and reserved man, Gage seemed to lose control of himself and could never
again hold down a steady job or live a normal life. In Antonio Damasio's (Damasio 1994) examination of this incident, from a stomach-churning account of the accident itself to its pathetic and unhappy outcome, a strong case is made that Gage's fundamental problem was the destruction, via physical damage to a specifiable region of the brain, of appropriate emotional response to and/or evaluation of ordinary life
situations.6
There is no claim here that the frontal cortex is the seat of the emotions. A great
number of brain systems, most of them evolutionarily older and more primitive
than the cortex, are involved, and involved in different ways, in many distinct kinds
of emotions and in our overall emotional consciousness. Gage's problem is more a
problem of integration of emotional response with what might be called the value of
the event triggering that response. It is common for people with damage similar to
Gage's to be unable to make the most trivial decisions; they will dither and endlessly
ponder the pros and cons of, say, whether to leave by the front or back door. Of course,
such value is relative to the situation in which one finds oneself. The man in "The Lady or the Tiger" could not really be faulted if he gave in to hesitant deliberation
over which door to choose (especially if he had thought a little deeper about the
situation and its emotional as well as motivational freight), and while exasperated
parents may not be able to understand how their teenage daughter can spend an hour
deciding which dress to wear to a non-event, this is evidently no insignificant matter
to her.
The significance of an event is gauged by our emotional response to it. Such
responses frequently occur entirely within the realm of contemplation as evidenced
by our bouts of fretting, wondering or worrying. The mythical purely rational being,
untroubled by any emotions whatsoever, would end up doing absolutely nothing
since nothing would matter to it. Thus, the original Star Trek's Mr. Spock is revealed
to be full of emotions, though generally of a rather high-minded sort: loyalty, love
of knowledge and justice, and the like. Without these, motivation would be utterly
lacking and he would never even bother to show up for duty.
Nonetheless, our culture's deep sense of the conflict between reason and emotion
is not motiveless. One area of life in which rationality and emotion are very closely
intertwined is finance and here no less than anywhere else the constitutive role of
the neural substrate of cognition is currently generating lots of data and even some
public notice. The nascent field of neuroeconomics is now popular front-page news, as the article entitled simply "Mind Reading" (via fMRI of course) in the July 5, 2004 issue of Newsweek attests (see Glimcher 2004 for more background). It is now quite
possible to witness the brain basis of the perennial conflict between cold rationality
and hot emotion in the arena of monetary exchange.
In the so-called ultimatum game, two participants, A and B say, play under these
rules: A will be offered a sum of money and A must decide how much of this initial
sum to offer to B. B can take it or leave it, but if B refuses As offer then neither
player receives anything. So, the obviously rational thing for A to do is to offer B
some minimal amount (perhaps some amount that can be regarded as effectively
non-zero if we suppose that some amounts are so small they aren't even worth
thinking about). And it is equally obvious that the rational thing for B to do is to
accept whatever A offers, since, surely, something is better than nothing. What is
frequently found however is that unless A makes a pretty substantial offer, B will
cut off his nose to spite his face. B will walk away with nothing but the satisfaction
of knowing that A is being punished for making a lowball offer. This is true even
in one-shot encounters where there is no prospect of training one's opponent to
share fairly. Sanfey et al. (2003) have used fMRI to reveal the neural dynamics
underlying this theoretically odd, but deeply human, behavior. Remarkably consilient
with Damasio's account, two brain areas are especially active during the ultimatum
game, the prefrontal cortex and another area known to be associated with emotional
response, particularly negative emotions such as disgust: the insular cortex (to be discussed in more detail below). The more active the insula, the more likely player B is to turn down A's lowball offer. Those of a phrenological bent might exclaim: "here is the organ for righteous indignation!"7
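The structure of the game, and the gap between the textbook-rational strategy and observed human play, fits in a few lines of Python. The fairness threshold below is an illustrative assumption standing in for the insula-driven response, not a parameter from Sanfey et al. (2003):

STAKE = 10.0  # the sum offered to player A (assumed)

def rational_responder(offer):
    # Accepts any positive amount: something is better than nothing.
    return offer > 0

def humanlike_responder(offer, threshold=0.3):
    # Rejects "lowball" offers below an assumed fairness threshold of 30%.
    return offer >= threshold * STAKE

def play(offer, responder):
    # Returns (A's payoff, B's payoff); a rejection leaves both with nothing.
    if responder(offer):
        return STAKE - offer, offer
    return 0.0, 0.0

for offer in (1.0, 2.0, 3.0, 5.0):
    print(offer, play(offer, rational_responder), play(offer, humanlike_responder))

Against the human-like responder, A's payoff-maximizing offer is no longer the minimal one, which is one way of seeing why a disposition to righteous indignation can pay in the long run.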
It is tempting to speculate about the origin of this seemingly innate irrationality
and there is no shortage of Darwinian tale spinning from evolutionary psychologists
(see Pinker 1999 for a compendious and entertaining introduction to this field; for a
collection of important scholarly articles see Buss 2005). It is an easy game to play.
Back in the EEA (that is, the environment of evolutionary adaptedness, presumably
the African grassland of half a million years ago or so) we all lived in small and
highly socialized tribes in which everybody knew everybody and must have faced
many daily encounters of at least a quasi economic nature. A nicely tuned sense of
disgust towards unfair exchange would serve us well so long as sharing would lead
to greater reproductive success overall, which surely it would under conditions of
rapidly shifting resource ownership in hunting and gathering societies. Of course,
judging fairness requires no less rational acumen than sharp practice so the need
for rationality is not at all lessened in a tribe of fair sharers. And, of course, under
conditions of repeated play, it is rational for us to punish non-sharers if that increases
the chances they will share the next time we play. Players have to be smart enough to
figure all this out however. We might also expect that a certain amount of random
cheating could be, more or less infrequently, advantageous as well, thus encouraging
a certain sly deviousness that we can still recognize in modern humans. We could
even invoke the mighty powers of sexual selection with the hypothesis that women, being more socially astute than men, would favor those men given to fair dealing
and hence help fix the genetic basis for the brain dynamics Sanfey et al. (2003)
observed in action. (Of course, I am just making this all up as I go; for a powerful
criticism of the whole project of evolutionary psychology see Buller 2006.)
In any event, it is interesting to observe that chimpanzees are, in this regard,
rather more rational, in the blinkered purely economic sense, than human beings.
It is possible to train chimps to play an analogue of the ultimatum game. Amongst
chimpanzees, the dispenser in the game tends to fork out a very small amount, and
the receiver strongly tends to take whatever is on offer (see Jensen et al. 2007).
This may well reflect the cognitive sophistication, possessed by humans but lack-
ing in chimpanzees, necessary for the complex interactions characteristic of social
exchange. Either that, or chimps just see things more clearly than we do. It would be
very interesting to compare the neural processes occurring in the apes as compared
to those reported by Sanfey et al. (2003).

4.5 Language and Recognition

Of course, the human trait which most strongly contributes to our sociability is lan-
guage and the links between brain and language have been long known. It is highly
suggestive that this newest addition to the capabilities of animals on Earth and the
most distinctively human trait has a neurological foundation which seems to be
unusually localized in important respects. Since the nineteenth century, two areas
have been identified as crucial for linguistic cognition and behavior. One, Broca's
area, resides (almost always) at the rear of the left frontal lobe (see Fig. 4.3). It seems
to specialize in the sequencing of speech into grammatical and fluid forms.8 The
characteristic syndrome caused by damage to this area, Broca's aphasia, is difficult, halting, ungrammatical speech with much circumlocution, but with considerable
retention of basic comprehension. Another sort of aphasia, and its associated region
of the brain, is also named after a pioneer of neuroscience, Carl Wernicke. A kind of
inverse of Broca's aphasia, victims of Wernicke's aphasia (or "fluent aphasia") retain
easy and relatively grammatical speech, but their talk is utterly senseless.
Consider the Cookie Theft Picture used in the Boston Diagnostic Aphasia Exam-
ination. This picture, readily viewable on the internet, presents a domestic scene with
some emotionally intense content. The picture shows a mother calmly washing the
dishes while water pours out of the overflowing sink in front of her and her children
steal some cookies from the cupboard, with one child clearly about to fall off the
stool he stood on to get access to the cookie jar. Here are two commentaries on the
picture (from Avrutin 2001):
[1] B.L.: Wife is dry dishes. Water down! Oh boy! Okay Awright. Okay ... Cookie is down ... fall, and girl, okay, girl ... boy ... um ... Examiner: What is the boy doing? B.L.: Cookie is ... um ... catch. Examiner: Who is getting the cookies? B.L.: Girl, girl. Examiner: Who is about to fall down? B.L.: Boy ... fall down!
[2] H.W.: First of all this is falling down, just about, and is gonna fall down and they're both getting something to eat but the trouble is this is gonna let go and they're both gonna fall down but already then I can't see well enough but I believe that either she or will have some food that's not good for you and she's to get some for her too and that you get it and you shouldn't get it there because they shouldn't go up there and get it unless you tell them that they could have it. And so this is falling down and for sure there's one they're going to have for food and, and didn't come out right, the uh, the stuff that's uh, good for, it's not good for you but it, but you love it, um mum mum (smacks lips) and that so they've see that, I can't see whether it's in there or not. Examiner: Yes, that's not real clear. What do you think she's doing? H.W.: But, oh, I know. She's waiting for this! Examiner: No, I meant right here with her hand, right where you can't figure out what she's doing with that hand. H.W.: Oh, I think she's saying I want two or three, I want one, I think, I think so, and so, so she's gonna get this one for sure it's gonna fall down there or whatever, she's gonna get that one and, and there, he's gonna get one himself or more, it all depends with this when they fall down and when it falls down there's no problem, all they got to do is fix it and go right back up and get some more.

I think the reader will have no difficulty in making the diagnosis. The familiarity
of language localization makes it easy for us to forget the remarkable fact that one
could use such commentaries to predict with high accuracy the general location of
physical damage in the brain.
As we extend our ability to monitor the cognitively active brain our knowledge
of such localization is growing rapidly. The temporal lobe of the brain, in which
Wernicke's area resides, is also the residence of the so-called auditory cortex, devoted
to various relatively high-level aspects of audition. It is an interesting question
what this devotion consists in however, which is relevant to our project here. It
seems very unlikely that brute location has anything special to do with audition and
altogether more likely that some features of cortical structure or organization are what
is crucial. Some remarkable work carried out in the laboratory of the neuroscientist
Mriganka Sur (see Sur et al. 2000) reveals both the amazing plasticity of the brain
and reinforces the ideas both of localization of function and some kind of intimate
connection between neural organization and mentality.9 Exploiting the fact that in
ferrets the neural pathways from retina to brain develop largely after birth, Sur and
his colleagues managed to induce newborn ferrets' optic nerves to project to the
auditory cortex.
Astonishingly, in the face of visual stimulation and typical environmental inter-
action, this part of the ferret's brain took on an organizational structure very similar
to that of the normal visual cortex. Sur et al. also showed that mature rewired ferrets
would respond to visual stimuli received in the revamped auditory cortex just as stan-
dardly wired ferrets had been trained to respond to visual stimuli processed by the
normal visual cortex. Despite the very unusual, and probably unique in mammalian
natural history, neural substrate involved it is very tempting to endorse the idea that
these ferrets enjoy visual experience with genuine visual phenomenology (at least
insofar as ferrets possess any conscious phenomenology at all, a thesis I am prepared
to accept).10
In humans, parts of the temporal lobes are specialized for tightly constrained
recognition tasks. For example, an area at the junction of the right temporal and
occipital lobes seems to be necessary for the identification of faces and damage
to it leaves people utterly unable to recognize anyone by face alone.11 They retain
the ability to see faces, of course, but there is no longer anything distinctive about
particular faces. The condition is known as prosopagnosia and forces sufferers to
devise intricate alternative strategies of recognition, which might depend upon the
way someone walks, the shape of their legs in a particular sort of clothing (such
as jeans) or some other distinctive feature (for a fascinating first person account of
prosopagnosia see Bill Choisser's web page (Choisser 2007); for a striking display
of the specialization of face recognition see Goodale and Milner (2004), plate 5,
discussed on pp. 57ff.).
Another area deep within the temporal lobe (in both hemispheres this time) is
dedicated to places and, among other things, houses. As with the story of Jack and
Jill, it is possible to observe these areas in action with fMRI scans. A new twist to
this now familiar story is that it is possible to monitor brain activity in a way that
discriminates what is conscious from what is not. This works by exploiting binocular
rivalry, which is the rather uncommon experience of consciousness-switching that
occurs when each eye is presented with distinct inputs (if you have a telescope, or
some binoculars, at hand, you can experience rivalry by keeping one eye open while
the other looks through the scope (or through just one eyepiece of the binoculars)).
In a very elegant experiment, Tong et al. (1998) (see also Kanwisher 2001) presented
to subjects in an fMRI chamber dual visual stimuli: houses to one eye, faces to the
other.
Binocular rivalry results in the subject's consciousness flipping fairly rapidly,
every ten seconds or so, between the two presentations. The fMRI scansafter the
usual collection and statistical reduction of the dataclearly reveal the alteration
in consciousness. Every few seconds the face area lights up, then fades down as
activation in the place/house area takes its turn. While current imaging techniques are generally far too crude to allow real-time observations of this effect12 (it takes intensive after-the-fact analysis of the fMRI signals to reliably spot the alternating activation), there can be little doubt that with time it will be possible to perform the
basic mind-reading involved in knowing whether someone is conscious of a face
or not. Perhaps someday employees will be monitored to verify that they are not
spending too much time daydreaming about their lovers. Or, more optimistically if
less plausibly, perhaps the skull will be sanctified as a strict boundary of privacy
through which no agency can legally intrude.
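The after-the-fact analysis gestured at above can be caricatured in a few lines. The data here are synthetic and the region labels and timings merely illustrative; this is a sketch of the decoding logic, not of Tong et al.'s actual method:

import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300)                  # one sample per second, five minutes
face_dominant = (t // 10) % 2 == 0  # assume the percept flips every ten seconds

# Synthetic timecourses: each region responds when "its" percept dominates,
# buried in noise comparable to the signal itself.
face_area = face_dominant * 1.0 + rng.normal(0, 0.8, t.size)
house_area = (~face_dominant) * 1.0 + rng.normal(0, 0.8, t.size)

def smooth(x, w=5):
    # A moving average stands in for the real statistical reduction of the data.
    return np.convolve(x, np.ones(w) / w, mode="same")

# Decode the conscious percept as whichever smoothed signal is stronger.
decoded_face = smooth(face_area) > smooth(house_area)
print(f"decoding accuracy: {(decoded_face == face_dominant).mean():.0%}")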
An important qualification needs to be emphasized once again. Although I think
it is true that we could detect what a person is conscious of via possible extensions of
the brain monitoring techniques discussed here, there is no implication that the seat
of consciousness of faces, for example, is the face area of the brain. As I have already
tried to stress, consciousness is more likely a phenomenon requiring integration of
activity across a great many brain systems.13

4.6 Temporal Lobe Religiosity

The temporal lobes also seem to be implicated in a much more bizarre phenomenon
for which there is no common label. We might very loosely, and unfortunately rather phrenologically, call it "religiosity". There is evidence that unusual stimulation of the
temporal lobes results in feelings of supernatural presence, mystical communion
and the like. Since this is a highly distinctive and significant state of consciousness, it
is intensely interesting to find even the first hints of a neurological foundation for it.
Victims of temporal lobe epilepsy have sometimes noted that immediately prior to a seizure there comes, as a sure sign of one impending, a strange, powerful and wonderful feeling. The most famous and articulate sufferer was Fyodor Dostoevsky, who
presented the condition occasionally, but significantly, in his fiction. In The Idiot
Dostoevsky has Prince Myshkin meditate upon his own affliction of epilepsy:
there was a moment or two in his epileptic condition almost before the fit itself ... when suddenly amid the sadness, spiritual darkness and depression, his brain seemed to catch fire ... his sensation of being alive and his awareness increased tenfold at those moments which flashed by like lightning. ... all his agitation, all his doubts and worries, seemed composed in a twinkling, culminating in a great calm, full of serene and harmonious joy and hope, full of understanding ... He often said to himself that all those gleams and flashes of the highest awareness ... were nothing but disease, a departure from the normal condition. And yet he arrived at last at the paradoxical conclusion: what if it is a disease ... what does it matter that it is an abnormal tension ... Those moments were ... an intense heightening of awareness ... and at the same time ... the most direct sensation of one's own existence ... If in that second, that is to say the last conscious moment before the fit, he had time to say to himself, consciously and clearly, "Yes, I could give my whole life for this moment", then this moment by itself was, of course, worth the whole of life (Dostoevsky 1869/1955, pp. 243–244)

The link to religious feeling and mysticism is clear, the disease playing a curiously
redemptive role.
It is perhaps possible to stimulate something like this mystical sense artificially.
As explored in a lengthy series of articles, Dr. Michael Persinger and a number of
co-researchers (see Persinger 1983 for an early example) have devised a machine that,
he claims, can induce mystical or religious experience. The machine operates by
immersing the brain in a feeble but complex dynamic magnetic field which mim-
ics and interacts with the natural electromagnetic activity of the temporal lobes, a
technique generally referred to as transcranial magnetic stimulation (TMS).
TMS is another beautiful anchor point which links basic physical properties and
processes to the high-level phenomena of mental functioning and consciousness. TMS
operates by electromagnetic induction: a rapidly changing magnetic field will induce
an electric field which affects the way neurons generate their output signals, thus affecting neural function (generally by promoting neural activity) and, via a cascade of emergent processes, eventually altering the subject's mental state. Examples of the effect of TMS include interference with judgments of moral responsibility (Young et al. 2010)14 and the temporary creation of a kind of Broca's aphasia (Stewart et al. 2001).15
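For the physically minded, the induction step can be made quantitative. Faraday's law,

\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},

implies that a rapidly switched coil field induces an electric field in the tissue beneath. With representative figures for a standard TMS stimulator (these numbers are illustrative, and certainly do not describe Persinger's far weaker device), a field rising to B \approx 1.5 T in \Delta t \approx 100 μs over a cortical region of radius r \approx 2 cm gives, very roughly,

E \approx \frac{r}{2} \cdot \frac{\Delta B}{\Delta t} \approx \frac{0.01 \times 1.5}{10^{-4}} \approx 150 \text{ V/m},

which is ample to perturb the membrane potentials of neurons and so to trigger or suppress their firing.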
Persinger's work purports to involve the more outré mental state of mystical
communion. Wearing a rather bizarre looking modified motorcycle helmet outfitted
with a set of solenoids, a subject is settled comfortably in a darkened room while
Persinger's device plays a subtle symphony of magnetic fields designed to resonate
with and hopefully amplify certain patterns of activity within the temporal lobe.
The machine is reputed to have weird effects. After fifteen minutes of exposure or
more, subjects often report a very strong sense of presence, as if an unseen entity
was with them in the experimental chamber (such a feeling of presence is highly
typical of mystical experience). The well known consciousness researcher Susan
Blackmore, a psychologist noted for her debunking of claims of the paranormal, was
terrified by Persinger's machine and reported feeling "something get hold of my leg and pull it, distort it, and drag it up the wall ... Totally out of the blue, but intensely and vividly, I suddenly felt anger ... Later, it was replaced by an equally sudden attack of fear" (Blackmore 1994). Despite her experience, I have the impression that those
of most subjects is not unpleasant, even if disturbing, but rather somehow especially
significant. The effectiveness of this "experience machine" is also highly dependent
upon the subject. A Canadian journalist did not have such good luck as Blackmore,
apparently because his temporal lobes are insufficiently sensitive or labile, but even
he managed a minor experience (see Hercz 2002).
Some researchers, including Persinger, would like to attribute our susceptibility to mystical and supernatural experiences, and hence the associated beliefs, to inappropriate activation of the temporal lobe (perhaps brought about by local anomalies in the Earth's magnetic field). Famous haunted houses might become scientifically
explicable as zones of anomalous geomagnetic fields. In a more sinister and, if this
is possible, in a still more speculative vein, Persinger has written a warning that the
power levels required for inducing the requisite brain activity are not particularly
high, leading to the possibility of large scale remote manipulation of our brains (see
Persinger 1995, with the Strangelovian title "Electromagnetic Remote Control of Every Human Brain"). The paranoid sci-fi fantasy of satellite mind control creeps
into sight. After all this, it is only fair to state that the sole attempt at a fully indepen-
dent replication of Persingers results failed miserably, with the authors opining that
the "god helmet" works by the good old psychological mechanism of suggestibil-
ity rather than direct transcranial magnetic manipulation of the temporal lobe (see
Granqvist et al. 2005).
An obvious and intriguing (if somewhat irrelevant to the problem of consciousness) philosophical question here is whether such research ought to under-
mine religious belief.16 A surprising number of people who have discussed work like
that of Persinger unite in denying any such implication, preferring to believe that
the physical causes of these experiences are irrelevant to their status as evidence.
After all, the possession of a detailed neurophysiological account of visual percep-
tion which allowed for the technical ability to generate visual hallucinations would
hardly suggest that we are all deluded in trusting our eyes. In fact, perhaps one could
turn discoveries such as Persinger's around, to make them stand as positive evidence
for religious belief. The argument would go like this. There is no Darwinian explana-
tion of why we should be able to sense divine presence or have mystical experiences,
since these have little positive bearing upon survival and reproduction (in fact, rather
the opposite, if the legendary solitary and anti-social life of the mystic is anything
like a typical result of the divine vision) and thus the fact that we possess such a
sense at all is like a maker's mark, a sign that this faculty is telling us something
important about the world.
On the other hand, it seems to be a feature of mystical experience that it is best
obtained by creating decidedly abnormal states, e.g. by lengthy fasting, long periods
of isolation, etc. which might of course lead one to suspect that mystical revelation is
more akin to hallucination than perception. It could be replied that exactly because
of its potential for anti-Darwinian results the divine sense is and should be difficult
to engage.
But there is a philosophically more fundamental problem here. In order for this
divine sense to operate there must be stimulation of certain regions of the temporal
lobe. This is, so far, analogous to visual perception: in order for visual perception
to occur there must be stimulation of regions of the occipital lobe. The difference
between veridical visual perception and mere illusion or hallucination is the cor-
related source of these stimulations. When I see a pink elephant, the ultimate test
of whether I am hallucinating or not is whether a pink elephant is in fact causing
the activity in my visual cortex. Similarly, to tell whether the God-sense is veridi-
cal or not we have to look at the cause of the sense of divine presence or mystical
experience. It is abundantly clear that these causes are going to be perfectly natural:
perhaps unusual geomagnetic activity, perhaps Persingers machine, perhaps, most
often, internally generated stimulation of the temporal lobe due to some form of
physical or psychological stress. If Susan Blackmore feels a strange invisible pres-
ence tugging at her leg while wearing Persingers helmet, we are not much inclined
to count this as evidence of invisible leg-pullers, and one reason is that we have
an obvious alternative cause of this bizarre experience ready to hand. In every case
of mystical experience mediated by temporal lobe function, we will have such an
alternative explanation because there has to be something in the natural world which
sparks the anomalous activity of the temporal lobe.
As in our earlier discussion, this is not an a priori knowable fact about the world.
It is always conceivable that some cases of temporal lobe excitation will have no link
to the rest of the natural world, that no explanation in terms of local conditions (either
within the brain or in the state of enveloping electromagnetic fields) will be possible.
But the very possibility of Persingers machine suggests otherwise. It suggests that
here we have another anchor point connecting quite basic physical features of the
world, in this case electromagnetic fields and the electro-physiology of neurons, to
the very high level feature of the state of consciousness which constitutes the mystical
experience.

4.7 Limbic Tone

And, what is more, we know that temporal lobe disturbances can cause peculiar
disorders of consciousness which no one could attribute to the creator. One of the
most interesting of these also reemphasizes the importance of the interaction of
various regions of the brain, and moves us into a new region of the brain. Deep
within and surrounded by the cerebral cortex lies a set of structures sometimes called,
since it seems to have appeared only with the rise of the mammals, the "mammalian brain", or the (core part of the) limbic system (a rather vaguely defined functional unit
also encompassing certain parts of the cerebral cortex, occasionally also called the
limbic lobe). It includes the hippocampus, thalamus, hypothalamus and amygdala,
all perched on top of and somewhat enveloping the brain stem while being in turn
completely surrounded by the cerebrum (see Fig. 4.4).
The limbic system serves a variety of fundamental functions, particularly in the
assignment of emotional tone or value to experiences (as well as the control of
glandular response through hormone release) and is crucial for the formation of long-term memory.

Fig. 4.4 Limbic system (image courtesy NIH)

Intuitively, of course, there is a very close connection between these
aspects of cognition: emotionally intense experiences are hard to forget17 and have
unmistakably strong physical effects on the rest of the body. It is part of the evo-
lutionary function of emotion to tell us what is important and hence what to learn.
As with any aspect of the brains functioning it is safe to say that the interconnected
operations of the components of the limbic system are at best imperfectly under-
stood. It seems that the thalamus serves as a kind of gatekeeper and clearing house,
shuttling important signals to other parts of the brain for analysis and assessment (all
sensory signals, save those of smell, pass through the thalamus). The hippocampus
(despite its Latin name only very slightly like a seahorse in appearance) is crucial
for the formation of memories, though not for memory storage. People unfortunate
enough to have lost or damaged their hippocampi (as usual, there are mirror image
versions of the organ on each side of the brain) simply cannot lay down any new
explicit memories.18 They do retain the ability to learn new motor skills and there are
curious indications that experience is stored in some non-explicit way, especially if
the experience is emotionally charged. Such unfortunate subjects will form aversions
to people with whom they have regularly unpleasant interactions, although they have
no idea why they have such feelings (see Damasio 1999, Chapter 2).
The almond-shaped amygdala, which resides just in front of the hippocampus, is
notably focused on the assignment of negative value to experiences and serves an
especially distinctive function in assessing stimuli as fearful. Subjects with damaged
amygdala show such symptoms as a lack of appropriate response to fearful stimuli,
a marked lack of the proper reserve in dealings with strangers and even the inability
to recognize expressions of fear in others (something at which we are normally extremely swift and good; in fact, fearful faces appear to be among the most
salient of emotional displays, see Yang et al. 2007).
The interaction of the temporal lobes' recognitional abilities with the limbic system's function of emotional assessment leads to a very bizarre disorder of conscious-
ness, known as Capgras syndrome (first reported as a distinctive condition in Cap-
gras and Reboul-Lachaux 1923). Sufferers fall under the delusion that those closest to them (spouses, parents, sometimes even pets) have been replaced by impostors
(as it may be evil twins, clever actors, disguised aliens, robots). A plausible account
of Capgras syndrome has been offered by H. Ellis and A. Young (1990); see also
Stone and Young (1997), which exploits the diversity of function within the brain
and the attendant possibility that complex states of consciousness require these
functions to interact in particular ways. In this case, Ellis and Young endorse an
idea in Bauer (1984) which posits dual information pathways in the brain, a cortical
path underpinning conscious recognition by similarity and a second path involving
limbic structures which adds emotional recognition (note the conceptual similarity
of this suggestion to the explication of blindsight of Milner and Goodale discussed
above). Thus we have the functions of recognition (especially, as we shall see, facial
recognition), emotional response as a component of recognition and the integration
and assessment of that emotional response into appropriate feelings, thoughts and
ultimately beliefs.
It seems that someone suffering from Capgras syndrome retains the ability to
recognize people by the standard physical clues of appearance, voice, etc. In par-
ticular, the specialized face recognition circuitry, as discussed above, remains pretty
much intact (frequently, and probably significantly, there is some impairment in the ability to recognize faces in Capgras patients). But the problem lies in
the distinct neural pathway involving the limbic system which fails to assign appro-
priate emotional tone to incoming stimuli, and hence to the presence of loved ones.
Thus the patient can see that someone who at least appears just like a loved one is
present, but the patient has an anomalous lack of appropriate conscious emotional
response. The brains attempt to integrate perceptual recognition with lack of what
might be called emotional recognition leads to the belief that the person is not really
the loved one at all, but rather some kind of impostor. Sufferers are not, of course,
inclined to say that they lack an emotional response to familiar people or animals,
rather they assert that their friends, pets and lovers look "funny" or "not right". It
seems the tendency to place fault anywhere but in ourselves is not just a feature of
normal human psychology.
Such a theory of Capgras syndrome raises some intriguing points. About con-
sciousness itself, it suggests that we experience the emotional tone of the world
around us as a part of that world, not a post-facto and relatively independent
assessment of an initial, and neutral, perceptual experience. That is, it seems that
we represent things around us as possessing emotional, or in general, value attributes
in addition to the more commonplace perceptual attributes (see Seager 2000a, 2002).
At the level of general cognition, the syndrome raises a disturbing question about
rationality, one which could be asked about many of the bizarre deficits that result
from localized brain damage. The question is why are the victims of this sort of dam-
age prone to formulating (or, as it is frequently labeled, "confabulating"19) such wild
hypotheses to account for their strange feelings, and why do they end up actually
accepting these hypotheses? I think one might naively be inclined to suppose that if,
for example, a patient was told that they had Capgras syndrome (and, let us say, they
were also provided with good case studies of other instances of the problem, and as
comprehensive a neurological account of it as we now possess) then the patient ought
to accept that diagnosis and thence give up the wild belief that a set of impostors
has been implanted in their home. While this purely intellectual acceptance likely
would not restore the emotional response, it would, one might think, vastly improve
the lives of Capgras victims.
Now, there may be some people who sustain the kind of brain damage that results
in delusional syndromes such as Capgras who do come up with more reasonable
hypotheses. We do not hear about such cases in the literature. Of the cases we do read
about, there is little indication that the delusion can be cured by cognitive therapy.
Why is that? There is some evidence that the processes that underlie belief fixation
themselves are disturbed in victims of ailments such as Capgras syndrome (that is,
delusional states in general). Stone and Young (1997) report that delusional people
tend to exhibit several biases in their reasoning, such as jumping to conclusions, failing to take adequate account of alternative possible explanations, failing to give sufficient weight to background beliefs, and an abnormal willingness to embrace probabilistic judgments upon unusually little evidence (they will, for example, form
an opinion as to how many of the balls in an urn of red and white balls are, say,
white more quickly than average20 ). These are all faults of reasoning that might tend
to lead delusionals toward more rather than less extravagant explanations of their
plight.
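The urn task of footnote 20 (often run with an 85:15 split of colours between two hidden urns) makes the "jumping to conclusions" bias easy to state in Bayesian terms. The split, the prior and the draws below are assumptions for illustration:

def posterior_mostly_red_urn(draws, p=0.85, prior=0.5):
    # Probability that we are drawing from the mostly-red urn,
    # given the balls seen so far, by straightforward Bayesian updating.
    like_red, like_white = prior, 1.0 - prior
    for ball in draws:
        like_red *= p if ball == "red" else 1.0 - p
        like_white *= (1.0 - p) if ball == "red" else p
    return like_red / (like_red + like_white)

for n in (1, 2, 5):
    print(n, round(posterior_mostly_red_urn(["red"] * n), 3))
# posteriors after 1, 2 and 5 red balls: 0.85, 0.97 and about 1.0

A single red ball already yields a posterior of 0.85, so committing early is not flatly irrational; the clinical finding is that delusional subjects reliably commit at the extreme early end of this curve.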
We must be careful, though, in our use of the loaded words "disturbed", "fault" and "bias". It may be that the reasoning styles of delusional people are well within the
normal range. There need be no implication that whatever organic problem caused
the delusional state had any sort of direct and detrimental effect upon the subject's
reasoning skills (although of course this may often occur). It may be, for example,
that in a case of Capgras syndrome a pre-existing style of reasoning is, as it were,
exploited by the breakdown in the emotional recognition system to generate the
characteristic bizarre delusions. The proverbial Missourian skeptic who suspiciously
demands "show me", if beset by damage to the emotional recognition pathway, might
have difficulty accepting the true identity of a spouse, despite non-sensory evidence,
the pronouncements of authorities and background knowledge, given that loved ones
simply do not seem to be who they claim and look to be.21 Still, the rarity of Capgras
syndrome along with the scattershot nature of accidental brain damage makes it quite
plausible that reasoning itself might be affected, perhaps minimally in amplifying
or emphasizing certain preexisting ways of thinking. However, this amplification
need not necessarily be the result of physical damage to or malfunction of some core
reasoning component of the brain. It could, for example, simply be the effect of a
sudden, forced greater reliance on reasoning in contexts where reason was previously
entirely unnecessary and quite out of place. One normally does not have to come
to a reasoned rejection of the hypothesis that one's spouse has been replaced by an
almost perfect replica; such hypotheses just do not arise except in exceptionally
bizarre circumstances, and/or the philosophy seminar room. The Capgras victim is
in the extreme position of persistently getting evidence, from a sensory system that
remains generally reliable, that the bizarre hypothesis is a live option.
Thus it might be expected that distinct forms of functionally localized brain dam-
age could interact with idiosyncratically varying modes of reasoning already in place
to yield an affinity for particular syndromes. Perhaps someone whose general will-
ingness to believe a novel proposition tends to depend more upon intuition, that is,
the feel of a situation and its associated emotional response, and less upon abstract
logical thinking and acceptance of external authority would be more susceptible to
Capgras syndrome. Worse, there is every reason to think that beliefs so founded could
be, paradoxically, "confirmed" by testing. Once the basic delusional hypothesis of
Capgras syndrome has suggested itself, as Stone and Young put it (rather comically
despite the tragedy of the situation), "apparently confirmatory evidence is rapidly
forthcoming, because the more carefully relatives are observed, the stranger they
seem", and they will also start acting unusually in response to the increasingly bizarre
behavior of the person now experiencing the full-blown Capgras delusion (Stone
and Young 1997, p. 344).
It is also interesting to speculate that if and insofar as reasoning styles are in part
a matter of cultural milieu (see Nisbett et al. 2001) we might also find the prevalence
of the various delusional disorders to correlate with culture in some way. Now, there
is of course a trivial dependence of delusional states upon culture insofar as forms of
knowledge, theory and indeed concepts themselves are cultural constructions. Thus
it is impossible for someone unacquainted with modern western culture to be under
the misapprehension that they are the victims of alien abduction, or that Lady Gaga is
secretly in love with them. However, it is easy to see that analogues of such delusions
could be present in any culture (delusions of demonic possession, voodoo witchcraft
and the like can readily stand in for alien kidnapping and CIA mind control).
Some delusions which it seems could or should be equally prevalent across cul-
tures are not, and these are more interesting cases. In south China, Singapore and
Malaysia is found the delusion known as Koro, the fear that one's genitals are
shrinking, which, while far from common even there, is extremely rare elsewhere. It is far
from clear that there is any kind of culturally mediated cognitive explanation for
the distribution of Koro. The idea is not unthinkable however.22 Nisbett's research
suggests that western modes of thought tend to focus upon selecting highly salient
particular features or individuals out of the environment, relatively ignoring the con-
text in which they occur. This might seem tailor made for the Capgras delusion, in
which, it appears, one super-salient fact about a highly significant individual seems
to drive the delusion. In contrast, those from Asian cultures (and it is the culture that
matters here, not the genes: Japanese Americans reason as westerners, according to
Nisbett) reportedly pay a lot more attention to the context of an event and the rela-
tions between the elements of the environment. Perhaps this would make Capgras
less likely in Asian cultures. I have no idea, and since according to Nisbett part of
the Asian way of thought is a greater "tolerance of contradiction", Capgras could
sneak back in. It would be interesting to see if Capgras syndrome does correlate with
culture in any way.
4.8 Cingulate Consciousness

It is past time to return to our tour of the brain. Roughly speaking, the inner regions
of the cerebral cortices, surrounding what I have called the limbic system (although
the nearby cortical regions are often included within the limbic system) are involved
with the integration of sensory information, emotional response and thought. One
broad area, the cingulate cortex (which lies within the fold between the two cerebral
hemispheres just above the corpus callosum), seems to bear a special relation to
consciousness. For example, without a properly functioning cingulate, pain loses its
affect, though it can still be felt. Such a loss of affect, the felt significance of thoughts,
perceptions and sensations, resulting from damage to parts of the frontal lobe has
been noted for some time. Many of the unfortunate victims of mid-twentieth century
attempts at psychosurgery, those insulted by the removal, destruction or disconnec-
tion of the prefrontal lobes (for which the inventor, Antonio Moniz, received a
Nobel prize in 194923) suffered a general loss of affect and would sometimes report
that pain no longer bothered them (by and large, very little either bothered or pleased
them). Cingulectomy, a closely focused and highly constrained descendant of the
lobotomy aimed at the cingulate cortices (see Fig. 4.2), remains an option of last
resort for severe intractable pain but is now a very precise and carefully circum-
scribed operation. When this operation succeeds patients can still feel their pain, but
it no longer hurts.
The awfulness of pain can even be brought under a kind of mental control. A
remarkable study (Rainville et al. 1997) employed hypnotic suggestion to reduce
(and also to increase) the unpleasantness of the pain of putting one's hand in very
hot water. But that is not the remarkable thing about this experiment. Subjects were
asked to put a hand in water of about 47 °C (117 °F) and rate both the intensity and
the unpleasantness of the pain on a scale from 0 to 100 (for the sake of comparison,
the average ratings for unhypnotized control subjects were around 78 for intensity
and 61 for unpleasantness). Subjects were then given hypnotic suggestions to either
increase or decrease the unpleasantness of the pain. The unpleasantness ratings of the
pain changed significantly (ratings were now an average of 81 and 45 respectively).
What is remarkable is the brain confirmation of the changed ratings and the precise
localization of the change in brain activation corresponding to increased or decreased
unpleasantness. Using PET scans, the study clearly shows greatly increased activity
in the anterior cingulate cortex upon hypnotic suggestion to increase unpleasantness,
and markedly decreased activity in the very same region given the suggestion to
decrease unpleasantness.
The cingulate cortex is by no means merely or solely devoted to registering the
affect of pain. It is involved in a number of tasks related, roughly speaking, to the mon-
itoring and assessment of internal states: visceral sensations, feelings (of which pain
is of course an especially important representative) and emotions. Nor is its function-
ing invariably associated with consciousness. Sleepwalkers show unusual activity in
the cingulate cortex during their somnambulism (in which consciousness, even the
kind associated with dreaming, is usually absent) but it is interesting that episodes of
sleepwalking go with activation of the posterior cingulate, whereas conscious states
and processes are associated with activation of the anterior cingulate (for example,
the kind of sleep that is correlated with dream states, the so-called rapid eye move-
ment (REM) sleep, coincides with a marked increase in anterior cingulate activity;
see Braun et al. 1997). It has been postulated that the anterior and posterior por-
tions of the cingulate cortex are functionally differentiated, with the posterior part
playing a more passive, monitoring role while the anterior takes on a more active
executive function (perhaps we can see here a connection between activation of
the anterior cingulate and the imperative motivation of pain).24 Let me remind the
reader yet again that this is not to be taken as a claim that the seat of consciousness
(or even that of conscious painfulness or other intrinsically attention demanding
experiences) has been found in the anterior cingulate cortex. My view is that trying
to localize consciousness is as fundamentally mistaken as trying to localize a hurri-
cane's destructiveness at some definite zone within its swirling winds, or to nail down
the beauty of a painting in some particular region of it. But this does not mean that we
cannot discover that certain brain regions figure in the generation of consciousness,
or kinds of consciousness, while others do not (and, of course, we should expect
there to be gradations on a scale of significance for consciousness generation across
various brain regions or systems).
The anterior cingulate is at the top of the brain, at the base of the cleft between the
two hemispheres (the longitudinal cerebral fissure) just above the corpus callosum
(again, see Fig. 4.2). If we followed the gray matter (composed of the neurons
themselves that make up the processing units of the brain according to the neuron
doctrine) along the convoluted surface of the cortex from the cingulate upwards to
the surface of the frontal lobe and then down towards the temporal lobe we would
find a kind of fjord leading into the brain. This is the lateral sulcus or Sylvian Fissure
(marking a boundary between frontal and temporal lobes). Buried within this cleft
is the insular cortex or insula, a curious zone of tangled gyri and sulci (see Fig. 4.5).
The insula is another cortical area primarily involved in emotional response and
the generation of feelings. It has been specifically linked to recognition of facial
expressions of disgust (see Phillips et al. 1997)25 and is part of the general limbic
complex which deals with hot and disagreeable emotions, of which fear is the prime
example. But the insula also seems to play a role not frequently associated with the
cerebral cortex, namely in the monitoring and control of fundamental physiological
functions. The insula responds differentially and uniquely to changes in cardiac
rhythm and blood pressure.26

4.9 Self Representation

In the development of his general theory of personal consciousness, Damasio postu-
lates that the insula is part of the cortical structure which encodes "the most integrated
representation of the current internal state of the organism" (Damasio 1999, p. 156),
thus forming a crucial component of what Damasio calls the "proto-self". It is
Fig. 4.5 The insular cortex (image courtesy John Beal)
extremely important to stress how our value assessment of the world (including our
internal milieu), which is the basic function of the emotions, is an essential part
of our representation of the world.
Moving upwards across the brain we come to the parietal lobe and the somatosen-
sory representation that is laid out across it from top to bottom (see Fig. 4.3 above).
Readers have doubtless seen a diagram of the grotesque somatosensory homuncu-
lus who represents the amount of brain power devoted to all your body's internal
sensory channels (there are in fact a huge number of such maps in a number of
places in the brain, especially in the visual cortex representing various types and
patterns of stimulation on the retina). The distortion in the homunculus is due to the
fact that certain parts of your body are much more densely covered with sensory
receptors than others. For example, and not surprisingly, our hands are highly over-
represented compared, say, to our arms; the tongue is huge compared to the ear. The
somatosensory homunculus has its motor twin in a neural strip just in front of the
somatosensory homunculus, across the divide between the frontal and parietal lobes
(the central sulcus). The differences between the two homunculi are interesting and
intuitively reasonable, diverging exactly in those places that differ in sensory recep-
tiveness versus motor control (no doubt readers can readily think of some examples).
The somatosensory cortex is intimately related to consciousness, providing as it
does a core part of our representation of the current state of our body and its immediate
relation to the environment. There are some spectacular and instructive breakdowns
in the integration of one's awareness of one's own body caused by damage in either
the somatosensory or motor cortices. The well-known, but nonetheless bizarre, phe-
nomenon of the phantom limb is, in part, a result of a disconnection between the
somatosensory cortex and the rest of the body. When a limb is amputated it leaves
behind, at the very least, a location in the brain where sensation from it was registered
and represented. If that part of the brain is stimulated (perhaps because of stimulation
of nearby areas of the somatosensory representation, which need not be a nearby area
of the body since the physical layout of the somatosensory representation does not
fully preserve anatomical relations) it is probable that there will occur a sensation as
if in the missing limb.
The phantom limb is a beautiful example of what philosophers call the intention-
ality of sensation. Intentionality, in general, is the feature of mental states (or any
other representational state) of being about something. Pictures, words, diagrams
and computer programs are all examples of intentionality. It is a nice question what the
difference is between intentionality and merely carrying information, but there must
be such a difference because everything carries information about other things, but
very few things in the world are representations (and representations need not repre-
sent everything they carry information about). For example, the moon carries a lot of
information about the history and early conditions of the solar system but does not
represent that information. Textbooks about the moon and the solar system actually
represent those things as well as, we hope, carrying information about them, as
well as carrying information about many other things not represented at all.
Thus it seems reasonable to suppose that intentionality is to be explained as
carried information plus something else. What a representation represents is a subset
of what it carries information about, and somehow what determines something to be
a representation is what determines which subset of information is represented (or is
supposed to be represented). Presumably it is correct to say that the somatosensory
cortex really does represent the state of our body (at least the component having to do
with tactile sensation) whereas it does not represent various facts about our species'
evolutionary development (even though, I suppose, one can find out a lot about the
latter from the structure of the former). It is then very tempting to say something
along the lines of: it is the biological function of the somatosensory cortex to provide
information about the tactile state of the body, and it is having this function that makes
the states of the somatosensory cortex representational states (and, of course, it then
follows that what it is these states have the function of providing information about
naturally constrains the field of information to what is supposed to be represented).
It may be all right to give in to this temptation, but such weakness immediately leads
to the demand for an account of functions. And while there are lots of these on offer,
no one knows which, if any, is correct (see Ariew et al. 2002).
In any case, whatever account of representation (especially mental representation)
one favours, a vitally important feature of intentionality is that it is possible to rep-
resent things that do not exist. Thus the word "unicorn" represents a creature which
does not, never did and in all probability could not exist (recall that only virgins
can see unicorns, and how that is supposed to work I don't have the faintest idea).
Similarly, the phantom limb that gives pain after amputation simply does not exist
(if one could pick something up with one's phantom limb that would be different).
Nonetheless, the limb is represented as existing by the mind. This is important for
understanding consciousness. The representation of the tactile body is available to
consciousness. But it is not always, nor all of it, in consciousness. Activation of
the somatosensory cortex is not by itself sufficient for consciousness. However, that
part of the somatosensory representation which is presented in consciousness is a
representation of the body, which is represented in all the distinctive ways we have
of feeling: emotional as well as sensory. In general, I tend to endorse the idea that
consciousness just is a form of representation, of which the consciousness of the
body is but one component.27

4.10 Intentions

Another fascinating aspect of the parietal lobe is worth mentioning: a particular area
of the posterior parietal called the parietal reach region (PRR). As usual, most of what
we know about this brain region comes from invasive studies on monkeys, but there is
some fMRI work which supports the unsurprising conclusion that we share the PRR
with our simian cousins (see Connolly et al. 2003). What is especially intriguing
about the PRR is that neural activity within it seems to correspond to high-level,
occurrent (hence conscious) intentions to reach out for or point at things. The PRR
is active when one (or at least a monkey) thinks about reaching for something even
if one is in the dark and whether or not the movement is actually executed. Some
recent work involving individual neuron monitoring in monkeys (Musallam et al.
2004) reveals our growing ability to decode the mental state of a subject based upon
analysis of the neural activity of the subjects brain.
It is an interesting question how to set up an experiment to verify that you have
correctly decoded an intention that is not acted upon. In this case, the problem is
ingeniously solved by first teaching the monkeys to reach, from memory, for where
a light appeared on a display, a couple of seconds after the light has gone off. Several
neurons in the PRR are monitored while the monkeys work on this task until the neural
code is "cracked". Then the experimental paradigm is altered: reward the monkey
for reaching for where the computer predicts it will reach on the basis of neural
state. Over time quite high accuracy of prediction is attained. Such results are still
quite crude, and may suggest to the suspicious reader that the experiment more
likely indicates that monkeys are smart enough to learn how to manipulate their own
brains' activity in order to receive a reward rather than a straightforward case of mind-
reading from a naturally occurring mind-brain correlation, but there is no doubt that
linkages are being forged from neural activity to mental states. The ultimate goal is
the development of neural prosthetics by which the disabled may be able to control
computers, machines or more standard prosthetic devices via a real time decoding of
their intentions' reflections in the brain. It is striking, and more than a little sobering,
to find the authors bold enough to state: "recording thoughts from speech areas could
alleviate the use of more cumbersome letter boards and time-consuming spelling
programs, or recordings from emotion centers could provide an online indication of
a patient's emotional state" (Musallam et al. 2004, p. 258). It will not be very long
before prosthetic limbs will be directly activated by decoded neural signals (in the
more distant future such linkages will extend beyond the body to institute the long-
held dream of independence from physical instrumentality).
Be that as it may, and seeing how we have circled back in the direction of our
original lovelorn thought experiment, it is time to draw our cursory summary of
brain and consciousness to a close. We could go on in much greater depth about
the exponentially expanding knowledge of relationships between brain systems and
consciousness (for overviews see Koch 2004 or Zeman 2002) but the point should be
clear enough. In a way strikingly parallel to the modern vision of cosmology, there
is abundant evidence of the dependence of high level phenomena upon the neural
activity of the brain. Even as galaxies resolve themselves into stellar interactions
and stars resolve themselves into atomic processes, so the brain, and seemingly also
the mind, resolve themselves into the ceaseless biochemical activity of the neurons.
The linkage from the sub-atomic realm to the foundations of consciousness remains
murky and tenuous, but at the same time already richly interconnected and exceed-
ingly fruitful. As in the first chapter, at every step of the way we can easily imagine
an explanatory roadblock being thrown in our path. Instead, nature has been gen-
erous to prepared minds, steadily revealing more and more of the underpinnings of
consciousness. There is no doubt that consciousness is now within the purview of
neuroscience. When Galileo first turned his home made telescope on the heavens he
had an ontological epiphany:
What was observed by us ... is the nature or matter of the Milky Way itself which, with the
aid of the spyglass, may be observed so well that all disputes that for so many generations
have vexed philosophers are destroyed by visible certainty, and we are liberated from wordy
arguments. For the Galaxy is nothing else than a congeries of innumerable stars distributed
in clusters. (Galilei 1610/1989, p. 62)

Will some future neuroscientist, with some new kind of instrument of which we
now have but the most meager idea, have a similar experience which itself reveals
that consciousness is nothing else than ... what? The story I've told so far may make
that in some sense possible, or even make it seem a juggernaut of likelihood, but the story
thus far begins and ends with apparently non-conscious components. This means that
consciousness must be an emergent feature of the world. So, what is emergence?
Part II
Emergence and Consciousness
Chapter 5
Emergence and Cellular Automata

5.1 The Game of Life

Despite its portentous name, emergence seems to be the most natural thing in the
world. Everywhere we look we find things which possess properties which their
components or constituents lack. Trees are made of leaves, trunk and root, but none
of these things are trees in their own right. If anything about the structure of nature
seems obvious and irrefutable it is that nature possesses a hierarchical structure
within which higher level entities have properties lacked by lower level entities.
This is what I labeled synchronic emergence in Chap. 1. Nature no less obviously
appears to evolve or develop in time, so that new entities and new features appear
over time. This was labeled diachronic emergence. In the last three chapters we have
been exploring a small part of the coastline of the natural hierarchy, from the ground
floor level of the sub-atomic realm to the dizzying heights of consciousness itself.
This is a hierarchy of both synchronic and diachronic emergence and thus we have
been exploring and implicitly defending a grand system of emergence.
Although familiar and ubiquitous, there are some philosophical problems that
arise from emergence. As often happens in philosophy, what seems obvious and
unproblematic in normal use reveals some deeper mysteries when looked at more
closely. The best way to approach the puzzles of emergence is with a basic example.
My example is admittedly rather hackneyed, but that is only because it is the perfect
beginning point in the examination of emergence. The example is John Conway's
Game of Life (henceforth simply "Life") or, more generally, cellular automata.1
Life was invented by Conway in 1970 and was widely popularized by Martin Gard-
ner in some of his famous "Mathematical Games" columns in Scientific American
(see Gardner 1970). The concept of a cellular automaton (CA) goes back somewhat
further, to work of John von Neumann, Stanislaw Ulam and Arthur Burks. The sta-
tistical mechanics model of Ernst Ising in 1925 (for an account of the model and its
history see Brush 1967) is an even earlier version of the essential ideas behind the
cellular automaton, though one that seems to have had little influence on the subse-
quent development of the mathematical concept (see Hughes 2000 for an interesting
philosophical investigation of the Ising model). The most compendious and ambi-
tious exploration of CAs must surely be found in Wolfram (2002), wherein the
reader will discover that, apparently, CAs provide the answer to life, the universe
and everything. It is hard to know what to make of Wolfram's book. Some reviewers
seem to have had something like the experience once had by Dorothy Parker: "this is
not a [book] to be tossed lightly aside; it should be thrown with great force." My own
opinion is simply that it is indeed possible, although apparently unlikely, that there
could be something like a cellular automaton model of the fundamental physics of
our world. This possibility is of great significance for the topic of emergence.
The game of Life is played in an imaginary world in which space is two dimen-
sional and quantized: the world is a grid of positions, and these positions can have
but one binary property: that of being occupied or unoccupied (on or off, alive or
dead). An initial position is simply a set of occupied positions, or cells. The dynam-
ics of Life are also very simple, consisting of but three laws of nature. Here's how
Martin Gardner explained them back in 1970:
Conway's genetic laws are delightfully simple. First note that each cell of the checkerboard
(assumed to be an infinite plane) has eight neighboring cells, four adjacent orthogonally,
four adjacent diagonally. The rules are: (1) Survivals. Every counter with two or three
neighboring counters survives for the next generation. (2) Deaths. Each counter with four or
more neighbors dies (is removed) from overpopulation. Every counter with one neighbor or
none dies from isolation. (3) Births. Each empty cell adjacent to exactly three neighbors (no
more, no fewer) is a birth cell. A counter is placed on it at the next move (Gardner 1970,
p. 120).
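These three rules translate directly into a few lines of code. The following is a
minimal sketch in Python (my illustration, not Conway's or Gardner's formulation),
representing the live cells as a set of (x, y) coordinates on an unbounded grid:

from collections import Counter

def life_step(live):
    # For every cell adjacent to at least one live cell, count its live neighbors.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Births: an empty cell with exactly three live neighbors.
    # Survivals: a live cell with two or three live neighbors.
    # Every other cell is dead in the next generation.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

Iterating life_step is the entire physics of the Life world; everything discussed
below is pattern emerging from it.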

The components or denizens of a Life world are simply the cells and they
have but two possible states, alive or dead (conventionally represented by a cell
being colored or left blank respectively). But Life worlds evolve in complex and
interesting ways, exhibiting features that go far beyond the on/off nature of the
individual cells.2 This is certainly a kind of diachronic and synchronic emer-
gence. Figure 5.1 presents an example of a possible initial state. Each configu-
ration here will, so to speak, persist and re-form into an identical copy of itself
after four iterations of the rules; but the copy will be moved two squares hor-
izontally. The left configuration moves towards the right, the right configuration
then necessarily, by symmetry, moving to the left. Therefore a collision is
inevitable, the outcome of which is the birth of two instances of one of the more
famous denizens of the Life world, called gliders. After 19 iterations we are left with
the pattern shown in Fig. 5.2.
The two gliders are also 4-iteration persisting patterns and will move off diagonally
to the northwest and northeast. Figure 5.3 presents a much more complex initial state.
It would be a tedious business to calculate by hand how this will evolve, but, as
mentioned in note 1 above, there are many computer implementations of Life. Using
one would reveal that this pattern is intrinsically stable with a period of 246 time ticks
but in addition steadily gives birth (once every 246 ticks) to gliders which depart to
the northwest, as illustrated in Fig. 5.4. The circled glider is proceeding off the grid to
the upper left and another one will eventually be spawned in the central interaction
zone only to follow the first to infinity. Other initial patterns will absorb gliders,
Fig. 5.1 Possible Life configuration
Fig. 5.2 Two gliders
Fig. 5.3 More complex pattern
or reflect them. And there are many other well-known Life creatures (e.g. glider
guns such as we have seen that generate gliders, puffer trains and a whole host of
different kinds of guns generating all sorts of glider-like configurations). All such
features meet the simple definition of emergence we are currently working with. They
Fig. 5.4 Future of Fig. 5.3
have properties which their components lack. Gliders, for example, reconstitute their
original form after four iterations; individual cells lack that feature. This is a case
of synchronic emergence. Other initial patterns of occupied and unoccupied cells
(glider guns) will produce gliders, a feature which did not exist in the original pattern.
This is diachronic emergence.
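The claim about gliders can be checked directly with the life_step sketch given
above (the coordinates below are one of the glider's four phases, with y increasing
downward):

# A glider as a set of (x, y) cells.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = life_step(cells)

# After four iterations the same five-cell shape reappears,
# displaced one square diagonally.
assert cells == {(x + 1, y + 1) for (x, y) in glider}

It is the pattern, not any individual cell, that has the property of reconstituting
itself.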

5.2 Digital Physics

The ontology and laws of the Life world are very simple. Does this preclude the
game of Life from providing any insight into the kind of emergence we seem to find
in the real world? Far from it. It has been conjectured that the ultimate foundation
of the actual world is a cellular automaton (or cellular automaton-like) system (see
Wolfram 2002 for some recent speculation). The first person to investigate this idea
seems to have been Zuse (1969).3 Most of the actual work on the idea appears to be
that of Fredkin (1990, 2003, 20044). The general program of modeling the world at
its most fundamental level as a cellular automaton is called "digital physics" or, more
provocatively, "finite nature" by Fredkin.5
The scale at which this hypothetical cellular automaton which computes the
universe (let's call it the ultimate cellular automaton, or UCA) would operate is very much
smaller than even subatomic dimensions, probably approaching the Planck length.
In fact, there is no particular reason to suppose that the UCA works in what we call
space. The neighborhoods around the cells of a CA are purely abstract. We might
thus hope that space, and time as well, are no less emergent features of the world
than its more ponderable denizens.6 This would be an amazing vindication of a part
of Leibniz's old idea that the radically non-spatial monads' systems of perception
generate the relational structure we call space. Time is perhaps a more problematic
emergent for Leibniz since the monads themselves seem to evolve against a temporal
background, although Leibniz might be able to construct time from the internal causal
relations of the monads (see Cover 1997). We could perhaps go so far as to consider
the instantaneous (or infinitesimally temporally extended) states of all possible
monads as the fundamental entities of the universe. Then time itself will appear as a
special set of relations amongst these states (such an idea has been explored in the
context of physics in Barbour 2000 with the possible states of the universe replacing
those of the monads). Then instead of saying that the UCA operates at a length scale
of about the Planck length, we should say that it is at that length that spatial properties
and relations would break down, and the universe would give evidence that it is the
product of a CA. Unfortunately, experiments that could probe such a length scale are
hard to imagine, but hopefully as digital physics is developed some experimentally
accessible implications will emerge. Otherwise the doctrine can only remain pure
metaphysics, albeit mathematical metaphysics.
No one knows how or if digital physics will pan out. The core problem is to link
some CA architecture to the physics we already know. As Fredkin puts it: "the RUCA
[reversible universal cellular automaton] runs a computation. As a consequence of
that process and of appropriate initial conditions, various stable structures will exist
in the lattice. For each such stable structure, we expect that its behavior will mimic
the behavior of some particle such as a muon or a photon. What we demand of a
correct model is that the behavior of those particles obeys the laws of physics and
that we can identify the particles of the RUCA with the particles of physics" (Fredkin
2003, p. 192). The particles (and fields and everything else for that matter) of standard
physics would thus all be emergent entities. The task of establishing how standard
physics can be linked to some RUCA is immensely difficult however and not yet
very far advanced.
It might be thought that the continuous mathematics of the calculus which serves
as the basic mode of description for all our basic theories, and which has been
applied with huge success throughout all domains of science, precludes any seri-
ous interpretation of the world as based upon the digital and discrete workings of a
cellular automaton. After all, according to standard formulations of physics, many
systems are capable of instantiating an infinite and continuous range of states (e.g.
the position of a particle can take on any real number value). The situation is not so
clear cut however. It is possible to demonstrate rigorously that some CA systems
generate behavior asymptotically describable in continuous terms. For example,
the Navier-Stokes equations governing hydrodynamical flow can be retrieved from
so-called lattice gas models which are types of CA (see Frisch et al. 1986). Of course,
I emphasize that no one knows whether all of continuous physics can thus be retrieved
or approximated from a CA model, but no one knows otherwise either (see Fredkin
2003, 2004). It is important to bear in mind that the space of the CA lattice is not
the space of our physical universe (as deployed and described in scientific theory)
and the time-tick of the CA is not the time we measure or experience. So the mere
fact that the space and time of the CA are discrete does not rule them out even if
we grant that the best scientific description of our experienced (and measured) space
and time makes them continuous quantities.7
The weird behavior of some quantum systems called "entanglement",8 in which
two systems that have interacted maintain a mysterious kind of connection across
any distance so that interaction with one will instantaneously affect the state of the
other, might appear to contradict the adjacency requirement of cellular automata.
But, again, it is far from clear that this is a real problem for digital physics and for
the same reason. The spatial distance separating the parts of an entangled system
which makes entanglement seem weird need not be reflected in the underlying
workings of our hypothetical universal CA. In fact, could it be that the phenomenon
of quantum entanglement is trying to tell us that what we call spatial separation is
not necessarily a real separation at the fundamental level of interaction?
CA systems extend far beyond the game of Life. They can be defined for any
number of dimensions or number of neighbors. They all share a fundamental finite-
ness and discreteness. Traditional Life has some deficiencies as a serious model of
the foundation of the world. For example, the Life rules are not reversible, which
means that information is lost as a Life position evolves. Any number of initial posi-
tions will lead to there being no living cell left after a certain period of evolution
and it is rather difficult to extrapolate backwards from an empty plane to the partic-
ular initial position which led to it! It appears however that the laws of nature we
have discovered so far guarantee that information is conserved. This follows from
the determinism of our fundamental theories. Nature also seems to abide by certain
fundamental symmetries or, equivalently, conservation laws. Some of these emerge
naturally from reversible cellular automata (see Fredkin 2004).
It is worth stressing that although quantum mechanics is said to have introduced
irreducible indeterminism into our most basic picture of the world, the evolution of
the wave function of any quantum system is strictly deterministic. Only the mea-
surement process introduces any possible indeterminacy. Under the standard view,
measurement introduces a random collapse of the wave function into one of the eigen-
functions of the observable property being measured (that is, if we are, for example,
measuring position our measurement will yield a particle with a definite position).
Since the wave function usually encodes more than one eigenfunction we have only
a probability of getting any particular measurement result. However, a full quantum
mechanical description of the world ought to include the measurement apparatus and
will deterministically move the wave function of the system under observation plus
the apparatus to a new quantum mechanical state. Presumably, a quantum cosmology
will incorporate a quantum mechanical description of the whole world and thus will
have no need to ever invoke the collapse of the wave function during measurement.
All sorts of fascinating questions intrude here, but it seems quite viable to maintain
that the quantum mechanics of the whole universe will maintain determinism and
thus the preservation of information.
Readers may recall the flurry of journalistic interest in 2004 when Stephen Hawk-
ing conceded that black holes do not purge information from the world (as he formerly
believed) but rather retain it in correlations between what falls into the black hole and
the Hawking radiation which it emits. The point is expressed very nicely by Lubos
Motl who wrote in his blog:
When we burn books, it looks as though we are destroying information, but of course the
information about the letters remains encoded in the correlations between the particles of
smoke that remains; it's just hard to read a book from its smoke. The smoke otherwise looks
universal, much like the thermal radiation of a black hole. But we know that if we look at the
situation in detail, using the full many-body Schrödinger equation, the state of the electrons
evolves unitarily (Motl 2005).

Notwithstanding the foregoing, the game of Life has one extremely interesting
positive feature: it is computationally, or Turing, universal (see Berlekamp et al.
1982; Gardner 1983).9 What this means is that a Life position can be set up which
can be interpreted as carrying out the computations of any Turing machine. This
means as well that the Life rules can simulate any other cellular automaton. Thus
the deficiencies noted above are rather notional, although it would be a perverse
metaphysics (or physics) that asserted that the basis of the universe was Conway's
game of Life, simulating a reversible cellular automaton (there are any number of
CAs which are universal in the sense that they can emulate any Turing machine so
the claim that the universe is such-and-such a reversible CA being simulated on an
X-type CA would seem to be a paradigm case of underdetermination of theory by
data; nor could we rule out a, perhaps infinite, chain of CAs, each simulating the one
above). Theoretically speaking however, this fact shows that any kind of emergence
which is embodied in some CA architecture will also occur in some Life world.

5.3 Hypercomputation

It is also true that any finite CA (or finite section of a CA) can be simulated by
some Turing machine (or our own personal computers, which are Turing universal
modulo memory capacity). This means that there is a potentially straightforward
test of whether the universal CA hypothesis is correct. If some aspect of nature is
correctly and essentially described in terms of uncomputable functions, then the
universe cannot be running on an underlying CA.10 A large number of proposals for
at least the possibility of such hypercomputational processes (to somewhat extend a
term of Jack Copeland's) have been advanced (see Copeland 2002; Stannett 2006;
Ord 2006 for surveys). Roughly speaking, these proposals involve coming up with
a way in which an infinite amount of information can be processed in a finite time.
Two important variants of these schemes appeal respectively to, first, the effect of
parts of nature embodying non-computable real valued parameters (such parameters
exist with infinite precision) and, second, to possible natural processes that allow for
infinite computations to occur in a finite time.
A simple example of the second kind is the accelerating Turing machine of
which many variants have been proposed. The core idea is simply that of a Turing
machine (or any other standard, Turing universal, computer) whose execution speed
doubles for each operation. If an accelerating Turing machine takes 0.5 seconds
to complete its first operation, 0.25 seconds for the next operation and so on, then
evidently it can complete an infinite set of operations in one second. Such a machine
could compute the standardly uncomputable halting function11 in the following way.
The machine begins by simulating the action of some Turing machine (given some
particular input). If that machine halts then the simulation will discover this at some
stage of processing and then it will write a 1 in its output and halt. If the simulated
machine never halts then that 1 will never be written but, after one second, we will know
that the accelerating machine has simulated an infinite number of operations. Hence
if there is no 1 in the output then we know that the simulated machine does not halt
with the given input.12
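The one-second bound here is just the sum of a geometric series of operation times:

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=1}^{\infty} 2^{-n} = 1,$$

so the infinitely many simulated steps are all completed, in order, within a single
second of the accelerating machine's time.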
An example of the first scheme can be found in Siegelmann and Sontag (1994)
(see Copeland 2002 for a brief description). A specially constructed neural network
in which some of the connection weights are uncomputable real numbers can itself
compute Turing uncomputable functions. Perhaps that's no surprise. In an analogue
of the famous GIGO law of computer programming, we might not be surprised to
find "noncomputable numbers in, noncomputable numbers out" (see Davis 2006 for
this point and a blistering attack on the whole idea of hypercomputation).
Are any physical implementations of such hypercomputational systems really
possible? A fundamental difficulty intrudes. The schemas of hypercomputational
machines rely upon continuous physics being the correct description of the world;
they are developed within that framework (there is no reason to expect that uncom-
putable numbers will play a role in a nature which is not continuous). There is of
course nothing wrong with that, but it does mean that they do not tell us much about
the real possibility of such devices, unless and until we know that nature really is
correctly characterized by continuous physics. Since the UCA hypothesis explicitly
denies this, the use of continuous physics to generate the plans for hypercompu-
tational devices comes perilously close to begging the question against the UCA
hypothesis. And perhaps it is not much of a surprise that the mathematics of the
continuous physics we have developed permits uncomputability to appear when suf-
ficiently prodded by human ingenuity. Pour-El and Richards (1981) have shown that
certain equations familiar in physics can have uncomputable solutions even if the
initial condition of a system obeying the equation is Turing computable.
But as Penrose has pointed out (Penrose 1989, p. 188), such results do not seem
to describe any real physical system; this pathological mathematics does not seem
applicable to the real world. Is that an illusion? Is it the result of our own bias in favor
of the computable? Warren Smith (2006) provides another interesting example of
hypercomputation. Smith shows that the trajectory of a set of point massive particles
acting in a Newtonian world is not always computable, which fact could, in principle,
be deployed to solve the halting problem. Of course, we know that our world is
not Newtonian and there are many bizarre features that arise in Newtonian models
of physical systems, frequently dependent on the presumed continuous nature of
space and time, the assumption of complete determinacy of object properties and the
allowance of point masses within the models (for some examples see Earman 1986
and Earman and Norton 1993, §3).
In Smith's model, the possibility of point mass particles approaching each other
infinitely closely (or exploiting infinite energy) generates the uncomputability via a
singularity in which particles attain infinite velocity (and thereby travel an infinite
distance) in a finite time.13 Smith makes the interesting general point that these
Newtonian models necessarily will outrun the Turing limit. If positions can take
on real number values with infinite precision (as certainly befits a point particle in
a genuinely continuous space) then the number of trajectories particles can track is
uncountably infinite whereas only a countable number of trajectories are computable
by Turing machines (this is, I suppose, another case of our law: "uncomputable in,
uncomputable out"). Smith's results go further since the singularities can arise for
Turing computable initial data.
In purely abstract terms, we can think of all possible physical systems and the
mathematical functions which describe them and their evolution. The set of com-
putable functions forms a vanishingly small subset of the set of all possible functions,
almost all of which are uncomputable. We might then expect that almost any sys-
tem we picked would be correctly describable by an uncomputable function rather
than a computable one (it might well be approximately describable or describable
within some midrange of values by some computable function of course). Is that why
physics is never completed, but must renew itself periodically as the deficiencies in
our computable approximation to ultimately uncomputable nature become appar-
ent? The appearance of the widespread computability of nature is then no more than
a reflection of our mathematical predilections and limitations. Such reflections may
increase our doubts about the conception of the universe as ultimately some kind of
cellular automaton.
But this is not perfectly clear. If we allow nature to embody uncomputability into
its very fabric in some thought experimentally manipulable way then it is easy to
make cellular automata that cannot be simulated by any Turing machine.14 Imagine a
cellular automaton whose update rule is this: update cell (n, m) with the standard Life
rules if the halting function H(n, m) = 1, otherwise update cell (n, m) with X-rules
(where these are just some other well-defined set of updating rules). The evolution of
this CA is as well defined as the halting function, that is to say, perfectly well defined.
But it cannot be simulated by any Turing machine. One can also imagine a kind of
cellular automaton in which the update rate varies from place to place in the grid, so
that, for example, one could have arbitrarily large regions that acted essentially like an
accelerated Turing machine. This region could pass on information it has computed
to the rest of the CA. We could call such cellular automata hyper-CAs. I cannot
define this concept very precisely in the absence of a complete catalog of all possible
extensions of the basic CA idea. The crucial feature though is the maintenance of
the core CA concept.
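The structure of such a rule is easy to write down even though no ordinary computer
could execute it. In the sketch below (continuing the earlier Python illustrations),
halting_oracle stands in for the uncomputable function H, while life_rule and
x_rule are hypothetical implementations of the two well-defined sets of updating
rules:

def halting_oracle(n, m):
    # H(n, m): 1 if Turing machine n halts on input m, else 0.
    # Turing-uncomputable, so no real implementation can go here.
    raise NotImplementedError("no Turing machine computes H")

def hyper_ca_update(grid, n, m):
    # A perfectly well-defined update rule that no Turing machine can simulate.
    if halting_oracle(n, m) == 1:
        return life_rule(grid, n, m)  # standard Life update for cell (n, m)
    return x_rule(grid, n, m)         # the alternative X-rules update

The rule is exactly as well defined as H itself; what fails is not definiteness but
computability.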
It is an interesting question whether for each proposed hypercomputational device
there is a computationally equivalent hyper-CA. For some of the proposals this is
obviously the case. The way the accelerated Turing machine could be used to compute
the halting function could evidently be emulated by the kind of regionally divided,
variable clock rate CA just discussed.
The issue is interesting because hyper-CAs share their conceptual simplicity with
standard CAs. If we knew that hyper-CAs could incorporate the kinds of hypercom-
putation we have been discussing as a potential real world phenomenon, and if we
also accept that the retrieval of standard physical theory from the theory of cellular
automata is a viable project, then we would know that any issue of emergence that
might arise in the world of continuous physics could be discussed in the conceptually
simpler world of the CA. It is also possible that hypercomputation is an illusion, fos-
tered by the kinds of continuous mathematics within which we have developed our
fundamental theories. Certainly, no one has provided any convincing evidence (or so
much as a hint actually) of real world hypercomputational processes at work. Further-
more, to the extent that the appeal to hypercomputationalism traces Turing machine
transcendent abilities to properties described by fundamental physics, the lessons we
learn about emergence from cellular automata will apply to a hypercomputational
world as well.

5.4 Conservative and Radical Emergence

So lets look at the topic of emergence from the vantage point provided by cellular
automata. Clearly we have versions of both synchronic and diachronic emergence,
as discussed above for the particular case of the Life CA. It is equally obvious that
the emergent features of any CA are completely determined, both with regard to their
existence (or coming into existence) and their properties by the bottom level structure
of the CA's grid (its pattern of live and dead cells plus the updating rules). The
general concept of determination at issue here is called by philosophers "superve-
nience". In Chap. 7 we will look at this much more closely (for a recent compendious
guide to supervenience see McLaughlin and Bennett 2008), but a simple preliminary
characterization of supervenience is that domain A supervenes upon domain
B just in case there can be no change in A without a change in B. A quick test of your
intuitions about any particular case uses what I call the Dali-test. Salvador Dali is
reputed to have once said "The only difference between me and a madman is that I'm
not mad." If this strikes you as impossible, albeit somewhat pregnant with meaning,
then you believe that the property of being mad supervenes on some other features.
Contrast with this case: the only difference between an electron and a positron is
that the positron has positive charge. No problem there; charge does not supervene
on the other properties of the electron.
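One standard way of regimenting this rough characterization (my formulation, not
a quotation) is in terms of indiscernibility:

$$\forall x\,\forall y\,(x =_{B} y \rightarrow x =_{A} y),$$

where x =_B y says that x and y are exactly alike in all B-respects. Items that match
in the subvenient domain must match in the supervenient domain; equivalently, any
A-difference requires some B-difference.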
A trivial example is the supervenience of chess positions upon the arrangement
of the pieces. A position cannot be changed from a checkmate to a non-checkmate
without altering the position of at least one piece on the board. Rather more interesting
and indeed a somewhat philosophically controversial example is that of an object's
aesthetic qualities, which seem to supervene upon the physical structure of whatever
work of art is at issue. Could it make sense to say that the only difference between
paintings A and B is that B is ugly? Arguably, you could not turn the Mona Lisa into
an ugly painting without making some physical change to it. Or could you? If you
hold that beauty is, as such, essentially some kind of social or cultural phenomenon
then you might well believe that the Mona Lisa could become (or could have been)
an ugly painting via changes in the surrounding society. Very well then: you do
not believe in the supervenience of beauty upon the physical form of the object in
question. This sort of example is not really very strange. After all, one can become
an uncle or a widow without there being any change in oneself. A currently much
more contentious example in which this debate rages would be the supervenience
of the mental upon the physical, which would assert that no change could be made
in the mental states of some subject without a concomitant physical change in that
subject.
It is important to bear in mind the larger metaphysical picture here. A superve-
nience thesis always comes with a specified foundational or subvenient domain.
The ultimate subvenient domain is the microphysical state of the universe and the
core issue of emergence is how to explicate the relation between the microphysical
details and the macrophysical emergent features. The cellular automata model is very
helpful in getting clear about what is at stake here.
It is evident that within the system of cellular automata there is supervenience
of the emergent features upon the basic structure or properties of the CA. For any
emergent feature you care to name, say a glider gun in Conway's Life, it is impossible
to alter its properties without making some change in either the configuration of cells
which make up the gun, or, rather more drastically, altering the rules of the CA
itself. There is no case of any emergent features in a CA whose behavior can be
changed, or whose interactions with other emergent features can be changed, except
by alteration of the cell configurations which underlie these emergent features and
their interactions (or by alteration of the updating rules of the CA).
But it is easy to imagine CA-like systems that do not obey the principle of super-
venience of emergent features upon the underlying structure and process of the
automaton. Consider the game of Life with one extra rule: whenever a glider forms
and survives for, let us say, twenty iterations in order to mature, it attains invulner-
ability and cannot be destroyed. Rather, any living cell it touches dies with no effect
on the glider. So some gliders are, once formed, partially immune to the usual rules
of the Life world and have a new power, that of destroying any living cell which
they touch. Gliders continue to obey the Life rules internally, so to speak, so that
they propagate across the grid normally. This new Life-like cellular automaton is
perfectly well defined, and could easily be simulated in a computer.15 It is important
to bear in mind that the simulation would be only that, a simulation of this new
and bizarre kind of quasi-Life world. The computer code of the simulation or its
operation would not exhibit the kind of temporal emergence I'm trying to illustrate
here, for the simulation would have to work via some kind of time-counter that kept
a local record of how long a glider had survived. Inside the world
being simulated, rather than the simulation, it is the irreducible temporal fact about
how long a glider has survived that governs how it will behave. Note that this shows
we can simulate this sort of radical emergence in a system that does not possess any
such emergence itself.
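A sketch of that bookkeeping makes the point vivid. In the fragment below
(continuing the Python illustrations), detect_gliders is a hypothetical
pattern-matcher, and the ages dictionary is exactly the local record the simulation
must keep of what, inside the simulated world, is an irreducibly temporal fact:

MATURITY = 20  # iterations a glider must survive in order to mature

def detect_gliders(live):
    # Hypothetical: returns a frozenset of cells for each glider-shaped
    # configuration, tracked across its tick-to-tick displacement.
    raise NotImplementedError("pattern matching and tracking omitted")

def update_ages(live, ages):
    # Carry forward an explicit count of how long each glider has persisted.
    return {g: ages.get(g, 0) + 1 for g in detect_gliders(live)}

def invulnerable_gliders(ages):
    # Mature gliders are exempt from the ordinary Life rules hereafter.
    return {g for g, age in ages.items() if age >= MATURITY}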
Of course the proper understanding of this new quasi-Life world would require
modification of the rules of Life. The modification would make reference to a certain
Fig. 5.5 How will this evolve?

configuration and the time the configuration persists, and such reference would be
ineliminable. This extra rule fails to obey two important features of a good cellular
automaton, which we may call locality and instantaneousness. The first demands
that a cell evolves only according to the state of its neighbor cells (though remem-
ber that neighborhood is abstractly defined and is not necessarily a literally spatial
notion). The second demands that the history of the automaton is irrelevant to its
further evolution, or, to put it another way, the state at time t+1 supervenes upon the
state at t.16 My new rules make the state of a cell depend upon non-neighbors if these
are part of a glider shaped configuration, but only if that configuration has already
persisted through sufficient iterations. So I could present you with a configuration
and it would be impossible for you to tell how it would evolve. For example, the one
illustrated in Fig. 5.5. It crucially matters how long the glider approaching the block
at the southeast has persisted. As a matter of stipulated fact, it has not yet reached the
critical lifetime of twenty iterations and will, alas, dissolve into nothingness, taking
the block with it. Figure 5.6, a very simply modified glider gun, represents the initial
condition from which this glider emerged.
If you observed the evolution of this pattern, you would find nothing out of the
way; it would quickly (after 230 time steps) lapse into a stable configuration, almost
unchanging save for "three bar" oscillators (a column or row of three cells will
endlessly oscillate between its vertical and horizontal form). However, if we simply
moved the southeast-most block of four live cells a little further to the southeast, then
the glider produced by the evolution of this initial configuration will have persisted for
more than 20 iterations by the time it encounters the newly placed block. We would
then observe, to our astonishment, that the glider does not suicidally destroy itself
and the block but rather plows right through it (leaving nothing behind). If we suppose
ourselves to be scientists trying to infer the underlying laws of the Life world based
upon the observed events, we would face a difficult problem here. We might be forced

[Fig. 5.6: Glider gun]

to give up the intuitive and beautiful twin ideas that everything that happens at t+1 is
the result of the configuration at t plus purely local rules of interaction. We might be
forced to say that there is something special about gliders as such; they are not just
another configuration of cells held together, as it were, only by the underlying laws
of Life.
One way to highlight the extent of the change envisioned here is to note that
the standard Life world beautifully instantiates a very strong form of mechanism, a
useful definition of which was provided by Charlie Dunbar Broad:
Thus the essence of Pure Mechanism is: (1) a single kind of stuff, all of whose parts are
exactly alike except for differences of position and motion; (2) a single fundamental kind of
change, viz, change of position. Imposed on this there may of course be changes of a higher
order, e.g. changes of velocity, of acceleration, and so on; (3) a single elementary causal
law, according to which particles influence each other by pairs; and (4) a single and simple
principle of composition, according to which the behaviour of any aggregate of particles, or
the influence of any one aggregate on any other, follows in a uniform way from the mutual
influences of the constituent particles taken by pairs (Broad 1925, pp. 44-45).

It is remarkable that digital physics embodies an even more stringent form of
mechanism than Broad proposed. There is no change of position or velocity in the
Life world; these are emergent features that arise from the blinking on and off
of the automaton's cells. There are only two states of the fundamental constituents
("alive" or "dead" as we call them). There is no pairwise influence over distance; the
state of any cell is a trivial function of the state of its neighbors. And yet there is an
obvious kinship between Broad's stringent vision of mechanism and digital physics
which is most evident in the discussion of emergence.
Now, ordinary gliders count as emergent phenomena by our lights but doesn't the
newly defined kind of "super-glider" represent a sort of emergence which deserves
separate recognition? Traditionally, emergentists would not have been impressed
with the weak notion of emergence we have thus far developed. Emergentists such
as Conwy Lloyd Morgan, Samuel Alexander, John Stuart Mill, George Lewes and
C. D. Broad wanted emergentism to be a more radical doctrine.17 Roughly speaking,
these emergentists would have regarded our super-gliders as genuinely emergent
phenomena but standard gliders would not have been worthy of the term. The reason
is twofold. First, an ordinary glider's behavior (and conditions of creation) is entirely
dependent on the laws governing the underlying features of the Life world (i.e.
individual cell life and death). Second, the powers of a glider (its ability to interact
with other features of the Life world, emergent or otherwise) are similarly dependent
on the basic laws of Life. It is clear that any system that met the conditions of Broad's
pure mechanism would not allow for radical emergence (of course, it was in part to
make this point that Broad chose his definition of mechanism). It seems equally clear
that the digital physics of the Life world is similarly bereft of radical emergence.
Let us then officially baptize this evidently more serious and exciting, but possibly
non-existent, form of emergence "radical emergence" as opposed to the "conservative
emergence" that seems commonplace and uncontroversial. The core feature of
radical emergence is that the emergents should make a difference to the way things
happen, over and above, or in addition to and possibly (or occasionally) in a kind
of interference with, the way things would happen without them. The appearance
of radically emergent properties or entities is supposed to stem from laws of nature
but these laws are laws of emergence which are not listed among nor consequences
of the fundamental laws of the non-emergent base features of the world. While it is
undeniable that everywhere we look we find emergence, the real question is whether
there is any radical emergence in the world.
Often, the classical emergentists defined radical emergence in terms of explica-
bility. Only if it was impossible to explain the behavior and powers of an entity
purely in terms of the basic laws of nature governing the underlying stuff of the
world was that entity truly (or radically) emergent. But the emergentists had a very
pure, ethereal and, as it were, non-epistemological sense of explanation in mind here.
Normally, explanations are like stories about or accounts of phenomena that make
them intelligible to human investigators. In that sense, explanation cannot outrun
human intelligibility which, in all honesty, doesn't really run that far or fast. It would
be commonplace to say that it is impossible to explain, say, differential geometry to
a four-year-old child to whom one could explain why it's a bad idea to pull a cat's
tail. In this sense, explanation is relative to cognitive capacity: quantum mechanics
is absolutely inexplicable to a goat. It is an interesting empirical question how far
human cognitive abilities extend into the explanatory domain, and what particular
features or capacities of our minds set our ultimate intellectual limits (at least one
philosopher claims that the explanation of consciousness transcends our intellec-
tual powers; see McGinn 1989). I tend to believe that the capacity-relative sense
of explanation is its core meaning, but the classical emergentists seemed to think
of explanation as something perfectly objective or as relative to any conceivable
mind, of any finite level of intellectual ability. They were seeking to free the idea of
explanation from its inherent relation to inquirers. That is really an incoherent goal.
What they were really driving at was not an issue of explicability at all, but rather the
idea of supervenience or the dependence (or lack thereof) of all emergents (their
existence, properties and powers) upon the fundamental features of the world and
the laws governing those features. In any event, as I intend it, radical emergence is
an ontological doctrine about the ultimate nature of the world, not about the powers
of minds to understand that world.
At other times, emergentists stressed predictability or the lack of it. If it was possi-
ble to predict the behavior and powers of something entirely in terms of fundamental
laws (and initial conditions) then that thing would not be radically emergent. As with
the concept of explanation, predictability is relative to the capacities of the predictor.
Modern astronomers can predict the positions of the planets to much higher preci-
sion than Ptolemaic astronomers, who in turn did much better than earlier stargazers.
Our science still finds earthquakes quite unpredictable, but that may well change in
a hundred years. But of course the classical emergentists were not concerned with
the ephemeral details of current scientific predictive power. As with their appeal
to explanation, they meant predictability in principle, given the correct account of
fundamental science and unlimited intellectual resources. Once again, I think they
were aiming beyond any cognitive activity; they were aiming at those features of the
world that drive the world from state to state or fix its dynamics. Talk of prediction
is a way to vivify this rather abstract idea.

5.5 Simulation

However, the issue of predictability raises an interesting and novel point in the context
of cellular automata. In a certain, quite limited, sense the Life CA meets the test of
intrinsic unpredictability (see Bedau 1997 for an interesting philosophical discussion
of this feature). We have already noted that Life is Turing universal. Any Turing
machine can be simulated by some Life configuration. Suppose there was an efficient
way to predict what would happen in any Life world, where by "efficient" is meant
some method which shortcuts the direct simulation of the Life configuration. Then
we could use Life to solve the Halting problem. For any Turing machine (with
some definite input), simply emulate it as a Life configuration and use our presumed
efficient prediction method to check whether or not it will halt. So, in general, there
can be no way to predict what a Life configuration will do.18 Of course, in a great
many particular cases, it will be easy to see how a configuration will evolve, but there
can't be a universal method for doing this. On the other hand, it is not difficult to
simulate, at least in principle, the evolution of any finite Life configuration. Thus we
can conclude that, in general, the only way to predict what a Life configuration will
do is to simulate it.
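The reduction can be put schematically as follows; both helper functions are hypothetical placeholders (encode_as_life stands for a Turing-universal Life construction, which exists but is enormous, and efficient_predict for the supposed shortcut method):

    def encode_as_life(machine, tape):
        # Stub: Life is Turing universal, so every Turing machine (with its
        # input) has some Life configuration that emulates it.
        raise NotImplementedError

    def efficient_predict(configuration):
        # Stub: the supposed method that forecasts a configuration's fate
        # without stepping through the simulation itself.
        raise NotImplementedError

    def halts(machine, tape):
        # If efficient_predict existed, this would decide the halting
        # problem, which Turing proved undecidable; hence no such general
        # shortcut method can exist.
        return efficient_predict(encode_as_life(machine, tape))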
Now we come to the rather curious question: does simulation count as a kind of
prediction? The obvious answer is: yes, if the simulation runs faster than the system
being simulated. The somewhat less obvious answer is simply "yes". In light of
the attempt to spell out radical emergence as an ontological issue, I think the latter
answer is the correct one. C.D. Broad liked to talk about a mathematical archangel
who could deduce how a system would behave (knowing the initial condition of
the system) even if the calculations involved far transcended any human abilities
(Broad 1925, p. 70). The modern study of complexity and computation has revealed
just how powerful such an angel would have to be to get anywhere significant. We
know, for example, that a great many conceptually simple and entirely finite problems
are probably computationally intractable.19 Examples include such favorites as the
traveling salesman problem (i.e. find the shortest route that visits all cities in a
region) and Boolean satisfiability (i.e. given a compound statement formed from
"and", "or" and "not", is there a way to assign "true" and "false" to the variables so as
to make the whole statement come out true). Computational intractability arises
when the number of operations required to solve a problem increases exponentially
with the size of the problem. Lloyd (2002) has calculated that if the whole visible
universe is regarded as a computer then it could have performed only some 10^120
elementary operations since the big bang. If we assume (implausibly of course) that
we would need to perform at least one operation for each route in the traveling
salesman problem, then 10^120 operations would solve the problem for no more than
about 80 cities.20 From the point of view of a universe presumed to have continuous
quantities, both spatial and temporal, this is a woefully inadequate computational
base; perhaps this is some sort of evidence in favour of digital physics (see Landauer
1991 and note 5 above).
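The arithmetic behind the figure of about 80 cities is easy to check, assuming (as the text does) a brute-force search that spends one operation on each of the (n - 1)!/2 distinct tours:

    import math

    OPS_BUDGET = 10 ** 120  # Lloyd's bound on elementary operations to date

    def routes(n):
        # Distinct tours of n cities: fix the starting city, halve for direction.
        return math.factorial(n - 1) // 2

    n = 3
    while routes(n + 1) <= OPS_BUDGET:
        n += 1
    print(n, round(math.log10(routes(n))))  # -> 81 cities, about 10^119 routes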
But I don't think that the time needed for the simulation matters to the kind of
abstract points about the nature of emergence that we are interested in. It just is the
case that cellular automata are entirely predictable (simulatable) from an understand-
ing of the rules governing individual cell life and death plus the initial configuration.
Broad's mathematical archangel, calculating at supernaturally high but finite speed,
would have no trouble figuring out how a Life configuration would evolve (which is
not to say that the archangel could solve the halting problem). But notice that even
the archangel, given only the basic Life rules, would necessarily fail to simulate accu-
rately the evolution of one of our super-gliders. In order to succeed, the archangel
would need to know the laws of emergence which I arbitrarily imposed upon the
Life world. Thus it is fair to say that our modified Life exhibits or at least is a model
of a kind of radical emergence.
Notice that the example reveals that, in a certain sense, radical emergence is
a relative notion. Relative to the local rules of Life our modified game possesses
radical emergence. If we add to the archangel's stock of knowledge an awareness
of the special historical and configurational law we have imposed then there is no
radical emergence. The general question at issue is whether the world exemplifies
radical emergence relative to the correct laws of fundamental physics.
If digital physicists such as Fredkin or Wolfram are right about the ultimate basis
of the world, then there is exactly as much radical emergence in our world as there is
in the Life world, which is to say none at all. But of course, there is still emergence.
And it is an extremely interesting fact about our world (even if the advocates of
digital physics are right) that it is possible, profitable and even mandatory in any
practical science (not to mention day to day living) to deploy theories and laws that
operate at the level of the emergents. This fact is often marked by another distinction
within the category of emergence: that between a metaphysical or ontological form
and an explanatory or epistemological form of emergence. We can terminologically
identify metaphysical or ontological emergence with what we are calling here radical
emergence.
The second form of emergence deserves some further remarks. Conservative or
explanatory emergence is not the claim that emergents completely fail to be explica-
ble in terms of the fundamental features of the world. Presumably, in the absence of
radical emergence, there is an acceptable sense of explanation (explanation in prin-
ciple) in which all emergents are thus explicable (and if they are not then we are just
back to radical emergence).21 Rather, conservative emergence embodies the claim
that there is an unavoidable need to use the non-radically, but nevertheless emergent
features of the world in our science or, speaking even more broadly, simply in our
understanding of the world.
If there is conservative emergence then it is impossible for us to grapple with the
complexity of the world without invoking a host of emergent entities and processes. I
think something like this is the viewpoint forcefully advocated by Robert Batterman
(Batterman 2002, see especially Chap. 8; see also McGivern and Rueger 2010). This
can be and is cheerfully conceded by those who deny there is any radical or ontological
emergence. They are perfectly happy to endorse the idea that reality is composed of
levels or a hierarchy of more or less independent systems of emergent entities. But
this notion of independence is simply that within each level there is a set of laws
over the emergents at that level which do a pretty fair job of describing, predicting
and explaining how things happen at that level. The fact that we can successfully
grapple with the complexity of the world only by appealing to these emergent levels
does not falsify the claim that the entities and laws of each level remain completely
supervenient upon the ultimate basis of the world, whatever it might be. It seems
also to be true that conservative emergence can unify phenomena that do not possess
any common microphysical structure or constitution. For example, the ubiquity of
phase transitions across extremely diverse domains of phenomena can be explicated
in terms of emergent features, as stressed by Batterman. Yet across these cases we
have every reason to believe that criticality arises from the underlying entities and
their properties, as evidenced by our success at simulations which invoke nothing but
these underlying features (for one of innumerable examples, see Matsumoto et al.
2002).

5.6 Reductionism

More than three decades ago, in a now classic paper, Philip Anderson provided a
very interesting characterization of the difference between radical and conservative
emergence (though not in those words) in "More is Different" (Anderson 1972). It
is worth remembering that this supposedly anti-reductionist article begins with this
remark:
The reductionist hypothesis may still be a topic for controversy among philosophers, but
among the great majority of active scientists I think it is accepted without question. The
workings of our minds and bodies, and of all the animate or inanimate matter of which we
have any detailed knowledge, are assumed to be controlled by the same set of fundamental
laws (Anderson 1972, p. 393).
This is what Anderson means by reductionism. We can gloss it in terms of the
supervenience of all higher level features of the world upon the fundamental physi-
cal structures and processes. For Anderson, reductionism is an ontological doctrine
which does not entail anything in particular about the explanatory relations amongst
phenomena. In fact, the conflation of ontology and epistemology, which is all too
common in philosophical discussions of reductionism, is the core problem examined
in Anderson's article.23
Anderson deploys two very old terms to explicate his view: "analysis" and "synthesis".
Ontological reduction he describes in terms of the universal analysis of the high
level in terms of the physically fundamental. The first three chapters of this book
were devoted to the case for analytic reduction as Anderson understands it. But, says
Anderson, analysis does not imply synthesis. The fact that we have good evidence
for analytic reduction does not entail the possibility of generating explanations of the
high level which restrict themselves solely to the physically fundamental. Of course,
in purely ontological terms, analytic reductionism does imply that the world itself is
generating all phenomena entirely from physically fundamental phenomena. Ander-
son's point is that this does not license belief that there are synthetic explanations of
all high level phenomena. Synthesis can fail for a host of reasons. The most obvious
(and ubiquitous) is the complexity of large systems, but we have also noted that the
analytical reductionist viewpoint will miss unifying and universal features of the
world which operate in abstraction, so to speak, from their fundamental constitution.
Anderson goes so far as to say that "the more the elementary particle physicists tell us
about the nature of the fundamental laws, the less relevance they seem to have to the
very real problems of the rest of science, much less to those of society" (Anderson
1972, p. 393). He also makes some further remarks which, in light of the foregoing,
seem quite obscure:
the constructionist [i.e. synthetic] hypothesis breaks down when confronted with the twin
difficulties of scale and complexity. The behavior of large and complex aggregates of ele-
mentary particles, it turns out, is not to be understood in terms of a simple extrapolation of
the properties of a few particles. Instead, at each level of complexity entirely new properties
appear, and the understanding of the new behaviors requires research which I think is as
fundamental in its nature as any other (Anderson 1972, p. 393).

While the first half of this quotation is admirably clear, the idea that new properties
appear sounds more like radical emergence and thus can seem to be in some tension
with his endorsement of ontological reductionism. I do not think Anderson is being
inconsistent here. Rather, I think he means, by talk of "new properties", new models
of complex behavior which deploy concepts new and distinct from those used in
fundamental physics. Then it makes perfect sense to predict that the implications
of these models, both in terms of their mathematical consequences and physical
interpretation, are indeed the product of truly fundamental, creative scientific work.
Anderson's picture, at least as I see it, seems very attractive, but his remark
about "new properties" can stand proxy for a deep question. For the real issue is
whether there is any radical emergence in our world. We have just seen that if the
digital physics hypothesis is correct the answer is "no". It would however be an
understatement to say that this hypothesis is highly controversial. The thesis is very
speculative and really on the borderline between science and metaphysics. What
I mean by the latter is that it is very hard to see how the claim that the world
is fundamentally a kind of cellular automaton could be tested. Well actually, as
things stand now, the testing is very easy since the digital physics thus far developed
generates grossly incorrect physics (Fredkin 2003, p. 209).
In defense of the digital physicists, it is fair to say that the theory is simply too
immature yet to face the tribunal of experience. Given the staggering success and
accumulated amount of continuous physics over the last 400 years, digital physics
is a spectacularly audacious hypothesis. Considerable time will be needed for it to
be sufficiently developed, at first just to correspond to known physics and then to
generate possible experimental tests. Fredkin notes that the digital physics hypothesis
which he favors entails, in contradiction with the theory of relativity, that the events
occurring within the universe do so against an absolute fixed reference frame. This is a
consequence of the demand that properties such as energy and momentum be locally
encoded: "information that represents the energy and the momentum of a particle
is represented by digital information that is associated with the particle" (Fredkin
2003, p. 209). Thus we might someday expect an experiment which could test for
the existence of such a fixed frame of reference at a level of precision concordant
with the fundamental length and duration stemming from the emergence of space and
time. Of course, as things stand there is no hint of such evidence, but it is hard to prove
a negative; the famous Michelson-Morley experiment was, despite its fame, hardly
a decisive refutation of the ether. In general, we might look for a variety of what
have been called "lattice effects", which are divergences from the predictions about
a system made on the basis of continuous mathematics caused by the underlying
discrete nature of the system (for an interesting example in a biological context see
Henson et al. 2001; for a speculative one from physics see note 7 above).
But in the end, I suppose that it would not be unreasonable to reject the digital
physics model and with that reject the straightforward deduction that the world
contains no radical emergence. Perhaps the lessons of the Life world are just irrelevant
to any debate about emergence in the real world. Let us then turn our attention to a
possible line of argument in favour of radical emergence.
Chapter 6
Against Radical Emergence

6.1 Autonomy and Realization

Where should we look for evidence of radical or, as it is also called, ontological
emergence? It might be thought that classical physics is not a very promising starting
point. Yet there is a natural link between emergence and complexity. There is no doubt
whatsoever that complexity engenders conservative, or epistemological, emergence.
There is also an argument that the classical but peculiar phenomenon of chaos in
dynamical systems implies that radical or ontological emergence may be a genuine
feature of our world. I have my doubts, but the subject is intrinsically interesting
and cannot fail to shed light on the general topic of emergence. The argument has
been advanced in some detail, though also with some caution, by Silberstein and
McGeever (1999) which I propose to use as an initial guide.
Silberstein and McGeever's argument that dynamical systems arguably exhibit
a kind of ontological emergence, rather than the uncontroversial epistemological
emergence which is granted on all sides, and which I take for granted here, depends
upon the idea that ontological emergence is "the best explanation for the striking
multi-realizability (universality) exhibited by chaotic and other non-linear systems"
(Silberstein and McGeever 1999, p. 195). This conclusion is supposed to be rein-
forced via an appeal to a second feature of these dynamical systems, "dynamical
autonomy", which they explain as these systems' ability to maintain their dynamics
not only across several physically dissimilar substrates, but also in spite of constant
changes in the micro-states of the fundamental constituents that make them up (this
idea is borrowed, as they note, from Wimsatt 1994). The argument seems to be that
some explanation of both multiple realizability and autonomy is necessary, and that
ontological emergence provides the best explanation. Discussing the particular case
of psychology, they state that "if there is nothing physically common to the realiza-
tions of a given mental state, then there is no possibility of any uniform naturalistic
explanation of why the states give rise to the same mental and physical outcome"
(p. 196). In general, the question is "why would so many disparate systems yield
the same dynamics?" (p. 196). Ontological emergence promises an answer since it is
possible that disparate realizations of some higher level feature will give rise to the
same ontological emergents, thus providing the common element which accounts
for the basic similarity of the physically diverse set of realizations. While this much
ontological emergence might suffice to explain multiple realizability, it would not by
itself be a sufficient explanation of dynamical autonomy. For that, as Silberstein and
McGeever go on to argue, some downward causation from the emergent to the con-
stituents of the realization would have to be added to discipline those constituents
or constrain them to persist as a realization of the high level feature in question.
I will argue to the contrary that, once some unclarity about the notions of realiz-
ability and autonomy is cleared up, there is no need to invoke emergence to account
for this pair of properties. Indeed, there would be something almost incoherent about
the attempt to use emergence for this purpose since multiple realizability and, espe-
cially, dynamical autonomy are precisely the best candidates for being the most
significant emergent features of the systems at issue rather than features which could
be explained by (other?) emergent features.
Several rather large issues are raised by the concepts of realizability1 and auton-
omy. I think realizability should be primarily understood as a relation between a
model, usually but by no means always a mathematical model, and some actual
device which then realizes that model. The clearest cases, and the one from which
the term arises I believe, are computational systems. As alluded to in Chap. 5, Tur-
ing (1936) famously devised a purely mathematical model of a generic computing
device, which invoked a very small number of very simple operations (reading, writ-
ing and erasing a finite set of symbols onto an infinitely long tape), as well as a
presumed system-structure of arbitrary complexity in which rules for carrying out
sequences of these simple operations were encoded (described by the machine's
arbitrarily large but finite state tables). These now familiar Turing machines are
not real devices, but it was soon evident that real, physical machines could be made
that almost perfectly mimic the computational capacities of these abstract devices
(and in fact Turing was a pioneer in the transformation of his notional machines into
real physical devices).
Such mimicking is what is meant by the term realization. Every feature of the
model has a corresponding, or targeted, feature in the realizing physical system
(of course the converse does not hold) and for every possible state transition in
the model there is a corresponding possible physical transition that preserves the
mapping of model feature to corresponding targeted physical feature (thus the idea
of realization is closer to the mathematical concept of a homomorphism than to that
of isomorphism). To the extent that we can construct a given Turing machine in a
number of different ways and using different materials, we have multiple realizability
of Turing machines.
There is no doubt that if we have realizability of such abstract devices at all, we
have multiple realizability. But do we have realizability? Obviously, no real physical
device will exactly realize any Turing machine, since the real device is subject to
environmental influences and eventually inevitable internal breakdowns. In terms of
the characterization of realization just given, this means that the mapping from the
transitions in the model to possible physical transitions is imperfect: the physical
device has possible transitions which violate the mapping. For example, a trivial
Turing machine could be devised that simply writes a googol (10^100) 1s on its tape
and then stops. No physical device could realize this Turing machine since we can
be sure that at some point long before this staggeringly large number of symbols was
printed a breakdown would occur and the transition specified by the model would not
occur in the realization. Modern day computers are similarly imperfect realizations
of what can be described as Turing machines (though, in the interest of speed and
efficiency, their architecture is vastly different and much more complex than Turing's
fundamentally simple scheme).
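The googol-writing machine described above is trivial to specify in full. Here is a sketch in Python (the written counter stands in for what a strict formulation would encode in an enormous but finite state table); the point is that the abstract machine is perfectly well defined even though no physical device could complete its run:

    def ones_writer(count):
        # A trivial Turing machine: write 'count' 1s, moving right, then halt.
        tape, head, written, state = {}, 0, 0, "write"
        while state != "halt":
            if written < count:
                tape[head] = 1
                head += 1
                written += 1
            else:
                state = "halt"
        return tape

    # Perfectly well defined with count = 10**100; physically unrealizable.
    print(sum(ones_writer(10).values()))  # -> 10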
So one question that arises is what counts as a realization of a model, given that
perfect realization is impossible? One natural answer is that we should use the design
of the realizing system to gauge whether it is a realization of a model or not. The
design of a computer assumes that the physical devices out of which the realization
is constructed will operate as specified, and so we might say that so long as the
physical components do so operate, the device will realize the model. We could then
define the realization relation in terms of this attempted design or the intentions of
the designers. But now we have another level of realization to worry about. The
construction of the physical components so that they will perform some abstractly
specified function within the overall economy of the device raises exactly the same
issue of realization that we face regarding the system as a whole. Nonetheless, I think
there is a pretty clear sense in which we could talk of a physical device realizing a
model if and so long as the device operates as designed. However, it is important
to notice that we have transformed the realization relation from a relation between
an abstract model and some physical device into a relation between two models:
the original model now is put into correspondence with a model of some properly
functioning device.
But even if this, to borrow the term of Daniel Dennett (1971), "design stance"
theory of realization is acceptable for artifacts, natural systems are not designed. For
biological systems which have evolved by natural selection we might be tempted
to invoke metaphorically a similar notion of design. We might, that is, imagine that
Mother Nature had some abstract model in her mind, so to speak, against which we
could compare the actual physical devices we find which imperfectly realize that
model. But this is really to take the metaphor too far. At best, we can be permitted
to regard Mother Nature as a tinkerer, with no clear idea of what she might come up
with, although perhaps with some very general and vague goals always at the back
of her mind. The problem is that to say that a biological system realizes some model
imperfectly presupposes that there is some robust sense to the notion that there is the
plan of nature against which we can measure the imperfection of the realizations.
This seems rather perverse. For example, it is pointless to wonder whether the
human eye is an imperfect realization of some ideal vision system, or a perfect
realization of a vision system under a certain set of constraints, or a more or less
imperfect realization of something in between these two extremes. Or again, is the
heart an imperfect realization of a blood pumper because it is subject to certain
diseases or is this weakness a matter of external influence that ought to be considered
irrelevant to the quality of the realization? Real answers to such questions would
require that there be a specific model against which we could gauge the similarity
of the real world performance of the mechanisms which enable the biological
functions of interest to us with the desired performance as given by the model. We
can devise many such models but it is absurd to think that nature somehow had one of
these and no other in mind during the evolution of these biological functions. While
some loose characterization of nature's design is no doubt harmless, the specificity
of design necessary to ground the notion of realization is simply absent in the case of
biological systems. Thus it is incorrect to suppose that whatever has a function must
be a realization of some model of a device which serves that function. Although this
is of course how we normally produce devices that serve a function, to impose it as
a requirement upon natural systems would threaten the already precarious grasp we
have of the idea that natural systems genuinely have functions.
In any case, Silberstein and McGeever profess to find multiple realizability at the
level of dynamical systems in general, independent of whether or not they are literally
designed or are the product of biological evolution. It is extremely unclear whether
at this level of generality it makes much sense to speak of imperfect realizations in
anything but the most tenuous metaphorical sense. Consider a mathematical model
of the Earth's weather system. Every day a wide variety of such models are being run
on many computers, some being used for real time weather forecasting, others for
long term climate modeling and still others for purely educational purposes. Even the
best of these models are not particularly accurate representations of the atmospheric
dynamics of the Earth and differ greatly in their internal structure. Is the atmosphere
an imperfect realization of some such model? It was certainly not designed to realize
any of these models, either intentionally or by evolution.
Invoking the notion of realization seems explanatorily backwards in this sort
of case. The models here come after the phenomena, and aim to represent certain
especially significant features of the phenomena which can be more or less tractably
modeled with current technology. To turn things around and claim that the weather
realizes imperfectly these models misconstrues the epistemological relations here. It
seems more reasonable to say that the weather system acts, in some respects and to
a lesser or greater degree, rather like realizations of some atmospheric models. Why
is there this match? That's an intricate question. Part of the answer is that the models
were consciously devised with much effort to make just that relation true. A perhaps
deeper point is that whatever it is that the universe is doing in the vicinity of the
Earth's surface, certain aspects of it do present phenomena to us that look somewhat
like the virtual phenomena generated by certain of our models. How can nature
do that? In the limit, this is just the question: how is science possible? Probably, its
answer has more to do with the ingenuity of modelers than some exciting fact about
nature, although of course the fact that nature is such as to permit the existence of
modelers is of some local significance to us and, as emphasized in Part 1, we do find
the world curiously generous in satisfying our epistemic desires.
So the idea of realization in the case of natural systemswhether biological
or notsimply comes down to the more or less extensive applicability of some
model. This is a very much weaker notion of realization than we find in the field of
computation and, in general, intentionally designed systems. If a model can be applied
to some phenomena with some success then we say that the phenomena realize that
model, although always more or less imperfectly. But imperfect realization is now a
fact about the applicability of the model, not a fact about the system being modeled
(unlike in the case of systems which are really designed to realize some model).
Somewhat less grandiosely, an examination of a simple model and some of its
realizations might reveal the source of multiple realizability and give us some hints
how nature manages to realize humanly devised models (to the extent she does and
in the somewhat perverse sense just discussed). Here is a familiar mathematical
equation which can be interpreted as a model of a very large number of physical
systems:
d²θ/dt² = -(g/l) sin(θ)    (6.1)
This simple dynamical system is an oscillator; for example it is (the model of) an
ideal, frictionless pendulum, where θ is the angle of the pendulum's bob from the
vertical, g is the acceleration of gravity and l is the length of the bob's support. From
this description of how the acceleration of the bob depends upon its position we can,
more or less, mathematically deduce the famous law of the period of a pendulum:

t = 2π√(l/g)    (6.2)

Notice that we already have a primitive but significant form of dynamical auton-
omy even in such a simple example: the period is independent of the amplitude of
the pendulum's swings (at least for small amplitudes). Thus a pendulum which is
gradually losing energy, for whatever reason, will maintain the same period (a very
useful property for clocks in a world full of friction), as will a pendulum that is given
a little shove. If you forcefully push a pendulum and thus speed it up temporarily it
will, of its own accord so to speak, return to its normal period as soon as you stop
abusing it.
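This sort of autonomy can be checked numerically. The following sketch integrates Eq. 6.1 directly (the integration scheme and step size are my own choices, not anything in the model itself) and shows that the measured period stays close to the value given by Eq. 6.2 across different small amplitudes:

    import math

    def period(theta0, l=1.0, g=9.81, dt=1e-4):
        # Integrate theta'' = -(g/l) sin(theta), released from rest at
        # theta0, and time one full oscillation (two swing reversals).
        theta, omega, t, reversals, prev = theta0, 0.0, 0.0, 0, 0.0
        while reversals < 2:
            omega -= (g / l) * math.sin(theta) * dt  # semi-implicit Euler
            theta += omega * dt
            t += dt
            if prev != 0.0 and (prev < 0) != (omega < 0):
                reversals += 1
            prev = omega
        return t

    print(2 * math.pi * math.sqrt(1.0 / 9.81))  # Eq. 6.2: about 2.006 s
    for amplitude in (0.05, 0.2, 0.5):          # radians
        print(amplitude, period(amplitude))     # nearly the same period each time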
But more important is the lesson for multiple realizability. Obviously, it is possible
to make a pendulum-like system out of a virtually unlimited range of materials and
in a rather large number of distinct configurations (for example, the bob can be made
of any number of things and suspended by a rigid rod or a string, or a vine in a
jungle). In fact, from a more abstract point of view, pendulums fall under the class
of oscillators which are ubiquitous in modeling and seemingly in nature as well.
However, the simplicity of the example shows us the source of multiple realizability
in a clear form. The model works by providing a dynamical relation between a set
of parameters within an implicitly specified context in which only these parameters
affect the behaviour of the system. The model is very pure or ideal in this respect
and we do not expect nature to isolate any system in this way, but the applicability of
the model requires only that these parameters are so dominant as contributors to the
behaviour of the system that the other unmodeled and perhaps unknown influences
can be ignored, at least for the level of accuracy at which the modeler aims. In the
case of the pendulum, the parameters are the length of the bob's support and the
acceleration of gravity. All that is required for a real physical system to realize a
pendulum (which, as we now see, means only that this model is applicable to it) is
that it have components which correspond to these parameters which are sufficiently
isolated from the non-parametrized aspects of the environment. It is trivial to see
how a string attached to a weight on Earth (with sufficient space to swing) provides
the resources to mimic l and g and hence will act quite a bit like our model ideal
pendulum.
Now, if we ask why pendulums are multiply realizable, is there any reason to
think that ontological emergence has any role to play in the answer? Is the length
of the vine an ontological emergent? Is Tarzan's weight? Hardly. What explains
multiple realizability is the way the model sets up its parametrization which imposes
no constraints beyond those of the model itself. Dynamical autonomy fares no better
in the matter of requiring ontological emergence. This autonomy is a feature of (and
at least in our example case deducible from) the model and any system containing
features which can be mapped onto its parameters (including the implicit condition
of those parameters being the sole, or at least dominant, influence on the behaviour
of the system) must of course exhibit the autonomy found in the model.
In fact, the multiple realizability and dynamical autonomy of this family of oscil-
lators can be seen to arise from very abstract features of the model. As discussed
in Batterman (2002), the model's structure can be deduced from extremely general
considerations which totally abstract from all the details of construction and material
constitution. This follows from the highly plausible assumption that the particular
units of measurement which we happen to employ are of no physical significance and
thus the physics should be invariant across a change of fundamental units of mea-
surement (Batterman 2002, p. 15). From this assumption plus the free hypothesis that
the relevant physical parameters are length, mass and acceleration a mathematical
technique called dimensional analysis can generate, as if by magic, the mathematical
model of the pendulum.
The basic concepts of dimensional analysis have a long history, dating back at least
to the work of Jean Fourier, and are mathematically very sophisticated. Modern meth-
ods trace back to a theorem proved by Buckingham (1914; see also Strutt 1915).
The essential idea is to reformulate a desired but unknown functional relationship
of a number of dimensioned variables, such as length (with conventionally labeled
dimension L), mass (dimension M) or time (dimension T ), in a dimensionless form.
The latter can be very roughly explicated by considering the trivial problem of deter-
mining how far an object, d, will fall in a certain time, t, subject to a certain gravita-
tional force, g. Let us naively conjecture that what matters is t, g, and the mass, m,
of the object (thus implicitly assuming that the object's colour, or the day of the week,
as well as innumerable other possible factors are irrelevant). The dimension of d is
L, that of g is L/T² (distance per second per second, that is, an acceleration) and of
course m has dimension M. Can we write an expression that makes the exponents
of our dimensions go to zero or "nondimensionalizes" the problem? Well, this will
evidently do the trick: U = t²gm/(dm); U is called a dimensionless product. The
fact that we had to put m in both the numerator and the denominator might suggest to
us that it is really irrelevant to the problem so let's eliminate it, to obtain U = t²g/d,
which we can rewrite as d = αt²g, where α is an unknown constant. We can now
determine the constant, which holds for all falling objects, by measuring any falling
body's progress (we know that it is in fact 1/2).
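A simulated "measurement" makes the last step concrete: assuming only the form d = αt²g delivered by the dimensional analysis, a few numerical drops suffice to fix the constant (the integration details are mine and play no role in the argument):

    G = 9.81  # m/s^2

    def fall_distance(t, dt=1e-5):
        # Integrate a body falling from rest with no air resistance.
        d, v = 0.0, 0.0
        for _ in range(int(round(t / dt))):
            v += G * dt  # semi-implicit Euler
            d += v * dt
        return d

    # Dimensional analysis gave d = alpha * t**2 * g for a pure number alpha;
    # drops of any duration recover the same constant:
    for t in (1.0, 2.0, 3.0):
        print(t, fall_distance(t) / (t * t * G))  # -> about 0.5 each time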
Of course, by no means all interesting physical problems yield easily (or at all) to
dimensional analysis. However, it succeeds here and in the slightly less trivial case
of the pendulum and in many other problems, though most frequently in practical
cases by reducing the number of variables rather than eliminating them. This clearly
demonstrates that there is at least no need to postulate any form of ontological
emergence to account for the autonomy or multiple realizability of the myriad of
systems to which such models are applicable.
The argument can be summed up as follows. Unlike in the definitional case of
computational systems, realization of a model by a natural system amounts to nothing
more than the applicability of such a model to the system. There is no fact of
the matter about whether a certain natural system is genuinely realizing model
A rather than model B (at least if both A and B are applicable to the system).
Multiple realizability stems from the model rather than the world; it follows from
the relatively trivial fact that many otherwise quite distinct natural systems can have
their components mapped onto what I called the parameters of various models. If this
mapping can be accomplished for any natural system then the system will necessarily
behave more or less as the model predicts, and thus there will be multiple realizability.
Similarly, dynamical autonomy is a feature of the model and so any natural system
to which the model applies will automatically exhibit it.
Silberstein and McGeever make much of the non-linearity of the models which
are supposed to exhibit ontological emergence. So it is important to emphasize that
non-linearity is not a necessary condition of either dynamical autonomy or multiple
realizability. To take one interesting example, the general equations of hydrodynam-
ics (e.g. the Navier-Stokes equations) are non-linear and of course can be used to
produce many chaotic models. But a consequence of these equations is the particular,
ideal case of irrotational flow. Irrotational flow is, roughly, a flow of fluid in which
the particles of the flowing liquid are not themselves rotating, or in which small
regions lack any overall rotation. The vortices formed by water swirling down a drain
can be quite accurately modeled as irrotational flows (once they are in existence). The
irrotationality of the flow should not be confused with the fact that there is of course
rotation around the drain itself; irrotationality is, so to speak, a micro-property or
local property of the flow (any closed path that does not include the drain itself will
have no overall rotation or, as it is called, circulation). Conversely, there can be rota-
tional flow in which there is no circulation. Viscous (that is, relative to the model
under discussion, non-ideal) fluids flowing along a straight pipe will tend to stick
to the wall of the pipe, setting up a gradient of velocity and hence local rotation,
which leads to usually unwelcome turbulence in the flow.
A common textbook example may make irrotationality clearer. Imagine putting
a very small (relative to the size of the vortex) floating paddlewheel into your bath-
tub as you drain the water. The paddlewheel as a whole will swirl around the drain
but the wheel itself will not turn. If you were caught in Edgar Allan Poe's mael-
strom, and it was irrotational, you would not be spinning around as you slowly were
sucked down into that terrible vortex.3 In general and in reality, vortices will not be
irrotational but, perhaps surprisingly, the vortices generated by both tornadoes and
bathtub drains can be modeled quite accurately as irrotational flows. Such models
are only approximately correct; the real world flows are not truly irrotational; the
models have this property since viscosity is ignored (in terms of the discussion of
models above, there is no parameter for viscosity so to the extent that a natural system
is significantly affected by viscosity, to that extent the model under discussion here
will fail to apply to it). In fact, if one ignores viscosity, there is no way to explain how
vortices could ever form at all (without viscosity it would be impossible to stir your
coffee; the spoon would just move through the water without setting up currents).
Von Neumann disparagingly dubbed the subject of such ideal fluids the study of "dry
water". But these models are nonetheless good enough to account for many features
of commonly occurring vortices.
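The signature property of such flows is also easy to exhibit numerically. The sketch below uses the standard point-vortex velocity field (the loop coordinates and quadrature are my own choices) and computes the circulation around two closed paths, one avoiding the "drain" and one enclosing it:

    import math

    GAMMA = 1.0  # strength (total circulation) of a point vortex at the origin

    def velocity(x, y):
        # Ideal vortex: tangential speed GAMMA / (2*pi*r); curl-free
        # everywhere except at the singular core.
        r2 = x * x + y * y
        return (-GAMMA * y / (2 * math.pi * r2),
                GAMMA * x / (2 * math.pi * r2))

    def circulation(corners, n=20000):
        # Midpoint-rule line integral of v . dl around a closed polygon.
        total = 0.0
        for (x0, y0), (x1, y1) in zip(corners, corners[1:] + corners[:1]):
            dx, dy = (x1 - x0) / n, (y1 - y0) / n
            for k in range(n):
                vx, vy = velocity(x0 + (k + 0.5) * dx, y0 + (k + 0.5) * dy)
                total += vx * dx + vy * dy
        return total

    away = [(2, 1), (3, 1), (3, 2), (2, 2)]        # does not enclose the core
    around = [(-1, -1), (1, -1), (1, 1), (-1, 1)]  # encloses the core
    print(circulation(away))    # -> about 0.0: no local rotation
    print(circulation(around))  # -> about 1.0 (= GAMMA): rotation round the drain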
What's of interest to us here is that the equations governing irrotational flow
are linear equations (see Feynman et al. 1963, vol. 2, pp. 40 ff.). The only explicit
parameter of a model of irrotational flow is the velocity field, which must meet the
"no local circulation" condition, but an implicit, and also ideal, assumption is that the
flowing liquid is incompressible (pressure can thus be ignored). Multiple realizability
follows just as it did in the case of the pendulum: any physical system with features
corresponding to these parameters (including implicit ones) must act like the model.
Many situations involving the flow of real world fluids come reasonably close to
fulfilling this condition. Even flowing air will suffice in situations where external
pressure changes are not significant (thus, perhaps surprisingly, irrotational flow
models do a pretty good job of explaining why airplanes stay up in the air). The
dynamical autonomy of irrotational vortices, which is quite striking inasmuch as
such vortices will strongly resist destruction and can be manipulated as objects
(you can, for example, make them wiggle and twist in predictable ways without
destroying them), is also a consequence of the model and can, in brief, be traced
back to the conservation of angular momentum.
Thus irrotational models can model the structure of tornadoes and bathtub drain
vortices quite well. Such models naturally exhibit both multiple realizability and
dynamical autonomy no less than their non-linear brethren. Since it is very doubtful
that anyone (and evidently not Silberstein and McGeever) would want to claim onto-
logical emergence for the high level features of models based upon linear dynamical
equations, it does not seem possible to use either dynamical autonomy or multiple
realizability, which are common emergents of both the linear and non-linear systems,
as evidence for ontological emergence in the non-linear case.
With the red herring of non-linearity out of the way, and given our initial discus-
sion, what remains of the basic line of argument deployed by Silberstein and McGeever?
They claim that there must be some general answer to the question "how is multiple
realizability/universality possible?" (Silberstein and McGeever 1999, p. 196). Their
answer, which favours ontological emergence, is that it is the appearance of high
level, emergent, causal forces (or higher level entities which generate these forces)
that explains both multiple realizability and dynamical autonomy. It seems clear
however, that both autonomy and multiple realizability are a simple consequence
of the applicability of a model to a natural system. I take it that Silberstein's and
McGeever's argument goes thus: very disparate submergent bases will generate the
same emergents and then these emergents, being the same across the set of disparate
submergent bases, will provide a natural explanation for at least multiple realizabil-
ity. I don't quite see how this explains dynamical autonomy unless the claim is that
the emergent features are just identical to the dynamically autonomous or stable
features. But then this does not seem to explain dynamical autonomy at all. It sim-
ply assumes autonomy at the level of the emergents. Why should emergent features
have this kind of stability? Many things that are intuitively thought of as emergent
are not very stable and quickly break up, and various intuitive emergents have their
own characteristic life-times. Special or characteristic instabilities of emergent fea-
tures ought to have an explanation no less than the stable features associated with
dynamical autonomy.
In any case, I agree that we do need an explanation of these features of emer-
gent phenomena, but Silberstein and McGeever seem to be offering a definition of
emergence in terms of dynamical autonomy and multiple realizability rather than
an explanation. As I said above, multiple realizability and dynamical autonomy are
the emergent features of the systems under consideration. Given an explanation of
autonomy which only requires conservative emergence it is otiose, an unnecessary
multiplication of entities, to posit an extra layer of radical emergence.
We saw above that at least for simple examples, multiple realizability and auton-
omy are in the first instance properties of the models (the restricted and abstract
mathematical description). The explanation of these features in real systems stems
entirely from the applicability of the model to the system, which depends only upon
the possibility of setting up the requisite mapping from the abstract description to
aspects of the real systems. These explanations, to the extent that we can find them,
depend upon how we derive the emergent feature from the properties of its con-
stituents in the model. Because the model will be, in general, much simpler than
the real phenomenon at issue it will be relatively easy to understand how emergents
emerge. To the extent the model applies to some real system, the explanation of
emergence will carry over automatically.
To continue with the example of the vortex, consider smoke rings (which are not
instances of irrotational flow and in fact are very complex). The smoke ring itself is
a high level entity that presumably emerges out of the interactions of myriads of
interacting molecules in the atmosphere. Such rings are highly multiply realizable
and occur in a wide range of circumstances, and can form from a wide range of
material. What explains the dynamical autonomy and multiple realizability of these
beautiful structures? Not the high level features themselves, I would maintain. What
does explain it is the fluid mechanics of certain abstract models which are applicable
to a wide variety of physical systems.4
Fluid mechanics, basically, is just Newtonian classical mechanics applied to con-
tinuous substances. It does include some basic parameters of its own, notably pressure
and density, and it is a kind of field theory (with, in fact, deep analogies with classical
electrodynamics) which naturally regards the properties of continuous substances in
terms of continuous distributions of those substances properties. The equations of
fluid mechanics can model very complex situations, in which there is non-linearity,
chaos etc. But it seems clear that a supervenience thesis appropriate for continu-
ous substances holds in fluid mechanics (recall the discussion of supervenience in
Chap. 5). Given the state of the field at each point (these are the analogues of the parts
of a classical mechanical system) then the equations of fluid mechanics as expressed
for the particular system at issue entirely determine the dynamics of the system. No
explicit reference to high level features is necessary in principle. We have exactly
the kind of predictability in principle that we have for the other deterministic models
we have looked at.
The stability of smoke rings is a pretty direct consequence of a theorem in ideal
fluid mechanics (Kelvin's theorem) which states that if angular momentum is con-
served then the circulation of a velocity field is constant (as above, the circulation
is kind of a measure of how much local rotation there is in the velocity field of a
flowing fluid). This seems to be a case where it is possible to give an explanation
of multiple realizability and dynamical stability in terms of the underlying micro
structure (in this case the structure of the velocity field). Of course, the explanation
applies in the first instance to the model but, as noted above, it will carry over to
those systems which realize the model.
Another example worth looking at is Edward Lorenz's famous non-linear but still conceptually fairly simple model of atmospheric convection (see Lorenz 1963; the canonical popular exposition of dynamical chaos is Gleick 1987, and for an excellent
philosophical discussion see Smith 1998). This example raises issues beyond those
of multiple realizability and dynamical autonomy even as it illustrates them.

6.2 Numerical Digression

Before discussing Lorenz's model directly, I want to digress briefly to consider predictability and dynamical systems more generally. As noted above, when we use
the term "dynamical system" we might be referring to a real world phenomenon
such as, for example, a pendulum, the solar system or the weather. But we might
instead be referring to the mathematical models (sets of interlocking differential
equations for example) which may, to a greater or lesser extent, correspond to real
world phenomena or may be purely mathematical constructions with no counterparts
in the real world. It cannot be a mystery why those systems which do correspond
to certain mathematical models share key features with those models. The mystery
resides in the question of why there is a world which allows itself to be so modeled
by mathematics simple enough for human beings to actually devise.5
The mathematical models themselves exhibit various interesting but purely math-
ematical properties, including sensitive dependence on initial conditions. It does not
follow that nature exhibits these properties unless we assume that the models are in
some sense accurate. This is a substantial assumption since we know the models
are ultimately inaccurate (for example, they break down at certain length scales when
applied to the real world). Furthermore, there should not be much trouble defining
the notion of determinism for these mathematical models themselves even though the
concept of determinism tout court is complicated and difficult (see Earman 1986).
First recall the notion of the phase space of a system, which is the space of pos-
sible states of the system expressed in terms of the basic parameters of the system.
For a system of classical mechanics the parameters are the position and momentum
of each particle. A one particle system moving in one dimension has a two dimen-
sional phase space of position and momentum (six dimensions for motion in three
spatial dimensions). The mathematical characterization of the modeled system is
then essentially a constraint on the possible trajectories through phase space that the
system can traverse. For example, the phase space trajectory of an ideal (frictionless)
pendulum is a circle, so the allowable trajectories are such that p² + x² = C, where p is momentum, x position and C is some constant fixed by the initial condition of the system.
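To make the pendulum example concrete, here is a tiny sketch (assuming unit mass and frequency, a normalization of my own, not the text's): sampling the analytic solution shows that every phase-space point lands on the same circle, the constant C fixed by the initial condition.

```python
import math

# Toy check: for the ideal pendulum (unit mass and frequency assumed),
# the analytic solution from x(0) = 1, p(0) = 0 is x = cos(t), p = -sin(t);
# every sampled point satisfies p^2 + x^2 = C = 1.
for t in [0.0, 0.5, 1.0, 2.0, 4.0]:
    x, p = math.cos(t), -math.sin(t)
    print(round(p ** 2 + x ** 2, 12))  # prints 1.0 each time
```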
Determinism can be simply defined for such models: a model is deterministic if
and only if any point in the phase space of the system is in at most one trajectory (there
might be loops, as in the case of the ideal frictionless pendulum, and many regions
of phase space might be inaccessible to the system). In this sense, the Lorenz model
is clearly deterministic: there are no intersections of phase space trajectories from
any set of initial conditions. This is simply a mathematical fact. What is amazing
about such systems is that they never settle down into a looping trajectory.
Predictability of such a deterministic model requires that it be possible in principle
to derive a final state (usually specified by fixing a parameter interpreted as time)
from the equations of the model plus a given initial state. By "in principle" here I mean predictable under relaxed computational restraints, by which I mean to allow
the use of a (thought experimental) computer with arbitrarily large but finite memory
and arbitrarily short, but non-zero, clock cycle time.
If the defining equations of a model allow for an exact analytic solution then we
ought to have predictability. For example, the simple differential equation:

df/dt = t + f(t)   (6.3)

happens to have an exact solution, namely Ce^t − t − 1, where C is a constant. So, once we are given an initial state of the model (which fixes the constant) we can directly calculate the final state for any time. We might call this absolute predictability.
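The claimed solution can be checked mechanically; a minimal sketch (using sympy, a tooling choice of mine rather than anything in the text):

```python
import sympy as sp

# Verify that f(t) = C*e^t - t - 1 satisfies Eq. 6.3, df/dt = t + f(t).
t, C = sp.symbols('t C')
f = C * sp.exp(t) - t - 1
print(sp.simplify(sp.diff(f, t) - (t + f)))  # 0: the equation is satisfied
```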
Many ideal mathematical models of real world systems have absolute predictabil-
ity: the two-body problem (which applies very well to binary star systems for exam-
ple), the frictionless pendulum (see Eq. 6.2 above, which applies pretty accurately
to well constructed pendulum clocks), or, in general, a huge set of different kinds of
oscillators and, at the other end of the scale of simplicity, models of black holes, of
which the cosmologist Subrahmanyan Chandrasekhar made the stirring (if ultimately somewhat dubious) remark that "in my entire scientific life… the most shattering experience has been the realization that an exact solution of Einstein's equations of general relativity, discovered by the New Zealand mathematician Roy Kerr, provides the absolutely exact representation of untold numbers of massive black holes that populate the universe" (Chandrasekhar 1987, p. 54).
But many models of dynamical systems do not admit of any exact analytic solution
and thus resist absolute predictability. Some of these are just hard to solve, but some (probably almost all complex systems of interest) are provably unsolvable (that is,
it can be shown that the solution is not finitely expressible in terms of standard
mathematical functions). Is there a way to predict the final state of the system,
given an initial state, despite this? Just as in the case of the Life world, we can turn
to simulation in place of absolute predictability. It is possible to approximate the
evolution of the system from its initial state and mathematicians have devoted much
ingenuity towards inventing and improving these numerical approximation methods
over the last 350 years or so. In essence, these methods work by discretizing time,
that is, by replacing the continuous time parameter with a ticking clock. Given that
the rule by which the system evolves is clearly stated (as in a differential equation)
it is then possible to calculate, more or less roughly, how the system evolves from
discrete time step to discrete time step.
Around 1769, Leonhard Euler invented a simple method of numerical approximation which is not very efficient but can generate arbitrarily high levels of accuracy for a given system once we relax computational restraints. It is worth working through an example to see how this works.
Consider the following ultra-simple model:

df/dt = t − f(t)²   (6.4)
Despite its superficial similarity to Eq. 6.3, this is not easy to solve analytically. However, we have been given the rule by which infinitesimal changes to the state of the system cause it to evolve. Euler's method just pretends that this rule is valid when applied to the finite jump from one discrete time step to the next. For example, let us suppose we know that the system begins in the initial state f(0) = 1. Let's suppose we are interested in the state of the system at time 1. If so, the interval from 0 to 1 is divided into n subdivisions. For concreteness and simplicity, although certainly not accuracy, let's use just three time steps. So we'll have f(0) (which we know already), f(1/3), f(2/3) and then f(1). f(1/3) is found by applying the rule for changing the state of the system, that is, t − f(t)², to f(0), which gives us
 
f(1/3) = f(0) + (1/3)(0 − f(0)²)   (6.5)

which, if I've done my arithmetic correctly, works out to be 2/3. The extra factor of 1/3 is equal to our step size and serves to scale the intermediate steps in line with our particular discretization of time; we are temporarily pretending that the function f is that of a straight line between f(0) and f(1/3), which introduces errors, but errors that shrink as we take more steps (this idea of treating the function as a straight line between time steps is why Euler's method is called the polygonal method).
Anyway, to continue with the example, we now know the value of f(1/3) and we can plug that into the algorithm to obtain the value of the next time step:

f(2/3) = f(1/3) + (1/3)(1/3 − f(1/3)²)   (6.6)

This works out to be 17/27. By repeating this process just one more time, we find that f(1) = 1574/2187, a rather ungainly fraction approximately equal to 0.719707. Probably there is no mistake in my arithmetic, but using a computer with a time step of 0.001 yields a value for f(1) of 0.833121. Decreasing the step size to 0.000001 gives f(1) = 0.833383. The original approximation is very poor. On the other hand,
the difference in final value between a step size of 0.001 and 0.000001 is relatively
modest. If we were to divide the interval between 0 and 1 into 10¹⁰⁰ steps we'd end up very close to the true value (and under relaxed computational restraints this would by definition present no difficulty).
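The whole calculation is a few lines of code; a minimal sketch (my own transcription of the procedure just described) reproduces both the three-step hand computation and the finer-step values quoted above:

```python
def euler(n):
    """Approximate f(1) for df/dt = t - f(t)**2 with f(0) = 1, using n steps."""
    f, t, h = 1.0, 0.0, 1.0 / n
    for _ in range(n):
        f += h * (t - f * f)  # pretend the instantaneous rule holds across the step
        t += h
    return f

print(euler(3))        # 0.719707... (= 1574/2187)
print(euler(1000))     # ~0.83312
print(euler(1000000))  # ~0.83338
```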
It is fairly straightforward to see how this method (and the others presented below)
can be generalized to apply to systems of interconnected differential equations. Sim-
ilar methods have also been devised for functions of many variables and systems of
partial differential equations (which are the more typical case when trying to model
real world phenomena).
In the world of real computers and metered electricity (to say nothing of the
world of computation prior to even the existence of electronic computers), it is
expensive to decrease the step size and this is the primary reason why mathemati-
cians have introduced other methods (a daunting partial list: Taylor, Runge-Kutta,
Euler-Cauchy, Heun, Verlet, Kutta-Nystrom, Curtis, Richardson, Merson, Scraton,
England, Adams-Bashforth, Milne, Adams-Moulton, Hamming and Numerov). So
far as I know these are all, fundamentally, refinements on the Euler method, which
derive their mathematical justifications from the Taylor series representations of
functions. And at least for the common refinements of Eulers method it is easy to
see that they do not improve the in principle accuracy (the accuracy attainable under
relaxed computational constraints) of the resulting approximations.
For example, Heun's method (also known as the mid-point rule) modifies Euler's method by using a two-pass calculation of the next value of the target function. The first pass, sometimes called the predictor step, uses Euler's method to determine an approximate value for the target function, f, just as above. Calling this initial value P (for "predictor value") we have P = f(t) + S·f′(f(t), t) (here f′ is the rule for changing the old value into the new value, the derivative of f, which we are given in the definition of our model, and S is the step size). Using Euler's method we would stop here, accept P as the value of f(t + S) and iterate it back into the procedure. Heun's method instead uses P to compute the rate of change in f at the end-point of the path from f(t) to f(t + S), compares it to the value of the rate of change in f at the beginning of the interval, averages the two, and then uses that average in Euler's rule. That is, f(t + S) = f(t) + (S/2)(f′(f(t), t) + f′(P, t + S)).
The two values summed in the latter half of this equation are the putative slope of
f at the beginning and end of the step change, and it is an evident improvement in the approximation to use the average of these two slopes rather than regarding f as a straight line (if f were a straight line then f′(f(t), t) and f′(P, t + S) would of course be equal). It can be shown that Heun's method is definitely better than Euler's method, with an error of order S³ as opposed to the S² of Euler's method (since S is by definition less than one, the higher the exponent in the error the less the error).
In the real world of unrelaxed computational constraints, such an improvement is well worth the cost of the extra computation at each iteration of the method which Heun's method entails. If, for example, we use Heun's method to approximate our example function with a step size of 1/3 as above (actually 0.333333) we achieve f(1) = 0.854803; recall that Euler's method returns 0.719707 for that step size, and our most accurate calculation gave us 0.833383 using a step size more than 300,000 times finer than the original. Heun's method thus provides a huge improvement. But it is clear from its construction that Heun's method does not add any new accuracy in principle unavailable to Euler's method (it is a straight tradeoff of number of computational steps versus step size).
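In code the two-pass structure is explicit; a minimal sketch (the same toy model, my own transcription of the rule just stated):

```python
def heun(n):
    """Heun's predictor-corrector method for df/dt = t - f(t)**2, f(0) = 1."""
    df = lambda f, t: t - f * f
    f, t, S = 1.0, 0.0, 1.0 / n
    for _ in range(n):
        P = f + S * df(f, t)                      # predictor: a plain Euler step
        f += (S / 2) * (df(f, t) + df(P, t + S))  # corrector: average the two slopes
        t += S
    return f

print(heun(3))  # 0.854803..., versus Euler's 0.719707 at the same step size
```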
The very popular Runge-Kutta method (which dates back to 1901) is a generalization of Heun's method which deploys more predictor values of the slope of the target function across the step change and adds an adjustable weighting of the contributions
of these slope values. The method can also be used adaptively, with the step size varied during the approximation to minimize error. Jos Thijssen goes so far as to say that the Runge-Kutta method "owes its popularity mostly to the possibility of using a variable discretisation step" (Thijssen 1999, p. 474). He also makes a remark which is interesting with respect to predictability: "it is possible to implement the Runge-Kutta method with a prescribed maximum error, ε, to the resulting solution, by adapting the step size automatically" (Thijssen 1999, p. 474). Heun's method is a low-level version of the Runge-Kutta method with a particular choice of number of predictors and weightings. The most commonly used version of the method is the Runge-Kutta method of order four (the order number comes from the derivation of the method from the Taylor series expansion of the target function; Heun's method is of order two), which iteratively calculates four predictor values of the slope of f.
As the reader can imagine, this begins to get very complicated, but the payoff is that
the error for Runge-Kutta of order four goes as S⁵, a rather large improvement over Heun's method. To get a sense of how remarkably better the Runge-Kutta method is compared to the Euler method, consider that with a step size of 1/3 the Runge-Kutta algorithm gives us f(1) = 0.833543, which differs only in the fourth decimal place from the value obtained using Euler with a step size of 0.000001! Nonetheless, it is once again clear that the intrinsic accuracy of the Euler method under the condition of relaxed computational restraints is exactly the same as that of the Runge-Kutta method.
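For completeness, here is the classical order-four scheme applied to the same toy model (a sketch; the slope weightings shown are the standard textbook choice):

```python
def rk4(n):
    """Classical fourth-order Runge-Kutta for df/dt = t - f(t)**2, f(0) = 1."""
    df = lambda f, t: t - f * f
    f, t, S = 1.0, 0.0, 1.0 / n
    for _ in range(n):
        k1 = df(f, t)                       # four slope estimates across the step...
        k2 = df(f + S / 2 * k1, t + S / 2)
        k3 = df(f + S / 2 * k2, t + S / 2)
        k4 = df(f + S * k3, t + S)
        f += (S / 6) * (k1 + 2 * k2 + 2 * k3 + k4)  # ...combined with fixed weights
        t += S
    return f

print(rk4(3))  # ~0.833543, matching the figure quoted in the text
```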
Of course, we have been looking at trivially simple examples. To move slightly
in the direction of reality, consider that many physical systems are most directly
described in terms of the forces acting on the elements of the system and this naturally
gives rise to second-order differential equations and there are methods especially
suited to these (though it is mathematically possible to transform such equations
into sets of first-order equations), such as Verlet's or Numerov's methods. Verlet's method has a neat leap-frog variation in which velocities are calculated at time steps
exactly half-way between the time steps at which positions are calculated. Verlet's method has another nice property, that of symplecticity (see below), which means
that the error in the energy of the approximated system remains within certain bounds
(that can be adjusted).
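Here is a minimal sketch of the velocity form of Verlet's method, applied to a unit harmonic oscillator (a toy system of my own choosing): the half-step "kick, drift, kick" pattern is the leap-frog idea, and the conserved quantity x² + v² stays bounded rather than drifting, illustrating the symplectic property just mentioned.

```python
def verlet(x, v, dt, steps, a=lambda x: -x):
    """Velocity-Verlet for x'' = a(x); default a(x) = -x is a unit oscillator."""
    for _ in range(steps):
        v_half = v + 0.5 * dt * a(x)   # half-kick
        x = x + dt * v_half            # drift
        v = v_half + 0.5 * dt * a(x)   # half-kick with the new acceleration
    return x, v

x, v = verlet(1.0, 0.0, 0.01, 10000)
print(x * x + v * v)  # stays ~1.0: the energy error remains bounded
```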
For all such methods, as a practical matter, we want to use a step size which is
compatible with the competing constraints of our computational resources and our
desire for an accurate prediction of the future state of a dynamical system as rep-
resented by a mathematical model. In general, the bigger the step size the cheaper
the computation but the less accurate the result. As we have just seen, clever meth-
ods let us get more accuracy out of a given step size as compared to less ingenious
methods. It would be nice to know, or at least have some idea of, what an appropriate
step size might be for any particular system. Here we begin to make our way back
to the topic of chaos, for at this point it looks like a clever mathematical device
known as Lyapunov exponents might help. Here's a rough characterization drawn from Gençay and Dechert (1996): "Lyapunov exponents measure the rate of divergence or convergence of two nearby initial points of a dynamical system. A positive Lyapunov exponent measures the average exponential divergence of two nearby trajectories, whereas a negative Lyapunov exponent measures exponential convergence of two nearby trajectories…" If we take two points which are possible initial states of a dynamical system, and we assume that these points are close to each other
(a somewhat slippery notion) then the difference in final state of the system is typ-
ically much larger than the difference in the initial states. In fact, in a system that
exhibits sensitive dependence on initial conditions, the difference grows exponen-
tially as the system evolves from each of the two initial states. (One can see that
this concept only makes sense for a deterministic system, as defined above, since a
non-deterministic system will not have a well defined trajectory through phase space
against which one could measure the accumulating error.)
Using a very simple example, one can express how the Lyapunov exponent can help measure the exponential divergence thus: where f_t(x) is the state the system is in at time t given initial state f_i(x), then |f_t(x) − f_t(y)| (the absolute value of the difference between the two final states) is approximately equal to |f_i(x) − f_i(y)|e^(tc). Here c is the Lyapunov exponent for the system in question. Knowing the Lyapunov
exponent (or exponents, multidimensional systems have one for each dimension of
their phase space) reveals a lot about how chaotic the system is, helps us to discover the system's fixed points or attractors, and tells us something about the qualitative
behaviour of the system. Basically, a positive Lyapunov exponent means a system is
unpredictable, in the sense that very small initial uncertainty about the state of the
system will rapidly balloon into a very great uncertainty. But this, once more, does
not suggest in any way that in principle predictability will fail. From the point of
view of predictability, a Lyapunov exponent merely gives us some idea of how small
a step size might be appropriate in order to achieve a given predictive accuracy for
a specified time. Since under relaxed computational restraints we can pick any step
size we like, we can always defeat the exponentially increasing error for the specified
time, to the specified degree of accuracy.
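To make the estimation of such an exponent less abstract, here is a minimal sketch using a standard textbook example of my own choosing (not one from the text): for the discrete-time logistic map x → rx(1 − x) at r = 4, the largest Lyapunov exponent is known exactly to be ln 2 ≈ 0.693, and it can be estimated as the long-run average of log |f′(x)| along a trajectory.

```python
import math

def lyapunov(r=4.0, x=0.3, n=100_000, burn=1_000):
    """Estimate the largest Lyapunov exponent of the logistic map."""
    for _ in range(burn):          # discard transient behaviour
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))  # log |f'(x)| along the orbit
        x = r * x * (1 - x)
    return total / n

print(lyapunov())  # ~0.693, i.e. ln 2: a positive exponent, so chaotic
```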

6.3 Hitting the Emergence Wall

So it seems reasonably clear that we do have predictability in principle, or predictability under relaxed computational constraints, for dynamical systems.6 Let us then, finally, return to our discussion of the famous work of Edward Lorenz. Lorenz's model can be expressed in just three interlocked differential equations that are worth displaying:

df(t)/dt = 10(g(t) − f(t))   (6.7)

dg(t)/dt = 28f(t) − g(t) − f(t)h(t)   (6.8)

dh(t)/dt = f(t)g(t) − (8/3)h(t)   (6.9)
The functions f , g and h represent properties of an ideal convecting atmosphere:
f represents the speed of convection, g the temperature difference between distinct
convection currents and h represents a profile of temperature difference measured
vertically.
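Since the system must be integrated numerically anyway, here is a minimal sketch (with a step size and initial state of my own arbitrary choosing) that advances Eqs. 6.7–6.9 with fixed-step fourth-order Runge-Kutta; plotting the collected f and h values against each other reproduces the familiar butterfly shape of Fig. 6.1.

```python
def lorenz(s):
    """Right-hand sides of Eqs. 6.7-6.9."""
    f, g, h = s
    return (10.0 * (g - f), 28.0 * f - g - f * h, f * g - (8.0 / 3.0) * h)

def rk4_step(s, dt):
    def shift(state, k, w):
        return tuple(x + w * ki for x, ki in zip(state, k))
    k1 = lorenz(s)
    k2 = lorenz(shift(s, k1, dt / 2))
    k3 = lorenz(shift(s, k2, dt / 2))
    k4 = lorenz(shift(s, k3, dt))
    return tuple(x + (dt / 6) * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

state, dt, trajectory = (1.0, 1.0, 1.0), 0.01, []
for _ in range(5000):
    state = rk4_step(state, dt)
    trajectory.append(state)  # plot f against h to see the butterfly
```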
Although this model is in some ways very simple, the three functions are inter-
dependent. The system cannot be given an analytic mathematical solution and it
gives rise to chaotic behaviour in the usual sense that very small differences in initial
input conditions lead to very different final states after quite brief evolution times. It
was in fact through the necessity of using numerical approximation techniques that
Lorenz discovered that his system was chaotic (the difference in initial conditions
engendered by round-off errors in his computer quickly led to entirely dissimilar
evolutions). Subsequent investigation led to the discovery that this system is what
we might call "predictable at the emergent level". What I mean is that no matter what state the system begins in, it quickly settles down to behaviour that is macroscopically similar though microscopically chaotic. The macroscopic structure of the model's dynamics is immediately apparent from a typical diagram of the system's evolution, the famous "butterfly wings" picture of the Lorenz strange attractor (see Fig. 6.1). The diagram is simply a plot of the values of the three functions at different times.
No matter the initial state of the system (within reasonable limits) it will trace a
path very similar to this diagram, and this gross behaviour is entirely predictable.
This is certainly a kind of dynamical autonomy. On the other hand, every trajectory
is entirely distinct, with not a single point in common with any other trajectory, and
the system will switch seemingly at random from one wing of the diagram to the
Fig. 6.1 The Lorenz attractor

other. The appearance of intersection in the diagram is an artifact of compressing three dimensions into two and the need to draw the trajectory thick enough to be
visible; in fact the trajectories of the model are infinitely thin and interwoven with
an infinite complexity.
This example is entirely in line with the discussion above. If we seek an expla-
nation for the striking dynamical autonomy exhibited here, we need look no further
than the model itself. It is a mathematical fact that this model will possess the twin
strange attractors around which the system will, in its chaotic way, perpetually orbit.
Therefore, any system to which this model is applicable will automatically share the model's autonomy. And the model will be applicable to any natural system which permits a correspondence relation between the parameters of the model (the interlocked functions f, g and h in this case) and some features of the natural system
(e.g. temperature, air current velocities, etc.) and which is, as implicitly assumed in
the model, sufficiently isolated from other disturbing influences. Multiple realizabil-
ity will similarly, and equally automatically, hold to the extent that there are many
different natural systems for which the appropriate correspondence between system
and model can be established.
New issues arise when we think about this correspondence relation. Since Lorenz
was intentionally attempting to investigate certain aspects of the Earth's atmospheric dynamics, he devised a model whose parameters correspond somewhat (but only somewhat, for this is a crude model) to natural atmospheric variables, such as temperature and wind velocity. But these are themselves high level features relative to the atmosphere's micro-constituents. As we have seen, this sort of emergence on emergence means that exactly the same questions about model applicability, multiple realizability and dynamical autonomy arise at the level of the parameters of a
model such as Lorenzs.
For example, one of the parameters used by Lorenz was temperature, the emer-
gence of which is quite well understood. Temperature is modeled by the average
kinetic energy of the micro-constituents of the atmosphere, that is, the molecules of nitrogen, oxygen and carbon dioxide which (along with a host of lesser constituents) make up the atmosphere of the Earth. These micro-constituents are themselves modeled by statistical mechanics, which treats them as individual particles obeying the
mechanistic laws of classical physics. The way that temperature, pressure, density and
other thermodynamical features emerge from the properties of the micro-constituents
is a beautiful but familiar story which need not be repeated here.7
What I want to discuss is the way that a model such as Lorenz's will inevitably break down if we take it fully seriously. What I mean can best be explained from the point of view of the predictability of systems such as Lorenz's. These are
by now classic examples of chaotic systems which exhibit sensitive dependence on
initial conditions. Let us suppose that there was some natural system, the weather
in some region on Earth as it might be, that we sought to model with a system like
Lorenzs (no doubt it would have to be much more complex than the Lorenz system
with many more parameters and equations, but that does not matter here8 ). The point
of the exercise is to predict weather with some reasonable degree of accuracy for
some reasonable time into the future. We will not be satisfied if our predictions bear
no relation to the actual weather, or no more accurate relation than folk weather lore
can attain, or if satisfactory accuracy can only be obtained for a few days.
The fact that weather models are chaotic does not preclude predictability in princi-
ple, at least in the sense of predictability which includes simulation that we endorsed
above. But in the unavoidable absence of relaxed computational constraints the cost,
in terms of time and machine resources, of prediction increases exponentially as we
seek more accuracy or want to look further into the future at a given level of accuracy.
For the sake of the argument, however, suppose that we have an unlimited ability to
gather the extra information about the initial or current state of the system which is
needed for better and more accurate prediction and that we are operating under what I
called above relaxed computational constraints. From the point of view of the model
that is all that is neededand all that could be neededfor unlimited accuracy of
prediction out to a specified time.
But from the point of view of the natural system under study, this is not at all
true. If we seek ever more accurate predictions we must specify the initial conditions
of the system to a finer and finer degree. Suppose, for example, that we need better
measurements of the temperature throughout the region of interest (even the whole
Earth, if global climate is our concern). It is evident that the conditions for the
emergence of the relevant parameters, such as temperature, will give out as we
seek to measure the temperature in smaller and smaller volumes of the atmosphere.
Since temperature emerges from the averaged activity of the molecules that make
up the atmosphere, if we seek to measure temperature over volumes in which, on
average, no molecules whatsoever are present we cannot expect to get any sensible
value for the temperature. And yet there is a definite level of desired accuracy and
duration of prediction for which the model would demand temperature data for those
volumes. Since such values are simply unavailable, the model is no longer applicable to its target natural system.
In general, there will be many reasons why the conditions for the emergence of a model's parameters break down. Fineness of spatial volume is a typical example, but there might be temporal or energy level breakdowns as well. In our example, the model would tend to break down for reasons such as the intrusion of
molecular level forces which are invisible at the level of the model. (I should hasten
to mention that this is not a practical worry. Current military global weather models
have typical cell sizes on the order of a few kilometres, with variable vertical scales,
and even there, of course, the data themselves which are assigned to these cells
are interpolated. It seems also likely that there is a finite size limit on how small a
temperature sensing device could be and this introduces another limitation on the
application of the model, which assumes that data are "free" in the sense that data acquisition does not itself interfere with the modeled processes.)
Let's call the level of specificity which undercuts or eliminates the conditions of the emergence of the parameters of our model the "emergence wall". If a problem requires data of greater specificity than the conditions of emergence permit, then the model has hit the emergence wall.
It is an interesting theoretical question, and one for which I have no idea of the
answer, what sort of theoretical accuracy in weather prediction we could attain, using current models and assuming that data sensors were as unobtrusive as we liked, before the model hit the emergence wall. Would it be months, years or centuries? This is the question of how quickly error tends to grow in our weather models
(recall how this question can be expressed in terms of the value of Lyapunov expo-
nents). There is no doubt that there is some time for which prediction would fail
because of this problem.
If a model is chaotic then the error in prediction grows exponentially with the uncertainty in the initial conditions and the desired length of prediction. That is, if the initial uncertainty is represented as Δx then the error in prediction will scale as Δx · e^(λt), where λ is the Lyapunov exponent for this model (or, somewhat more accurately, λ is the maximum Lyapunov exponent from the set of exponents which the model will generate, one for each spatial dimension; there are other ignored complexities here, such as the fact that the Lyapunov exponents will vary across different regions of the system's phase space). Simply as an exercise, let's suppose that the critical exponent, λ, is equal to one (which is perhaps not too unreasonable seeing as the corresponding exponent for the Lorenz system is 0.9), that the uncertainty of initial conditions in our current weather model can be expressed in terms of a length scale (the cell size discussed above) which we can rather arbitrarily set to, say, 3,000 m. Finally, suppose that we can already predict the weather with acceptable accuracy for 4 days.
Given this imaginary data, we can compute the maximum length of acceptable
prediction before the model hits the emergence wall, where, again somewhat arbi-
trarily but conservatively, we set the latter value at roughly the size of a typical molecule, or about 3 × 10⁻¹⁰ metres (a cell size for which it is very obvious that the
concepts of temperature, pressure etc. that apply in weather modeling have no mean-
ing whatsoever). I have to admit that I am amazed at the answer. Within the confines
of this toy problem, even if we knew the state of the atmosphere at molecular length
scales we could make predictions as accurate as our presumed current ones for only
thirty extra days beyond the original four days.9 On the other hand, and somewhat
more optimistically, this suggests that there is a lot of room for improvement in our
standard methods of weather forecasting.
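The arithmetic behind that amazing answer is a one-liner; a sketch under the toy assumptions just stated (λ = 1 per day, an assumed unit):

```python
import math

# Exponential error growth e^(lambda*t) means extra initial precision buys
# prediction time only logarithmically. Going from 3,000 m cells down to the
# ~3e-10 m molecular scale is a factor of 10^13 in precision.
extra_days = math.log(3_000 / 3e-10)  # = ln(10^13), with lambda = 1 per day
print(extra_days)  # ~29.9: about thirty days beyond the original four
```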
Weather prediction is notoriously difficult and nobody expects any accuracy
beyond just a few days at most, except of course at those places where the weather tends never to change, but that reflects no virtue of the weather models. The domain
which represents the opposite end of the accuracy scale is celestial mechanics, which
many cultures have cultivated and which for thousands of years has made stunningly
accurate predictions of events far into the future.10 It is thus interesting and some-
what disturbing that the planetary orbits are in fact chaotic (see Murray and Holman
2001) and, at least for the outer planets, are basically unpredictable on timescales
of a few tens of millions of years. The problem of the stability of the Solar System
goes back to Newton, who wondered whether God might have to step in every so
often to nudge the planets back into their proper orbits.11 The worry seemed real
as in the century after Newton the orbits of Jupiter and Saturn would not come into
line with astronomical prediction. But this "great inequality" was solved by Pierre Laplace, who took into account the 5:2 orbital resonance between the two giant planets, an achievement that confirmed his determinism and helped lead to his famous anti-Newtonian remark that his system had no need for the hypothesis of God. This
particular intellectual pendulum swung back with the work of Henri Poincaré on Solar System dynamics in the late nineteenth century, which led to a nice statement of the phenomenon that came to be called chaos: "it may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible" (Poincaré 1908/1960, p. 68).
This is not to say that our Solar System is going to fall apart any time soon (if you
can call a few tens of millions of years soon). Although the orbital position of the
planets is chaotic the basic structure of the system is quite stable. For example, the
time before Uranus is likely to be ejected from the Solar System under the influence
of its much more massive neighbors, Saturn and Jupiter, is something like 10¹⁸ years (Murray and Holman 2001, p. 778).12 Long, long before that the Sun itself will have
burnt out.
There is a point of some interest here. Although both the weather and the Solar
System are chaotic dynamic systems, the timescale on which chaos reveals itself in
the latter case is so long that we can preserve the illusion that the Solar System is easily
predictable. The same would be true of the weather if we wished to use our models
to make predictions for the next five minutes (though of course it will take longer
than five minutes to get any predictions out of our weather models). Predictability
is a relative notion: relative to the timescales natural to human observers, relative to the natural timescales of chaos of the systems in which we are interested, and relative to
the time it takes for our models to generate their predictions.
It is important to emphasize that the location of the emergence wall is thus also
relative to the model under consideration. A model of the solar system faces an
emergence wall no less than a model of the weather, but the relationship between the
order of accuracy and length of prediction is of course radically different for solar
system modeling than it is for weather modeling. Nonetheless, because the solar
system is in fact a chaotic dynamical system, the model of more or less spheroidal (the "more or less" can be mapped out in some detail as well) masses in mutual gravitational interaction will give out if predictive demands are pushed far enough (though perhaps the model's accuracy with respect to the evolution of the real solar system will never fail internally, as we are discussing, but rather more spectacularly in the destruction of the solar system itself by some unmodeled force or event).13
While my weather model example was entirely made up and has no connection
to real weather forecasting models, its upshot is nonetheless relevant and sobering.
The emergence wall is real.
However, the issue only arises when we have a good deal of knowledge about the
conditions under which whatever high level feature we are interested in emerges. In
order to calculate, even very roughly and approximately, the domain in which the
emergence wall threatens a model, we need to have a good idea of how the emergence
of the models parameters comes about. Unfortunately, for a great many high-level
features of the world we have very little knowledge about the conditions of their
emergence. It is also necessary to have a reasonably good theory of the emergent
features themselves, at least good enough to define initial conditions in ways that
constrain the evolution of the system sufficiently to gauge how changes in the initial conditions affect the system's development.
Consider, for example, the issue which is really driving the emergence debate: the
emergence of mental states, especially states of consciousness. Here, we are pretty
much completely in the dark on both counts. Notwithstanding all the various kinds of evidence we possess which link the mind to activities in the brain, a minute
fraction of which was discussed above in Chaps. 3 and 4, we know very little about
the conditions for the emergence of mental states. We have no theory of psychology
which is powerful enough to allow for even rough specifications of initial conditions
with enough bite to yield predictions of behaviour which we could contrast with
the behaviour of alternative, nearby initial conditions. We cannot apply an analysis anything like that of the weather models to the mind unless and until we have some
theory of the relation of mental states to some underlying features for which we also
possess some reasonably well worked out theory.
It is easy, tempting and possibly correct to speculate that mental states emerge
from the activities of whole neurons which stand in more or less particular, essentially
discrete, network structures (as discussed in Chap. 3). If so, we might guess that the
emergence wall arises somewhere near the length scales of individual neurons and
temporal scales characteristic of these neurons network activity (that is, their firing
rates or spiking frequencies). It is quite conceivable that we will someday have
a theory which provides a good dynamical understanding of the state evolution of
biological neural networks. Even in that neuroscientific utopia, we would further
require a much better psychological theory than any we possess now. In general
matters of the prediction of human behaviour, the best theory we have got is still our
ancient standby, folk psychology, and for all its evident strengths it is very weak on
assessing the differences in behaviour that arise from assigning closely similar initial
conditions to psychological systems. Say that Jill loves Jim "very much" or just "a lot": what's the difference in behaviour predicted by these different initial states, even allowing that all else is equal?
Perhaps a theory pitched at the emergent level of psychology but explicitly mod-
eled on dynamical systems theory might offer some hope here. The so-called dynam-
ical systems approach to cognitive science aims precisely for this (for examples of
the approach in action see the various articles in Port and van Gelder 1995;14 a more abstract philosophical account of the idea can be found in Horgan and Tienson
1996). In such a theory, psychological states would be represented as, or determined
by, interacting cognitive forces driving the evolution of the system's behaviour. If
such a theory could be constructed and if it allowed for the more or less accurate
measurement of these putative psychological forces, we might have a theory suitable
for discussing the emergence of psychology from neural activity, and thus the further
possibility of locating the emergence wall of psychology.
Be that as it may, we can ask a more general question. What, if anything, does
the existence of the emergence wall tell us about emergent features? The first conse-
quence is that whatever emergent features we might be concerned with, the theory
which deals with them is of limited applicability. There is, of course, a trivial sense
in which this is true: particular theories of emergents only apply to systems which
support the associated emergent features, and part of the point of invoking the notion
of emergence is that such systems are more or less rare. But the case of dynamical
systems such as we have been looking at reveals a more serious problem of limited
applicability. At least some theories of emergent features will fail within their domain
of applicability, or, it would be better to say, within what ought to be their domain of
applicability. Following the toy analysis of the emergence wall for the weather model
given above, that model would be unable to give, say, a fifty day prediction because
of an intrinsic failure. Any attempt to give such a prediction requires us to leave the domain of emergence altogether, leading to the total breakdown of the model's applicability even though we ultimately seek information that is entirely within the purview of the model.
The general condition for such a breakdown is some characteristic scale of
emergence. Emergent features necessarily emerge from activity that occurs on scales
finer (in some not necessarily spatial sense) than the scale of emergence, and the
detailed behaviour of the emergents themselves remains dependent upon that activity.
If the relevant structure or processes at the fine scale fall apart or cease to occur then
the emergent features themselves will also disappear. In the weather model, the scale was simply length, but other scales are possible (time and energy suggest themselves immediately; abstract scales of complexity, if we knew how to measure it, provide more esoteric possibilities).
Prior to drawing any philosophical conclusion about ontological emergence from
the fact of the emergence wall, it is worth briefly examining the generality of the
phenomenon. It is often said that Newtonian physics is a low-energy approximation
of relativistic physics. We can see that this comes very close to being equivalent, if
it is not strictly equivalent, to saying that Newtonian phenomena (that is, objects, events and processes to which Newtonian models are applicable) are emergent
features, springing forth from some other domain. For example, Newtonian masses
are constant; they do not change merely because the massive object happens to be
moving. Thus a model of the acceleration of a Newtonian mass is quite simple: a
constant force applied to a mass will accelerate it at a rate inversely proportional to its mass (Newton's second law), and this rate will be constant. It follows that
is applied to it. Of course, relativity tells us that this picture is far from true when
velocities become very large relative to the speed of light.
We can locate the Newtonian emergence wall once we fix how accurate a char-
acterization we seek as well as fixing the parameters of a certain problem (as in the
weather example above). Let us say that we are modeling the time it takes to accel-
erate something to 100,000 km/s under an acceleration of 10 m/s². This is a trivial problem for the Newtonian model: the time we seek, t, is just v/a; in this case t is about 116 days. Although the calculation is not quite so trivial,15 the correct special
relativistic model gives an answer of about 118 days. If it really mattered for some
reason exactly when (or even to within a day or two) the object of interest reached
the target speed (from the point of view of the observer) then our Newtonian model
would have hit the emergence wall.
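Both figures can be checked quickly; a sketch in which the relativistic formula is one plausible reading of my own (the text's footnote 15, which spells out its model, is not shown here): taking the effective acceleration to fall off as 1/γ gives t = (c/a)·asin(v/c) analytically, which lands on the quoted value.

```python
import math

c, a, v = 3e8, 10.0, 1e8  # m/s, m/s^2, target speed (m/s)

# Newtonian: constant acceleration all the way, t = v/a
print(v / a / 86400)  # ~115.7 days ("about 116")

# Assumed relativistic model: effective acceleration reduced by 1/gamma,
# so t = (c/a) * asin(v/c)
print((c / a) * math.asin(v / c) / 86400)  # ~118.0 days ("about 118")
```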
A much less trivial example, and one with great practical import, is the corrections to the Newtonian viewpoint necessary for the GPS satellite navigation system to work. A number of relativistic effects, derived from both the special and general theory of relativity, including motion induced time dilation, the effect of the gravitational field on local time and effects generated by the Earth's rotation, have to be included in the GPS models. Without them, GPS positions would degrade at the rate of some ten kilometres per day and would thus quickly become useless (see Ashby 2003).
The Newtonian mechanics of mass and acceleration which the GPS system reveals as inadequate is only a tiny part of a much larger general problem of emergence: the appearance of a classical world out of the non-classical physics, quantum mechanics as well as relativity, which underpins it. In general, we can say that the
classical world emerges from the underlying quantum and relativistic structure of the
universe. This is to say, because of the nature of the quantum world, the relativistic
world plus the particular state of the universe in our general vicinity, classical models
become applicable to a wide range of phenomena across a wide range of spatial,
temporal and energetic scales. This problem connects to many of the core problems
in the philosophy of physics. The measurement problem in quantum mechanics stems
from the putative mismatch between the observed classical world of definite objects
and properties as opposed to the smeared out world of state superpositions evidently
predicted by quantum physics. In general, one of the constraints on any interpretation
of quantum mechanics is that it somehow recover the sort of world which we observe
ourselves to inhabit. The kind of chaotic dynamics discussed above is a hallmark of
classical physics but there is some question of how anything like it can appear in
a world that is fundamentally quantum mechanical in nature (see Ford 1989, Belot
and Earman 1997).
While this is an extremely large, complex and by now highly technical issue, there
are some elementary, well known and important links between the correct physics
and the classical world which emerges from it. The toy example above shows that for
speeds much less than the speed of light, classical mechanics will be highly accurate
at predicting accelerated motion. Of course, our dealings with the world, even in a
scientific context, overwhelmingly involve such velocities.
A more interesting link, and now one between quantum and classical physics, is
provided by the so-called Ehrenfest equations. These state that the expectation values
of the position and momentum of a quantum system will obey laws which are close
analogues of the corresponding classical laws. In classical mechanics we know that
momentum is just the product of mass and velocity, p = mv, which can be rewritten
in the form of a relation between momentum and position as

p = m dx/dt   (6.10)

The corresponding Ehrenfest equation is this:

⟨p⟩ = m d⟨x⟩/dt   (6.11)

where ⟨x⟩ and ⟨p⟩ are the expectation values of the position and momentum operators.
Roughly speaking, these are the mean values you would expect to get if you measured
position or momentum for a system in a given state many times (which is not to say
that you would ever obtain exactly the value ⟨x⟩ in any particular measurement).
Although it is interesting that quantum mechanics duplicates the classical equation,
it means little unless we also consider the uncertainty in position and momentum
against which the expectations values are defined, for we would hardly find ourselves
in a classical world if objects with quite definite velocities did not seem to have any
particular location!
Everyone knows that the famous Heisenberg uncertainty principle puts limits on
how well we can measure both position and momentum and it is generally agreed
that this is a feature of the world rather than a mere epistemic limitation.16 Quantum
uncertainty is relative to the mass of the objects involved. The uncertainty relation itself is simply this: Δx Δp ≥ ℏ, where ℏ (Planck's constant, h divided by 2π) has the incredibly tiny value of 1.054 × 10⁻³⁴ (in units of energy·time). We could therefore know an object's position to a precision of more than a million billionth (10⁻¹⁵) of a centimetre while retaining an even larger order of precision in the object's momentum (a billion-billionth (10⁻¹⁸) of a centimetre-gram per second).
Obviously, for objects in the classical domain, the relevant masses are so large that
there is, for all practical purposes, no difference between the expectation value and
the values we get for each individual measurement.
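As a quick back-of-the-envelope check (reading the two quoted precisions in consistent SI-style units, an assumption of mine), the product of the two uncertainties indeed stays above ℏ, so both precisions are simultaneously permitted:

```python
hbar = 1.054e-34  # J*s, as quoted in the text

# Assumed consistent-unit reading of the two quoted precisions
dx, dp = 1e-15, 1e-18
print(dx * dp, dx * dp >= hbar)  # 1e-33, True: comfortably above hbar
```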
The second Ehrenfest equation is the quantum mechanical analogue of Newton's second law, F = ma, or

F(x) = m d²x/dt²   (6.12)

The quantum version once again simply replaces the definite classical values with expectation values, thus:

⟨F(x)⟩ = m d²⟨x⟩/dt²   (6.13)
Once again, this tells us little unless we consider the uncertainties involved. It can
be shown that for macroscopic objects the uncertainty is beneath consideration,
which means in the first place that the following somewhat subtle identity holds: ⟨F(x)⟩ = F(⟨x⟩), and this means that the equation above falls exactly into line with the Newtonian version. Then we can deploy the same analysis as for the first
Ehrenfest equation to show that for large objects the uncertainty in position will be
so small that there will be no practical difference between the expectation value of a
set of measurements and the values of the individual measurements.
That may seem reassuring inasmuch as such a tiny uncertainty at classical levels
will presumably not lead to any untoward quantum intrusions into the classical world.
In a certain sense, of course, this must be true since we do after all observe a pretty
good simulacrum of a classical world, but the issue is in fact very complex (see
Belot and Earman 1997, Belot 2000). As we have seen above, we can ask about
the emergence wall in terms of possible prediction times. Normally, this involves
asking how long the emergent level model will provide accurate (enough) prediction
before our quest for accuracy forces us to descend to an explicit account of the (or
a) submergent domain. We can also ask a kind of inverse question. Supposing we
begin within the submergent domain, how long will our predictions remain accurate?
This ought to have a trivial answer: forever. But it might not in particular cases, if
there is, for example, a hierarchy of levels of emergence and we thus expect to hit
another emergence wall as we attempt increasingly accurate predictions over longer
time periods.
Another more interesting possibility is that it might be the case that the known
stability of the emergent domain will put constraints on the submergent account.
How can this be? One radical answer would appeal to some kind of genuine, if rather
magical, emergent downward causation. But one must be careful here to spell out
clearly exactly what this view amounts to. Judging by the literature, it is apparently tempting and all too easy to slip into the posture of radical or ontological emergentism, and then to retreat under pressure to the idea that emergentism is just
an epistemic necessity, and that there is no downward causation in the genuinely
metaphysical sense of the term.
One instructively unclear writer who at least courted this confusion was the famous
neuropsychologist Roger Sperry who made the seemingly radical suggestion that the
study of mind and consciousness requires a shift to a new form of causality, "a shift specifically from conventional microdeterminism to a new macromental determinism involving top down emergent control" (Sperry 1991, p. 222).17 But when we get
to the proposed explanation of emergent control it seems less ontologically exciting
than it sounds at first. In truth, it seems no more than the claim that the laws of
microphysics have to be supplemented with some initial conditions in order for
any state sequence to be determined. Sperry's favorite example is how the molecules
of a wheel are controlled by its emergent circular shape. But either this just means
we have to take into account the initial arrangement of molecules plus those in the
environment or else it lapses into highly dubious claims about the emergent physical "properties and laws for the wheel as a whole" (Sperry 1991, p. 225).
Of course, we can accept the explanatory importance, perhaps indispensability
of a sort deeper than the merely practical, of such macro descriptions, but this does
not entail radical emergentism. The deflationary reading is encouraged by Sperry's acceptance of the core claim of the non-radical emergentist that emergent interactions are "accomplished without disrupting the chains of causation among the sub-entities at their own lower levels, nor in their upward determination of the emergent properties" (Sperry 1991, p. 225). At this point it is tempting to reduce away top
down causation by a simple argument based on the transitivity of determination or
causation. It may be worth noting a certain similarity between Sperry's example and Hilary Putnam's discussion of the square block and round hole (Putnam 1975). Putnam rightly points out that the intelligible explanation of why the block won't pass through the hole is not going to appeal to the fundamental physical state of the
block but will operate at a more everyday level. That however does not undercut the
claim of full determination of this effect by the micro-physical state. And Putnam is
consistently clear that the issue is one of explanation.
More prosaically, we have here simply a kind of test of the adequacy of the theory
of the submergent domain. In the special case of retrieving classical behavior out of
quantum mechanics we might ask the question in this way: how long would we expect
a classical (emergent) system to retain its, at least approximate, classicality? We know that the answer is "indefinitely" and if the quantum account cannot achieve this then it is at fault and not the classical world. Such derivations can thus provide a test
of the theory of the submergent domain.

6.4 Quantum Emergence

An especially interesting example of this sort of thing is the investigation of the quantum stability of the Solar System undertaken by Wojciech Zurek (this work stems from a research program of Zurek and Paz developed in a series of papers;
stems from a research program of Zurek and Paz developed in a series of papers;
see for example Zurek 1991, Zurek 2002, Zurek and Paz 1994 and, for the particular
case of the Solar system, Zurek 1998). In yet another instance of an emergence
wall, the very small quantum level uncertainties in the position and momentum of
macroscopic bodies will interact with the deterministic chaos of certain dynamical
systems sufficiently to have effects within a reasonably short time (where "short" is relative to the lifetime of the system at issue).18
While in classical models, the growth of uncertainty is merely epistemological
and results only in an inability to know the future state of the system, the quantum
model of the evolution of such uncertainties basically requires that the system go into
a coherent superposition of all the states compatible with that degree of uncertainty.
Chaos tells us that this uncertainty is going to grow exponentially quickly which
ought to result in a kind of smearing out of position over time. But, one might be
forgiven for hoping, the smearing will require quite a long time. Unfortunately,
Zurek calculates that for the planetary bodies within our Solar System the time at which Newtonian behavior should break down (with the positions of the planets smeared out to the size of the system itself, such that it makes no sense to speak of their orbits or positions) is less than one billion years! Since the Solar System
is more like four and a half billion years old, and the Earth at least seems to have
retained a pretty definite position and orbit, there is evidently something wrong with
the Zurek calculation. This is a very beautiful problem in the way it unites the two
hallmark features of the classical and quantum worlds: deterministic chaos and non-
epistemic uncertainty, in a deep paradox. In general, chaos plus quantum mechanics
leads to the conclusion that the classical world is impossible.
Zurek's own solution is that the constant interaction between the Solar System and the galactic environment of dust and gas forces the system to behave classically.
In effect, the environmental links prevent the system from entering or remaining for
long in the superpositions of states that would be produced by the normal evolution
of a quantum state, a phenomenon known in general as decoherence.
This is a ubiquitous feature of quantum systems. They are in constant interac-
tion with a huge and complex environment which acts, so to speak, as a continual
observer. This manner of speaking suggests that the quantum wave function describ-
ing the system in question is continuously trying to go into superposition states
but the influence of the environment is perpetually forcing the collapse of the super-
position into just one of its terms. There are indeed some theories that posit an
actual dynamical process of wave function collapse (for an excellent overview of the
Dynamical Reduction Program see Ghirardi 2008; Roger Penrose develops a distinct
theory of wave function collapse in Penrose 1989, especially Chap. 8). These theories explicitly introduce new fundamental physics for which there is at the moment not a
trace of empirical evidence. Why are they needed? Why not let ordinary processes
of decoherence collapse quantum wave functions in the standard way?
One obvious problem with taking the easy way out is immediately apparent when
we step back and regard the entire universe as our system of interest. If our theories
are intended to provide the true account of the nature of reality, then there seems to
be no objection to the idea that the entire universe is itself an entity that ought to fall
within the purview of physical theory. But the entire universe does not interact with
any environment and thus what we might call total superpositions ought to remain
uncollapsed. There is no reason to deny that such total superpositions will involve
macroscopic, ostensibly classical objects.
But then there is no way to escape going all the way to a view in which the universe
is in some incalculably complex superposition of everything that could have happened
since the beginning of time. This picture of the universe generally goes by the name
of the many worlds interpretation of quantum mechanics, first proposed by Hugh
Everett (Everett 1957), according to which the universe contains infinitely many
alternative histories corresponding to all events (or, better, all possible sequences
of events) permitted by the laws of quantum theory and the initial quantum state
of the universe. This does not seem to be progress. But there is an ongoing theo-
retical framework in which it can, pretty much, be shown that the overwhelming
majority of the worlds that make up the many-worlds universe will appear classical
(i.e. things will appear to have definite attributes, not smeared-out and weird super-
positional properties). This approach to the emergence of the classical sometimes
goes by the name of decoherent histories and there is already a huge literature,
both scientific and philosophical, exploring its prospects and consequences (Joos
et al. 2003 provides a technical book-length treatment; see also Tegmark 2007a
and, for a good philosophically oriented overview and discussion, see Bacciagaluppi
2007).

[Fig. 6.2 Double slit experiment]
The basic idea is that within the quantum jumble, environmental decoherence
will select certain states which will become effectively isolated from each other,
in the precise sense that they will not interfere with each other in the quantum
mechanical meaning of interference. This comes close to the claim that they will
act like classical states. Sequences of these states can be re-identified over time
and generate the worldsand the observers within themwhich make up the total
universe. The superpositions do not disappear. They become, so to speak, manageable,
without overtly observable effects (save in special cases where quantum coherence
is preserved, as for example in a physics laboratory for a short time).
Let's put this aside for a moment to lay out some basic features of quantum
mechanics that will help clarify the claims made by the decoherent history approach
about the emergence of the classical domain. Probably the best way to do this is to
consider the famous double-slit experiment.
Imagine a beam of particles (in principle it does not matter what kind or size)
directed towards a barrier in which only two slits allow passage. The distance between
the slits has been very carefully calibrated to the size of the impinging particles. At
some distance behind the barrier is a detector plate that can register the particles that
make it through the slits. The setup is schematically illustrated in Fig. 6.2.
Quantum mechanics tells us that, assuming that no one (or no instrument) is
looking at the slits, the particles go into a superposition state which is the sum of the
two possible paths they can take towards the detector screen. We can write this as
follows:
$\psi = \frac{1}{\sqrt{2}}(\psi_t + \psi_b)$   (6.14)

In this formula, $\psi_t$ and $\psi_b$ stand for the state of a particle which traverses the
top slit and the bottom slit respectively. The total state is an equal sum of both
possibilities (the root sign arises as a technicality: we have to square the state to get
the actual probabilities of measurement). This is a state quite distinct from either of
its components and most certainly does not merely represent our ignorance of which
path any given particle might take. Now, suppose we are interested in knowing
where our particle will be detected. The whole point of quantum mechanics is to
provide an algorithm which allows us to transform descriptions of states such as
provided in Eq. 6.14 into a number which represents the probability of any particular
measurement we might wish to make on the particle. Of course, in this case we are
interested in the position on the detector screen.
Let us say we wish to know the chance the particle will end up in region R on the
screen. The formula for this probability is written thus:

$\langle \psi \mid P_R\, \psi \rangle$   (6.15)

If you think of $\psi$ as like a vector, $P_R$ is the projection of that vector onto a certain
axis (which represents a certain measurement value of some observable property)
and the procedure is like measuring the length of the projection of $\psi$ onto this axis.
The details don't really matter here; what matters is that this inner product will
generate the probability value we seek. But consider that we know that $\psi$ is actually
a somewhat complex state, so Eq. 6.15 should really be written out, rather painfully,
as:

$\left\langle \frac{1}{\sqrt{2}}(\psi_t + \psi_b) \,\middle|\, P_R\, \frac{1}{\sqrt{2}}(\psi_t + \psi_b) \right\rangle$   (6.16)

The mathematical rules of the inner product allow us to factor out and multiply
the $1/\sqrt{2}$ and combine the states rather as if we were just multiplying two sums to
obtain:

$\frac{1}{2}\left[\langle \psi_t \mid P_R \psi_t \rangle + \langle \psi_b \mid P_R \psi_b \rangle + \langle \psi_t \mid P_R \psi_b \rangle + \langle \psi_b \mid P_R \psi_t \rangle\right]$   (6.17)
Notice the last two terms in this monstrous formula. These are the interference
terms. The first two terms represent the probability of the particle being detected
in region R when the particle takes a normal path through just one of the slits. If
we consider them alone we get a nice distribution on the detector screen with peaks
corresponding to the two slits, which is just what we would expect if the particles
were indeed little BBs or marbles being shot through two slits. The extra interference
terms however modify the probability and reveal the truth: the distribution of hits
on the screen is a pattern reminiscent of the way waves flow through and around a
barrier, as illustrated at the right of Fig. 6.2.
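To make the role of the cross terms in Eq. 6.17 concrete, here is a minimal numerical
sketch (my own, not from the text; the Gaussian slit amplitudes and the region R are
purely illustrative choices): dropping the two interference terms gives the "marble"
prediction, and adding them back shifts the detection probability.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)          # positions on the detector screen
dx = x[1] - x[0]

def slit_amplitude(center, k=3.0):
    # Toy stand-in for the amplitude reaching the screen through one slit.
    a = np.exp(-(x - center) ** 2) * np.exp(1j * k * x)
    return a / np.sqrt(np.sum(np.abs(a) ** 2) * dx)

psi_t, psi_b = slit_amplitude(+1.0), slit_amplitude(-1.0)
R = (x >= 0) & (x <= 2)                 # the screen region R

def inner(a, b):                        # <a | P_R b>
    return np.sum(np.conj(a[R]) * b[R]) * dx

direct = 0.5 * (inner(psi_t, psi_t) + inner(psi_b, psi_b)).real
cross = 0.5 * (inner(psi_t, psi_b) + inner(psi_b, psi_t)).real
print(direct, direct + cross)           # without vs. with interference
```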
Although it is no great feat to perform the double slit experiment with real waves
and we have all observed the interference patterns formed on water in a boat's
wake, it is only in recent times that it has been possible to do the experiment with
electrons (very particle-like entities but difficult to work with on a one-on-one basis).
Nonetheless, a team of researchers at Hitachi managed to perform the electron double
slit experiment and have produced a beautiful video of the gradual buildup of the
interference pattern produced by a beam of electrons (see Tonomura 1989).19
It is important to note that the electrons were produced one by one and did not
pass through the experiment in bunches. This is perpetually mind-boggling. If the
electrons are going through one by one, how do they know how many slits there are
in the vicinity? If there's only one slit there won't be any interference at all.
It is very weird that particles should behave like that (BBs and marbles certainly
don't). However, if one has a way to tell which slit the particles are traveling through
then the interference patterns disappear and the particles behave more the way parti-
cles ought to behave. In terms of the quantum mechanical description, this requires
that the interference terms somehow be suppressed or eliminated. How would obser-
vation do that? It is actually not hard to see how this works (given the occasional
mathematical pronouncement from on high).
So, suppose we have got a device which tells us which slit a particle passes through
on the way to the detector. I am going to assume here that we have a perfect device
which does not disturb the particle in any tangible way and with 100 % certainty
delivers the truth about the path of the particle. Still, it must be the case that after
the particle passes by, the detector changes state and, since the particle is in a super-
position, the detector will also be in a superposition. How should we represent
the act of measurement? The basic idea is that the particle and the detector become
correlated but this correlation is quantum mechanically special. Formally, we can
represent a post-measurement state, say the measurement of the particle as passing
through the top slit, as $\psi_t \otimes D_t$, where $D_t$ indicates that the detector registers the
particle as passing through the top slit and $\otimes$ is a mathematical entity called the tensor
product which we can simply consider as representing the joint state of the particle
and detector.
Seeing as the particle can go through either slit and emerges in a superposition
of these two possibilities, the system consisting of particle plus detector will also go
into a superposition, the representation of which is analogous to Eq. (6.14):

$\psi_j = \frac{1}{\sqrt{2}}\left[(\psi_t \otimes D_t) + (\psi_b \otimes D_b)\right]$   (6.18)

This total state, $\psi_j$, is one in which the states of the particle and the detector are
entangled.20 Entanglement is a fundamental feature of quantum mechanics but one
that has some bizarre consequences which we will consider below. In this case, it
seems quite reasonable that the superposition should encode joint states of particle
and detector. At one level, it simply expresses how an (ideal) detector ought to work.
But, the reader must be wondering, how does this bear on the question of why,
when the detector is present, the interference pattern disappears? To understand this
we need to think about how we can find out the probability of the particle being
detected in a certain region of the screen if the initial state is $\psi_j$. There is no way we
can disentangle the detector from the particle, and the presence of the detector at least
makes a clear mathematical difference. The algorithm for probability determination
remains the same. To calculate the probability that the particle will hit in region
R we apply the same procedure as before. However, this time we need an operator
which applies to the joint system of particle plus detector. Luckily, operators can be
combined in pretty much the same way states can be. Also, we don't want to make
any measurement on the detector itself; we only want to know about the particle's
position. We can then replace $P_R$ with $P_R \otimes I$, where $I$ is the identity operator, which
is such that for any state $\psi$, $I(\psi) = \psi$. The probability we seek is then given by

$\langle \psi_j \mid (P_R \otimes I)\, \psi_j \rangle$   (6.19)

Obviously, this is going to get very messy when written out in full, but it will
take the form of four distinct terms (the $1/\sqrt{2}$ will move to the outside of the whole
expression as above and I'll ignore it from now on). The first term will look like this
(using mathematical properties of the inner product and the tensor product):

$\langle \psi_t \otimes D_t \mid (P_R \otimes I)(\psi_t \otimes D_t) \rangle = \langle \psi_t \otimes D_t \mid P_R \psi_t \otimes I D_t \rangle$   (6.20)
$= \langle \psi_t \otimes D_t \mid P_R \psi_t \otimes D_t \rangle$   (6.21)
$= \langle \psi_t \mid P_R \psi_t \rangle \langle D_t \mid D_t \rangle$   (6.22)
$= \langle \psi_t \mid P_R \psi_t \rangle$   (6.23)

The last line follows because the inner product of a state with itself is equal to 1
(thinking of the states as vectors, the overlap of a vector with itself is perfect). So in
this case the probability that the particle will be found in region R is not affected by
the detector's presence. But consider one of the cross terms:

$\langle \psi_b \otimes D_b \mid (P_R \otimes I)(\psi_t \otimes D_t) \rangle = \langle \psi_b \otimes D_b \mid P_R \psi_t \otimes I D_t \rangle$   (6.24)
$= \langle \psi_b \otimes D_b \mid P_R \psi_t \otimes D_t \rangle$   (6.25)
$= \langle \psi_b \mid P_R \psi_t \rangle \langle D_b \mid D_t \rangle$   (6.26)
$= 0$   (6.27)

The contribution of this cross term to the total probability has been eliminated.
Why does this follow? Because the two detector states are orthogonal, which is to
say they represent completely distinct outcomes. Using the vector image, they are
at right angles to each other and have absolutely zero overlap. Of course, the other
cross term would devolve to 0 in exactly the same way. The only terms that would
make a non-zero contribution to the probability of the particle being found in region
R are the two terms which represent the particle taking the ordinary path through one
of the two slits. That is, we get particle-like behaviour with no interference pattern
simply because of the presence of the detector.21
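Equations 6.18–6.27 can be checked in a few lines. In this sketch (mine; the
two-dimensional path and pointer states are toy stand-ins) the cross term vanishes
precisely because the detector states are orthogonal, and it reappears if the detector
fails to discriminate the paths.

```python
import numpy as np

psi_t, psi_b = np.array([1, 0], complex), np.array([0, 1], complex)
v = (psi_t + psi_b) / np.sqrt(2)
P_R = np.outer(v, v.conj())                   # a projector on the particle

def cross_term(D_t, D_b):
    # <psi_b (x) D_b | (P_R (x) I)(psi_t (x) D_t)>, as in Eqs. 6.24-6.27
    P = np.kron(P_R, np.eye(2))
    return np.vdot(np.kron(psi_b, D_b), P @ np.kron(psi_t, D_t))

D_t, D_b = np.array([1, 0], complex), np.array([0, 1], complex)
print(abs(cross_term(D_t, D_b)))   # 0.0: orthogonal pointer states kill it
print(abs(cross_term(D_t, D_t)))   # 0.5: a useless detector restores it
```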
This is an example of decoherence: the initial (coherent) particle superposition
state has been effectively transformed into a state in which the particles are act-
ing classically. Of course, the more compendious system of particle plus detector
remains in a superposition or, as we might say, quantum coherence has spread out a
bit into the environment. To make the problem more vivid, imagine that the detec-
tor is a conscious observer (somehow able to perceive and track the particles in the
experiment, but let that pass). According to this analysis, this observer ought to end
up in a superposition of seeing the particle pass through both of the slits, whatever
that would mean. One can scale up thought experiments like these all the way to the
infamous Schrödinger's cat.
However, the trick can be repeated. We can view the external environment of the
detector as a kind of additional measurement device which will effectively transform
the particle plus detector super-system into one that appears classical in the same
way the presence of the detectors makes the particles act classically from their (the
detectors') point of view (the mathematics would be identical to the above save
for even more unwieldy formulas).22 Ultimately, coherence is smeared out into the
entire universe and, it is hoped, the tiny subsystems of the universe that constitute
conscious observers such as ourselves will have experiences of a mostly classical
world. This provides both the explanation of why the worlds in the many-worlds
of the Everettian interpretation of quantum mechanics are of the sort we perceive
as well as the principle for distinguishing these worlds from each other as separate
branches in the vast total state of the universe.23
We are now in a better position to investigate the most frequently made claim that
quantum mechanics shows that some kind of radical emergence must exist in our
world. The bizarre phenomenon of entanglement (and quantum mechanical super-
positional states in general) is sometimes taken as conclusive evidence for some kind
of ontological emergence (see Silberstein and McGeever 1999; Humphreys 1997b;
Teller 1986).24
We have already seen examples of entanglement, in the coupling of measured sys-
tem and detector. But entanglement is a very general feature of quantum mechanics
that goes far beyond the context of measurement. It arises from the linear superpo-
sition principle, the fundamental feature of quantum mechanics that any two states
of a system can be added to form a new state. Given certain other core features of
quantum mechanics, this leads to the following possibility. Consider a system which
creates particles in pairs with some conserved property, such as spin, which can take
either a positive or negative value (we can ignore the magnitude and just label these
+ and −). Assuming we start with zero total spin then the spins of the individual
particles (call them A and B) must sum to zero to preserve the conservation of angular
momentum. There are two ways that this can happen, namely, if A has spin + and
B has spin −, or the reverse. There is no way to control the polarity of a particular
particle's spin during pair production, so when, for example, A is created it is in a
superposition of + and − spin, and similarly for B. But since the total spin of the
system has to be zero, if we measure A and find it has spin + then we immediately
know that B has spin − (or at least will, if measured, give a guaranteed result of −).
This state, known as the singlet state, can be expressed thus:

$\frac{1}{\sqrt{2}}\left[(A_+ \otimes B_-) - (A_- \otimes B_+)\right]$   (6.28)

where as above the symbol $\otimes$ (tensor product) represents the joint state of particles
A and B.25 There is no way to decompose this complex superposition into a form in
which the A states and B states are separated, hence the use of the term entanglement
to describe such states.
Entanglement has many peculiarities. Notoriously, since measurement in effect
forces a state into one of its components, it entails that no matter how far apart A and
B are, upon measuring A to be + (−), it is instantaneously fixed that a measurement
of B will give − (+). Also, it seems clear that quantum mechanics is endorsing, or
perhaps revealing, some kind of holism about entangled states; they are not reducible
to purely local states of the component particles. Furthermore, there is no way to
devise any local properties of A and B which can carry the observed correlations
between possible spin measurements (though this is possible if the properties can
exchange information instantaneously across any distance, a possibility that conflicts
with our current understanding of the physical world). So entanglement is very weird.
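Both the perfect anti-correlation and the irreducibility of the singlet state of Eq. 6.28
can be exhibited numerically. This is a toy verification of mine, not the author's; the
rank test works because any product state A ⊗ B has a rank-one coefficient matrix.

```python
import numpy as np

plus, minus = np.array([1, 0], complex), np.array([0, 1], complex)

# The singlet state of Eq. 6.28
singlet = (np.kron(plus, minus) - np.kron(minus, plus)) / np.sqrt(2)

# Probability that both A and B are measured spin +: exactly zero
print(abs(np.vdot(np.kron(plus, plus), singlet)) ** 2)      # 0.0

# Non-decomposability: a product state has a rank-1 coefficient matrix,
# but the singlet's 2x2 coefficient matrix has rank 2.
print(np.linalg.matrix_rank(singlet.reshape(2, 2)))         # 2
```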
However, our question is about whether this phenomenon gives any support to
the claim that modern science provides examples of radical emergence. The answer
seems clearly to be no. The situation does not seem to be that different from the
case of mass which, if we take specific mass properties, is actually quite interest-
ing. For example, the specific mass property exemplified by a hydrogen atom is
1.007825037 amu. The mass of a proton is 1.00727638 amu and that of the electron
is 0.000548579867 amu. The atomic mass is not quite the same as the sum of the
masses of the components because the energy which binds the electron to the proton
must be taken into account according to the relativistic principle of energy-mass
equivalence. This latter is an empirical law which governs the causal interaction of
the proton and the electron. The empirical principle involved could have been differ-
ent. But, crucially, the principle is a physical law which operates over purely physical
properties. No one would think that the mass of the hydrogen atom is a radically
emergent phenomenon.
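The arithmetic is simple to verify. A quick sketch of mine (using current standard
values for the constituent masses, which differ very slightly from the older figures
quoted above; the 13.6 eV binding energy is the familiar hydrogen ground-state
value):

```python
# Hydrogen's mass falls short of proton + electron by the binding energy,
# converted to mass via E = mc^2.
m_p = 1.007276467       # proton mass, amu
m_e = 0.000548580       # electron mass, amu
binding_eV = 13.6       # ground-state binding energy of hydrogen
amu_in_eV = 931.494e6   # 1 amu expressed in eV/c^2

m_H = m_p + m_e - binding_eV / amu_in_eV
print(f"{m_H:.9f} amu")  # ~1.007825032, the measured atomic mass
```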
Just as the complexities of mass composition depend logically upon the laws
governing mass and energy (which are basic physical laws), so too the complexities
of entanglement are a logical consequence of the laws governing the basic properties
and interactions of the quantum world. In some ways, the analogy is quite close. One
would be severely misguided to think that one could simply add up the masses of
the constituents of an object, considered independently, to compute the total mass of
the object. The mass of an object is, in a sense, not reducible to the mass properties
of the individual components. Their interaction has to be taken into account. But
this does not show that an object's mass is radically emergent from the mass of
its constituents, because the interactions are governed by the fundamental laws at
the level of the constituents themselves. So too, the superposition principle and the
formation of joint states are fundamental principles of basic quantum physics. The
singlet state is a state fully described by the fundamental laws of quantum mechanics.
The entangled state is a predictable result of the basic laws of the quantum systems
and their interactions. In fact, it was predicted, by Schrödinger, who introduced the
term entanglement, right at the birth of quantum mechanics (Schrödinger 1935).
There is no hint of radical or ontological emergence here, although the oddity of
entanglement is also emphasized by this analogy: unlike in the case of mass, there
is apparently no (current ongoing) interaction between the entangled particles!

6.5 Fusion Emergence

One of the most carefully worked out accounts of putative radical emergence, and one
which again leverages entanglement, is that of Paul Humphreys (see e.g. Humphreys
1997a, b). In fact, Humphreys regards the formation of entangled states of quantum
systems as paradigm examples of what he calls fusion (see Humphreys 1997a).
According to this theory, fusion is the appearance of new properties or, more accu-
rately, instantiations of properties never before manifested in the world. Such fusions
bear the general hallmark of emergence of course, but what suggests that they imply
radical emergence is that fusions have new causal powers and that the precursor
entities which generate fusions lose their independent identity or even cease to exist
(hence it would be wrong to consider such precursors to be constituents of the fusion).
Humphreys suggests that the canonical form of fusion creation can be expressed in
terms of a temporal operation which transforms precursor property instances (which
might also be thought of as events if events are taken to be instantiations of properties
at particular times by particular objects) into new forms. More formally, if $P_m^i(x_r^i)(t_1)$
is an instantiation of property $P_m$ by $x_r$ at time $t_1$ (and where the superscript letters
indicate the level in the natural hierarchy of emergence) then the fusion of two such
events brought about at time $t_2$ is expressed as $[P_m^i * P_n^i][(x_r^i) + (x_s^i)](t_2)$. In line
with the idea that fusion involves novel instantiations and the loss of identity of the
precursors, this can also be written as $[P_o^{i+1}][x_t^{i+1}](t_2)$. The second form highlights
that both the property and the object are new rather than mere composites.
The idea of fusion is intriguing and evidently coherent. Is it real? There have been
serious philosophical criticisms of the viability of the fusion approach to emergence,
which question both whether it can deliver the advantages Humphreys claims (see
Wong 2006) and whether fusion really is the best account of domains where it ought
to apply, such as chemistry (see Manafu 2011). But leaving aside particular philo-
sophical objections, it seems unlikely that fusions will embody radical emergence if
entanglement is the paradigm case, simply because of the arguments given above.
Something like fusion seems to be a real feature of nature, and one which is impor-
tant to emphasize. However, the failure of part-whole reductionism or the fact that
nature strays widely from obeying a purely compositional or additive metaphysics
does not mean there is radical emergence.
Instead, we find that the mechanism of fusion (or entanglement) is predicted by
fundamental theory, which theory in fact requires the features which engender entan-
glement to successfully predict the activity of even the most basic entities. We also
find that the causal powers of the fusions are all drawn from the same family as those
that characterize lower-level entities (such powers as derive from mass, energy,
motion, etc.). No fundamentally new observables are introduced through entangle-
ment. No new laws of nature beyond those of fundamental theory are required to
specify the states which result in fusion; these are instead strict consequences of the
states of the precursor entities.
Although it cannot be overstressed that nature is much stranger than the old
mechanists would have or even could have dreamed, entanglement (and presumably
fusion in general) is nonetheless an emergent feature of the world which does not
rise to the level of radical emergence.
Chapter 7
Emergence and Supervenience

7.1 Completeness, Closure and Resolution

The metaphysical relation of supervenience has seen most of its service in the fields
of the philosophy of mind and ethics. Although not repaying all of the hopes some
initially invested in it (the mind-body problem remains stubbornly unsolved, ethics
and aesthetics not satisfactorily naturalized), the use of the notion of supervenience
has certainly clarified the nature and the commitments of so-called non-reductive
physicalism, especially with regard to the questions of whether explanations of super-
venience relations are required and whether such explanations must amount to a kind
of reduction (a good discussion of these issues can be found in Kim 2005).
I think it is possible to enlist the notion of supervenience for a more purely
metaphysical task which extends beyond the boundaries of ethics and philosophy of
mind. This task is the clarification of the notions of emergence and emergentism,
which latter doctrine is receiving again some close philosophical attention (see for
example McLaughlin 1992, Kim 1993, Bedau and Humphreys 2008, O'Connor and
Wong 2009, Clayton and Davies 2006, Corradini and O'Connor 2010, Macdonald
and Macdonald 2010).
I want to try to do this in a semi-formal way which makes as clear as possible the
relationships amongst various notions of supervenience as well as the relationship
between supervenience and emergence. I especially want to investigate the impact
on our ideas of supervenience of an explicit consideration of a very familiar but
under-explored notion which is crucial to our scientific understanding of the world:
the temporal evolution of states. It will turn out that the impact is significant and
extensive. I do not pretend that what follows is fully rigorous, but I do hope the semi-
formality makes its commitments and assumptions clear, and highlights the points
of interest, some of which I think are quite surprising. For readers uncomfortable
or impatient with such formalism, I provide explications of all the formulas which
should render the discussion easy to follow.
I want to begin with a series of definitions.

D1. A theory, T, is total if and only if it possesses completeness, closure and
resolution.
These are jointly defined as follows: Completeness is the doctrine that everything
in the world is a T-entity or, in principle, has a non-trivial T-description and as
such abides by closure and resolution. Closure entails that there are no outside
forces: everything that happens, happens in accordance with fundamental T-laws
so as to comply with resolution. Resolution requires that every process or object
be resolvable into elementary constituents which are, by completeness, T-entities
and whose abidance with T-laws governing these constituents leads to closure. For
the particular example of physics (the only theory that could have any chance of
becoming a total theory) these definitions become: Completeness is the doctrine
that everything in the world is physical (has a non-trivial physical description1 ) and
as such abides by closure and resolution. Closure entails that there are no outside
forces: everything that happens, happens in accordance with fundamental physical
laws so as to comply with resolution. Resolution requires that every process or object
be resolvable into elementary constituents which are, by completeness, physical and
whose abidance with physical laws governing these elementary constituents leads to
closure.
It may be worth reemphasizing here that this is not an endorsement of part-whole
reductionism, though it is consistent with it. We know from quantum mechanics (see
the discussion above in Chap. 6, pp. xx ff.) that the states of wholes are not simply
functions of the states of their parts considered in isolation but this does not tell against
the characterization given in the text. Quantum mechanics is a celebration of how the
fundamental interactions of things can be understood, rigorously understood, to
yield new features. It is, if you like, a mathematically precise theory of emergence,
but one that obeys the strictures of resolution. The kind of emergence endorsed by
quantum mechanics is a form of conservative emergence. It is also worth noting that
conservative emergence has no essential commitment to micro-fundamentalism: the
view that all fundamental features of the world reside at the microscopic level. As will
be discussed below, it is also perfectly possible to imagine large objects which are
fundamental or simple. An example is a black hole (at least as classically described)
which is essentially a gigantic elementary particle with only three properties: mass,
charge and angular momentum.
We could distinguish a purely formal notion of totality from that defined above.
A formally total theory is one that would be total if only it were true. It is arguable
that the mechanical world view2 is, or at least was intended to be, formally total (but
had no chance of being true of this world). Perhaps classical physics (Newtonian
mechanics + electromagnetism) was similarly supposed to be a formally total theory.
Since I am going to assume that final-physics (whatever it may turn out to be) is
true, the notions of formal totality and totality collapse for it.
D2. T-possibility: something is T-possible if and only if it exists in some T-possible
world, that is, some world that obeys the fundamental laws of theory T.
(Example: physical possibility is existence in some physically possible world, that
is, a world that obeys the fundamental laws of physics. To avoid making physical
possibility epistemically relative we can regard physics to be the true, final physics
whether or not humans ever manage to discover such a theory. We can call this
final-physical-possibility.)
D3. Efficacy: a state, F, of system σ is efficacious in producing a state, G, in system
τ if and only if had σ not been in state F, τ would not have been in state G (it is
possible, and usually the case, that σ = τ).3
A simple example of state efficacy might be the way the mass of an object is effi-
cacious in the energy produced when the object falls from a certain height; if the
mass had been less the energy would have been different. By contrast, the color of
the object is, presumably, not efficacious in the energy produced by falling. We can
also say that F has efficacy if and only if there is a state for which F is efficacious in
its production. It is useful to also have a notion of efficacy explicitly relativized to
the states of particular theories, so:
D4. T-Efficacy: a state, F, of system σ is T-efficacious in producing a T-state, G, of
system τ if and only if had σ not been in state F, τ would not have been in state G (it
is possible, and usually the case, that σ = τ; the state F may or may not be a T-state).
As in D3, F has T-efficacy if and only if there is a state for which F is T-efficacious in
its production. As a serious theory of efficacy this has a number of problems. There
is an ongoing and vigorous debate about whether supervening realms can be said to
have genuine causal power (see the extensive literature that has sprung from Kim's
first articulation of the exclusion argument in Kim 1992). This will be of importance
in later chapters. The current notion is intended to be very weak and undemanding.
For example, according to D4 it is quite possible that moral properties are efficacious
insofar as it is plausible that they supervene on natural properties in ways that permit
cases of counterfactual dependency (but see footnote 9 below). Perhaps it would
be better to label this prima facie efficacy, ostensible efficacy, or just leave it as
simply counterfactual dependency, but for the sake of simplicity of expression I will
stick with the term efficacy.4

7.2 Supervenience

The concept of supervenience is well known and can be rather swiftly characterized,
although it is important to distinguish a number of variants (for an overview see
McLaughlin and Bennett 2008; for the philosophical development of the notion see
Kim 1993, Chaps. 59). Supervenience is a special relation of dependence, or at least
correlation, of one domain upon another. It is often taken to be a relation between
families of properties or states, where a family of properties is a set of properties
that define the domains at issue.
For example, psychological properties form one family while physical properties
form another. There is a relation of supervenience of the psychological properties
upon the physical properties if, in accordance with the special relation of depen-
dence defining supervenience, we hold that (all instances of) psychological properties
depend upon (instances of) physical properties. It is natural to extend the notion to
allow the supervenience of one theoretical domain upon another, in which case the
state or property families are given by the theories at issue (as it might be, psychology
versus physics). We could also define a supervenience relation between theories by
extension of a supervenience relation between their associated theoretical domains
(even where these domains might be hypothetical rather than actual). That is, if the
U-domain supervenes on the T-domain then we can say that theory U supervenes
upon theory T.
The exact nature of the dependency relation which defines supervenience is inten-
tionally left rather vague, but one core idea is that there can be no difference in the
supervening domain without a difference in the subvening domain, as in the Dali
test discussed above in Chap. 5. For example, we might claim that there can be
no psychological difference without an underlying physical difference, or that there
could be no ethical difference between actions without some physical difference, or
that there could be no aesthetic difference between two objects without some phys-
ical difference between them and so on. A natural way to express this is in terms
of indiscernibility with respect to the subvening domain requiring indiscernibility
with respect to the supervening domain. Another way is to define supervenience
directly in terms of the determination of properties in the supervening family by the
properties of the subvening family. It is interesting that these two approaches quite
naturally lead to very distinct forms of supervenience.
Let's begin with a catalogue of some basic forms of supervenience, after which I
want to introduce a new form that comes in several variants. The three basic forms
of supervenience of interest here are strong, weak and global supervenience. The
former two notions of supervenience can be formally expressed in terms of families
of properties and a direct relation of determination between them. In what follows
I will take the line that property families are identified by the distinctive predicates
employed by particular theories. The family of chemical properties is the set of prop-
erties distinctively referred to by chemistry, the family of physical properties is that
set of properties distinctively referred to by physics, etc. I add the term distinctively
only to indicate that there must be some selection from all properties referred to by a
theory since some are inessential to that theory or are deployed only to link various
domains to that of the theory at issue (most obviously in the description of mea-
surement instruments and observations). We can also expect that there will be some
overlap between theories, but I think we ought to regard this common occurrence as
an intrusion of one theoretical scheme into another. In such cases, we ought to assign
the overlapping property to the more basic theory.
Given a pair of families of properties, we can define a supervenience relation
between them in various ways. Strong supervenience is typically defined as:
D5. Strong Supervenience: Property family U strongly supervenes upon family T if
and only if $\Box(\forall \alpha)(\forall F \in U)(F\alpha \rightarrow (\exists G \in T)(G\alpha \wedge (\forall \beta)\Box(G\beta \rightarrow F\beta)))$.
This says that it is necessarily true that for any instance of a property in U there is a
property in T such that having that property guarantees having the U property. It does
not say, though it permits, that some particular T property underlies every instance
of the target U property. If it is the case that every U property supervenes upon one
T property then we can say that U reduces to T. The state G is called a realizer of F
and we say that G realizes F. For many domains, especially that of psychology, it is
thought that there can be what is called multiple realization, in which a variety of
different T properties subvene the instances of a single target U property. Notice the
second necessity operator, which ensures that G subvenes F in every possible world.
(That is, in every possible world, anything that manages to exemplify G will also
exemplify F, but not necessarily vice versa.) While some G or other in T is required
for F to be exemplified, there may well be many Gs that will do the job. Call this
set of properties the realizer-set of F (or the T-realizer-set of F).
One should wonder about the nature of the internal necessity operator deployed
in this definition, as well as the ones to follow (the external operator serves to
indicate the non-contingency of these philosophical claims). As we shall see shortly,
if one varies the modal strength of this operator one gets different forms of the
supervenience relation. The canonical interpretation of strong supervenience is that
the internal operator is full-on absolute necessity: there are no possible worlds where
an object has G but lacks F.
The second form of supervenience of interest to us exploits the possibility of
modifying the internal necessity operator to the maximum degree, by eliminating it.
Weak supervenience is thus defined as follows:
D6. Weak Supervenience: Property family U weakly supervenes upon family T if
and only if $\Box(\forall \alpha)(\forall F \in U)(F\alpha \rightarrow (\exists G \in T)(G\alpha \wedge (\forall \beta)(G\beta \rightarrow F\beta)))$.
Formally, the only difference between strong and weak supervenience is the absence
of the internal necessity operator in the latter. Intuitively speaking, the difference
between weak and strong supervenience is that although they agree that the super-
vening domain is determined by states of the subvening domain, the structure of
this determination relation can be different in different worlds. We might sloganize:
for strong supervenience, once a realizer always a realizer, but this fails for weak
supervenience. Just because some G realizes F in some world does not mean that
it will realize it anywhere else in logical space. A simple (if somewhat imperfect)
example of weak supervenience, presented by Kim (see Kim 1993, Chap. 4, pp. 62
63), is the supervenience of the truth of a sentence upon the sentences syntax. It
must be that any two sentences that are syntactically identical have the same truth
value (and of course every sentence has a syntactic structure). But we do not expect
the truth value to be the same from world to world, as we vary the facts which make
the sentences true. We might thus expect that syntactic structure plus a specification
of the facts strongly subvenes the truth of sentences.5 The difference between weak
and strong supervenience will turn out to be very important for the clarification of
various notions of emergence.
In fact, it might be better, albeit unconventional, to put a kind of dummy modal
operator within the formula so we can generate distinct versions of supervenience by,
as it were, gradually reducing the modal strength of the internal necessity operator.
Strong supervenience employs full-on logical necessity but we can imagine a form
of weak supervenience that still demanded that G determine F but only, for example,
across all physically possible worlds. That is, it might be preferable to write the
formula for weak supervenience thus:

$\Box(\forall \alpha)(\forall F \in U)(F\alpha \rightarrow (\exists G \in T)(G\alpha \wedge (\forall \beta)\Box_?(G\beta \rightarrow F\beta)))$   (7.1)

where $\Box_?$ can be replaced by whatever grade of necessity we desire, with the zero
grade of weak supervenience just being intra-world dependence.
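Because these definitions are purely quantificational, they can be tested mechanically
over toy models. The sketch below is my own simplification (property families are
collapsed to single state labels): weak supervenience checks the determination clause
one world at a time, while strong supervenience pools all worlds, enforcing "once a
realizer, always a realizer".

```python
from itertools import product

# Each world assigns every object a (T-state, U-state) pair.
worlds = {
    "w1": {"a": ("G1", "F1"), "b": ("G2", "F1")},
    "w2": {"c": ("G1", "F2")},   # G1 realizes a different U-state here
}

def determined(pairs):
    # No U-difference without a T-difference among these instances
    return all(t1 != t2 or u1 == u2
               for (t1, u1), (t2, u2) in product(pairs, repeat=2))

weak = all(determined(list(w.values())) for w in worlds.values())
strong = determined([p for w in worlds.values() for p in w.values()])
print(weak, strong)   # True False: G1 realizes F1 in w1 but F2 in w2,
                      # which weak supervenience tolerates and strong forbids
```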
A quite different approach to supervenience is also possible. One can express
supervenience in terms of indiscernibility rather than property determination (see
Haugeland 1982, Hellman and Thompson 1975, 1977). One method of doing this
is directly in terms of possible worlds and thus avoids the explicit appeal to modal
operators. Supervenience of U upon T would require that, at least, if there is agree-
ment about the assignment of T-states to systems then there is agreement about the
assignment of U-states to systems. We might write this as:
D7. Global Supervenience: Property family U globally supervenes upon family T if
and only if $(\forall w)(\forall w')((w \approx_T w') \rightarrow (w \approx_U w'))$.
Where $\approx_X$ is the relation of indistinguishability with respect to the X family
of properties. So D7 says simply that any two possible worlds which are indistin-
guishable with respect to the subvening property family's distribution (that is, the T
properties) are also indistinguishable with respect to the supervening property fam-
ily's distribution (the U properties). Despite the elegance of D7 I want to introduce
an alternative definition which is closer in form to those of strong and weak superve-
nience. Obviously, if two worlds are indistinguishable with respect to F-properties
then any system with an F-property in one world will also have it in the other, and
vice versa (I am going to take it for granted that if the former system does not exist
at all in the latter world then that amounts to a difference in F-property distribution).
So we can rewrite D7 as:
D8. Global Supervenience: Property family U globally supervenes upon family T if
and only if $(\forall w)(\forall w')(\forall \alpha)(\forall F \in U)(((w \approx_T w') \wedge F\alpha w) \rightarrow F\alpha w')$.
We can also introduce different forms of global supervenience by tweaking the
domain of possible worlds. By default, the domain is just all possible worldsso this
might be appropriately labeled logical global superveniencebut different types
of supervenience arise by restricting this domain in some way. For example, if we
restricted the domain to just physically possible worlds then certain scenarios where
the supervenience of the mental upon the physical fails might be ruled out. An extreme
but uninteresting case would be two worlds that entirely lack physical entities (they
are, say, inhabited only by angels). Such worlds are physically indistinguishable but
could be psychologically different so supervenience of the mental on the physical
would fail. The most significant case, of course, is worlds which are physically
indistinguishable from the actual world.6
I need to introduce yet another complication in order to allow for a non-trivial role
for the temporal evolution of states. I am going to modify the standard definition of
global supervenience by requiring that the T-indiscernibility of worlds be restricted
to indiscernibility up to the time when Fαw obtains. This opens the possibility that
global supervenience can fail for properties that depend for their existence at one time
on states which occur at a later time, if the possible worlds at issue lack what I will
call below T-temporal supervenience (or temporal determination or, loosely speaking,
determinism). There are some properties that do have this kind of dependence upon
the future. Whether a prediction (a future-tensed sentence) is true or not obviously
depends upon the future. Less trivially, whether an action, for example, is good or
bad might depend upon its consequences. If two worlds which were T-indiscernible
up to the time of the action could diverge (with respect to T) after the action then
it could be that the action was good in one world but bad in the other. If we were
inclined to hypothesize that moral properties and consequences supervene upon
the physical state of the world up to the time of the action such divergence would
represent the failure of that hypothesis. We could distinguish an absolute global
from a limited global supervenience of U upon T, the former involving absolute
world T-indiscernibility across all space and time, the latter only indiscernibility up
to the occurrence of a given U-state. Fortunately, such a distinction would be of little
assistance in what follows, so I shall resist adding yet another kind of supervenience.
In any event, the world-based formulation reveals an ambiguity in the notion of
supervenience (for discussion and an interesting dispute about their relationship see
Petrie 1987; Kim 1993, Chap. 5; Paull and Sider 1992). The formulation of global
supervenience in terms of worlds, unlike the definition of strong supervenience given
above, does not explicitly require that the T-state that subvenes a U-state be a state
of the very same system that exemplifies the U-state. This is thus an extremely
weak form of supervenience. For example, it permits two worlds that differ only
in the position of a single atom somewhere in, say, the interior of the star Vega to
have radically distinct distributions of psychological properties: perhaps one is the
actual world but in the other there are no minds whatsoever! Since these worlds are
physically different, mere global supervenience does not prevent them from being
also psychologically different. In the limit, global supervenience is consistent with
our world being the only world that contains consciousness if it should be the case
that all possible worlds that are in any way physically distinguishable from the actual
world are devoid of consciousness. Paull and Sider (1992) argue that it would be hard
to understand how the structure of possible worlds could be such, but the point here
is that the concept of global supervenience by itself does not dictate the metaphysical
structure of modality.
It is not difficult to strengthen global supervenience into a form that makes the
indiscernibility of particular systems rather than whole worlds the basis of super-
venience, a form we might naturally call local supervenience (with the caveat that
the term local refers to the unity of systems, not to immediate spatio-temporal
neighborhood):
D9. Local Supervenience: $(\forall w)(\forall w')(\forall \alpha)(\forall \beta)(\forall F \in U)(((\forall G \in T)(G\alpha w \leftrightarrow G\beta w') \wedge F\alpha w) \rightarrow F\beta w')$.7
This adds the condition that it is the systems, α and β, that are such that if they
are T-indiscernible across possible worlds then they will also be U-indiscernible.
Local supervenience is not quite the same as strong supervenience.8 The latter does
not require full local indiscernibility as a condition of supervenience but only the
sharing of one critical property from the subvening family. Though less weak than
global supervenience, local supervenience is still a very weak notion. It permits, for
example, two human beings which differ only in the position of an atom somewhere
in the interior of their big toes to differ radically in their psychological properties:
perhaps one system is me but the other has no psychological properties at all!
Problematic examples such as that of Vega or the big toe reinforce the intuitive
plausibility of the super-localization of strong supervenience, for it seems rea-
sonable to suppose that some T-properties might be irrelevant to the possession of
U-properties. For example, in some possible worlds (maybe even in the actual world)
there are creatures physically identical to us except that they are made out of anti-
matter rather than matter. This would, in all probability, seem to be psychologically
irrelevant but they would fail the test of indiscernibility since although systems com-
posed of matter and anti-matter can share almost all their physical properties they
are obviously physically discernible (have each of them catch an ordinary baseball,
but stand way back). Of course, global and local supervenience do not prevent non-
identical systems from possessing the same supervening properties, but we could
not use either global or local supervenience to argue for our anti-matter cousins' pos-
session of mind, whereas strong supervenience would probably (depending upon
the exact range of physical properties we take to subvene mind) provide such an
argument.
Evidently, strong supervenience implies local supervenience but not vice versa.
If we assume strong supervenience and the antecedent of local supervenience we
obtain the local T-indiscernibility of α and β across w and w′. By strong superve-
nience, there is a T-state, G, that α has which necessitates F. Since α and β are
T-indiscernible, β must also possess G and therefore we must have Fβw′. The
reverse fails, for we cannot deduce simply from the fact that α and β share G across
possible worlds that α and β are fully T-indiscernible across the worlds. This argu-
ment could fail if we allow (as I think sound philosophy forbids) some very dubious
metaphysical chicanery which encodes every feature of a possible world as a prop-
erty of an individual in that world. An example would be properties like "exists in
a world where the speed of light is 300,000 km/s", so that any discernibility between
worlds will translate automatically into a discernibility between individuals across
those worlds. It might thus be natural to think of local supervenience in terms of
indiscernibility with respect to intrinsic properties.
It is, furthermore, obvious that local supervenience implies global superve-
nience but that once again the reverse fails to hold (since the assumption of local
T-indiscernibility of two systems will not lead to the T-indiscernibility of their entire
possible worlds).
However, the definitions can be brought together by fiat, if we restrict attention
to domains where reasonable and plausible supervenience relations are local and
particular. This restriction is important since it is arguable that efficacy really is both
local and dependent upon particular states, and we have a strong interest in domains
that are efficacious. An illustration of a non-efficacious and non-local domain is
that of financial instruments. Money does not supervene locally upon the physical.
Two locally physically identical scraps of paper could differ in their monetary value
depending upon, for example, the intentions and social-status of their creators; our
authorities intend that only properly minted notes are real money; anything else is
counterfeit. But for that very reason, money can't cause anything as such, but only
via its exemplifying certain physical features that cause certain beliefs (in people) or
certain other physical states (for example, in vending machines). Of course, this is
not to say that appeal to monetary properties is explanatorily impotent (after all, isn't
the love of money the root of all evil?), but only that its efficacy is not rooted in its
being money, which is invisible, so to speak, to the causal processes of the world.9

7.3 Temporal Supervenience

I want now to introduce a new form of supervenience: temporal supervenience, in
which the state of a system at one time is determined by the state of the system at
another time (generally speaking, an earlier time if we think of causal determination).
Temporal supervenience, as I call it, is simply a familiar notion with an unfamiliar
name. But while it is odd to employ the term thus, I use the name temporal super-
venience to emphasize the analogies between the evolution of the states of systems
through time and the kinds of supervenience relations we have already discussed. As
we shall see, the two notions have quite deep and somewhat surprising relationships
as well.
The formalization of this notion unfortunately (from the point of view of ready
intelligibility) requires the addition of a temporal term to our definitions but these
are in other respects quite analogous to the previous forms.
D10. Temporal Supervenience (ts): The states of system σ temporally super-
vene upon the states of σ if and only if $(\forall F)(\forall t_1)(F\sigma t_1 \rightarrow (\exists G)(\exists t_0)(G\sigma t_0 \wedge (\forall \sigma)(\forall t_2)(\forall t_3)(G\sigma t_2 \rightarrow F\sigma t_3)))$.
Here, and below, F and G are possible states, or properties, of a system σ.10 Call F
the successor state and G the predecessor state. To avoid clutter, it is not explicitly
stated in the definitions but it is assumed that where contextually appropriate the
indices on the temporal variables represent time order, so in D10 t0 is prior to t1 and
t2 is before t3 . I make no attempt to specify the amount of time there should be between
states (so some indulgence on the part of the reader is requested when contemplating
the temporal distance between the specified times) or to address the issue of whether
time is continuous or discrete. In essence, D10 says that a system's states temporally
supervene upon earlier states if there is an earlier state which determines the later state
to occur. This is simply familiar temporal evolution of a system's states expressed
in the language of the supervenience relation, which applies in a remarkably similar
way to that of standard supervenience.
D11. Full Temporal Supervenience (fts): The states of system σ fully temporally
supervene upon the states of σ if and only if:
$(\forall F)(\forall t_1)(F\sigma t_1 \rightarrow (\exists G)(\exists t_0)(G\sigma t_0 \wedge (\forall \sigma)(\forall t_2)(\forall t_3)(G\sigma t_2 \leftrightarrow F\sigma t_3)))$.
The difference between ts and fts is that in fts there is unique temporal deter-
mination both backwards and forwards in time (which is not to say that we have
backwards causation). One can, that is, as easily foretell the past as the future of the
system from its current state. Though it won't figure much in the discussion below,
full temporal supervenience is nonetheless important since, generally speaking, fun-
damental theories of physics tend to exemplify it.
D12. T/U-temporal supervenience: The T-states of system σ temporally super-
vene upon the U-states of σ if and only if $(\forall F \in T)(\forall t_1)(F\sigma t_1 \rightarrow (\exists G \in U)(\exists t_0)(G\sigma t_0 \wedge (\forall \sigma)(\forall t_2)(\forall t_3)(G\sigma t_2 \rightarrow F\sigma t_3)))$.
D13. Full T/U-temporal supervenience: The T-states of system σ fully temporally
supervene upon the U-states of σ if and only if:
$(\forall F \in T)(\forall t_1)(F\sigma t_1 \rightarrow (\exists G \in U)(\exists t_0)(G\sigma t_0 \wedge (\forall \sigma)(\forall t_2)(\forall t_3)(G\sigma t_2 \leftrightarrow F\sigma t_3)))$.
Note that T and U can be the same theory (or family of states). In the discussion
below, intra rather than inter-domain temporal supervenience will figure most promi-
nently. So instead of writing T/T-temporal supervenience I'll just use T-temporal
supervenience. The notions of T/U-temporal supervenience are more useful than
the more basic ts and fts since we normally are concerned with the relations of
temporal supervenience either within theories or across theories, rather than from
an abstract, non-theoretical standpoint. This is especially so when we consider the
proper understanding of emergence, with its assumption that the world can be ordered
into ontological levels which, more or less, correspond to distinct theories.
Unsurprisingly, the kinds of modal differences between strong and weak super-
venience can be duplicated within temporal supervenience, simply by inserting a
modal necessity operator to indicate the strength of the connection between G and
F. The generic form would be this:
$(\forall F \in T)(\forall t_1)(F\sigma t_1 \rightarrow (\exists G \in U)(\exists t_0)(G\sigma t_0 \wedge \Box_?(\forall \sigma)(\forall t_2)(\forall t_3)(G\sigma t_2 \rightarrow F\sigma t_3)))$,
where, as above, $\Box_?$ represents a variable grade of necessity.
The different notions of temporal supervenience generated in this way are exact
analogues of the difference between weak and strong supervenience as given above.
Intuitively, the difference is that weak temporal supervenience requires that every
possible world exhibit unique state determination across time (backwards and for-
wards for full temporal supervenience) but that the particular state-to-state transitions
can differ from world to world whereas in strong temporal supervenience the par-
ticular determination relation at issue holds across possible worlds. This difference
can matter philosophically, as we will eventually see below.
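The content of D10 and D11 can be made vivid with a toy transition map (my own
encoding, with invented state labels): ts requires the dynamics to be a function of the
current state, and fts additionally requires that function to be injective, so that the
past is recoverable too.

```python
# A toy T-dynamics: the state at one time fixes the state at the next.
step = {"G1": "F1", "G2": "F1", "G3": "F2"}

# ts (D10): forward determination. Any such dict is a function, so it holds.
ts = True

# fts (D11): backward determination as well, i.e. no two states merge.
fts = len(set(step.values())) == len(step)

print(ts, fts)   # True False: G1 and G2 both evolve to F1, so from F1
                 # alone the past cannot be read off
```

Unitary quantum evolution is injective in just this sense, which is why, as noted
above, fundamental theories of physics tend to exemplify full temporal supervenience.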
One general condition on the above definitions of temporal supervenience should
be noted. It is understood that the systems in question are undisturbed systems,
where undisturbed is taken to mean that there are no T-influences which are acting
on the system which are not part of the system. We can allow for approximately
undisturbed systems where the unaccounted-for T-influences are insufficient to much
alter the state transitions referred to by the definitions. Also, for cases of disturbed
systems, we can always create an undisturbed system by letting the boundaries of
the system grow to encompass the T-disturbance.

7.4 Top-Down Discipline

D14. Top-Down Discipline: A family of states (or theory), U, has Top-Down Dis-
cipline (TDD, or U/T-TDD) relative to a family of states (or theory), T, if and only
if:
(1) U supervenes upon T
(2) for every U-state, Φ, the set of realizing T-states is such that each element can
temporally evolve into a realizer of any permitted U-successor of Φ and every
permitted U-successor of Φ is realized by the temporally evolved state of a
member of the T-realizer set of Φ.
D14 is complex and requires elucidation. So, if there is a U-successor state of Φ into
which Φ's T-realizer cannot evolve then TDD fails. Another way to define the notion
of Top-Down Discipline is to say that in addition to the supervenience of U upon
T it is the case that every element of the T-realizer set of a given U-state, U1 , can
evolve into elements of the realizer set of all the permitted U-successors of U1 . Some
discussion of the possibilities the definition allows might make this notion clearer.
Assume that U supervenes upon T. TDD fails if there is a U-state, Ψ₁, which has a set of realizers that evolves into a set of T-states that does not form a realizer-set of a U-state which is a permitted (by U) successor of Ψ₁. A simple abstract example would be this. Suppose that Ψ₁ is multiply realized by the set of T-states {φ₁, φ₂, φ₃}. Suppose further that the laws of temporal evolution in T are as follows: φ₁ → φ₁′, φ₂ → φ₂′, φ₃ → φ₃′ (note we are thus assuming T-temporal supervenience). We have TDD (so far as this example is concerned) if the set {φ₁′, φ₂′, φ₃′} multiply realizes one U-state, which we might naturally label Ψ₁′. If, perchance, {φ₁′, φ₂′} realizes one U-state while φ₃′ realizes another, TDD fails (since, for example, φ₁ cannot evolve into a member of the realizer set of this latter U-state). Now, consider a case where T-temporal supervenience fails. In that case instead of {φ₁, φ₂, φ₃} evolving to the determinate set {φ₁′, φ₂′, φ₃′} we have an indeterminate evolution. For simplicity, let's confine the indeterminacy to φ₃ which, say, can evolve either into φ₃′ or φ₃′′ (where these states cannot both obtain, any more than can any pair of possible realizers). Thus T-temporal evolution will lead from {φ₁, φ₂, φ₃} to {φ₁′, φ₂′, φ₃′, φ₃′′}. TDD still holds if this set, {φ₁′, φ₂′, φ₃′, φ₃′′}, multiply realizes a single U-state. If this set does not multiply realize a single U-state but rather underlies, say, two U-states, Ψ₁′ and Ψ₂′, where {φ₁′, φ₂′, φ₃′} multiply realizes Ψ₁′ and φ₃′′ realizes Ψ₂′, then TDD fails even in the loose environment where T-temporal supervenience does not hold (since, for example, φ₁ cannot evolve into a realization of Ψ₂′).
Fig. 7.1 Indeterminate evolution preserving TDD

Fig. 7.2 U-temporal supervenience without T-temporal supervenience

But notice that it is possible to have top-down discipline even if the set of T-realizers of some U-state does not evolve to a set which realizes a single successor U-state. This can occur if the T-realizers can each indeterministically evolve to a realizer of any permitted (by the laws of U) successor of the initial U-state, as in Fig. 7.1.
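For readers who find state-transition tables easier to survey than quantified definitions, here is a minimal sketch in Python of clause (2) of D14; the state names and transition tables are invented, mirroring the φ/Ψ example just given:

```python
def has_tdd(realizes, t_succ, u_succ):
    """D14, clause (2): every T-realizer of a U-state must be able to evolve
    into a realizer of each permitted U-successor of that U-state."""
    for u in set(realizes.values()):
        t_realizers = [t for t, r in realizes.items() if r == u]
        for u_next in u_succ.get(u, set()):
            for t in t_realizers:
                if not any(realizes.get(s) == u_next for s in t_succ.get(t, set())):
                    return False
    return True

# Fig. 7.3-style failure: the realizers of Psi1 split deterministically across
# two U-successors, so phi1 can never reach a realizer of Psi2'.
realizes = {"phi1": "Psi1", "phi2": "Psi1", "phi3": "Psi1",
            "phi1'": "Psi1'", "phi2'": "Psi1'", "phi3'": "Psi2'"}
t_succ = {"phi1": {"phi1'"}, "phi2": {"phi2'"}, "phi3": {"phi3'"}}
u_succ = {"Psi1": {"Psi1'", "Psi2'"}}
print(has_tdd(realizes, t_succ, u_succ))   # False

# Fig. 7.1-style rescue: indeterministic T-evolution lets every realizer reach
# a realizer of either permitted U-successor, restoring TDD.
t_succ_indet = {"phi1": {"phi1'", "phi3'"},
                "phi2": {"phi2'", "phi3'"},
                "phi3": {"phi1'", "phi3'"}}
print(has_tdd(realizes, t_succ_indet, u_succ))   # True
```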
Top-down discipline can exist from a supervening domain, U, to its supervenience
base domain, T, even if T lacks T-temporal supervenience and U enjoys U-temporal
supervenience. In such a case, we could say that there is de-randomization of T
(see Fig. 7.2). It is possible that the apparent deterministic character of classical (or macroscopic) mechanics is the result of this sort of de-randomization, as the
quantum states that realize the macro-states provide top-down discipline for the
macro-domain. That is, while there may be intrinsic randomness at the micro-level,
it somehow cancels out in the myriad interactions involved in the realization of the
macro-states and their dynamics.11
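A toy numerical illustration of de-randomization (my own construction; it involves nothing specific to quantum mechanics): let each of N micro-units update by an intrinsically random rule and let the macro-state be their average. As N grows the macro-level increment approaches a deterministic drift:

```python
import random
import statistics

def macro_increment(n, trials=50):
    # Each of n micro-units steps +1 with probability 0.8, else -1; the
    # macro-state is the mean step. Return its mean and spread over trials.
    incs = [sum(1 if random.random() < 0.8 else -1 for _ in range(n)) / n
            for _ in range(trials)]
    return statistics.mean(incs), statistics.stdev(incs)

random.seed(1)
for n in (10, 1_000, 100_000):
    m, s = macro_increment(n)
    print(f"N = {n:>7}: macro increment {m:+.3f} +/- {s:.4f}")
# The spread shrinks like 1/sqrt(N): random micro-dynamics yield (nearly)
# deterministic macro-dynamics -- de-randomization.
```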
I think that much of the interest in multiple realizability which has been shown
by philosophers lies in the possibility of a failure of top-down discipline rather than
the mere possibility of multiple realization itself. For suppose that the supervenient
domain, call it U, enjoyed top-down discipline with respect to its supervenience
base, T. If there is T-temporal supervenience (and in much of physics, even, perhaps, in quantum mechanics given unitary evolution, we seem to have full temporal
supervenience) then top-down discipline implies that there is also U-temporal super-
venience (see R4 below). This would strongly suggest that the supervenient domain
can be reduced to its supervenience base, since there is a set of subvening states that
exactly map on to the theoretical relationships of the supervenient domain. That is,
there would be a model of U in the subvening T-states.
While we might expect that some domains, for example that of chemistry, enjoy
both top-down discipline and a reductive relation relative to physics, this is not
generally the case. Most domains above physics simply do not have (and do not
necessarily even want to have) the resources to fully determine state-to-state transi-
tions. If this lack of temporal supervenience in the supervening domain is coupled
with the lack of top-down discipline (which I think is the usual case, since we pre-
sumably have physical temporal supervenience for the underlying realizing states of
the higher-level states), the case for reduction is very weak even though every super-
venient state has, of course, an entirely definite, if very large, set of realizers in the
supervenience base. This is because there is no model of the supervenient domain
within the base. Consider Fig. 7.3. From the point of view of the T-domain situa-
tion, the U-state at t1 has to be thought of as a disjunction of the particular T-states
that can each realize that U-state. But that disjunction will not act like a U-state
since it transforms into a disjunction that cuts across U-classifications. This seems
to me a powerful reason for regarding supervenience without top-down discipline as
a non-reductive relation between domains or theories.12 However, even if there is no
reduction of U to T, the T situation nonetheless fully determines the U situation and
perhaps even explains why the U laws hold.
An interesting example of the failure of top down discipline stems from the rela-
tion between thermodynamics and statistical mechanics. Although the mechanical
account of thermodynamical properties perhaps comes as close to reduction as any
real inter-theoretic relation could, it fails. The reason is that top-down discipline is
missing. It is a law of thermodynamics that the entropy of an isolated system never
decreases (an isolated system in equilibrium should be able to maintain its entropy
at some maximum). However, the thermodynamical properties of a system are not
brute intrinsic properties but rather depend upon lower level features of the system's constituents. The thermodynamical states of, for example, a gas are realized by the
particular states of motion of its constituent molecules.
It is provable that the mechanical correlate of entropy (defined in terms of the
probability of micro-states occupying certain regions of the system's phase space; see Sklar 1993 for a philosophically based explication) is exceedingly likely to
increase with the natural evolution of the molecular states, but it is not certain. This is
because there are some possible molecular states which will evolve into lower entropy
configurations. If we take an isolated container of some gas and introduce through
a small hole extremely hot gas, the system will lose entropy via our interaction but will then evolve so that the gas regains equilibrium.

Fig. 7.3 A mode of failure of TDD

But it must be possible to take the state of the gas that has attained equilibrium and, at least in imagination, reverse
all velocities of all gas molecules.13 Since mechanics possesses what I have called
full temporal supervenience, the gas would evolve to a state where hot gas would
stream out of the input hole; it would thus be a naturally evolving system (which it
is fair to regard as essentially isolated) with decreasing entropy, contrary to the laws
of thermodynamics.
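The velocity-reversal thought experiment can be mimicked with a toy reversible gas (an illustrative sketch using exact integer motion on a ring, not real molecular dynamics): coarse-grained entropy rises from a low-entropy start, yet reversing all velocities drives it straight back down:

```python
import math
import random

L, NBINS, N, T = 10_000, 10, 500, 4_001

def coarse_entropy(xs):
    # Shannon entropy of the coarse-grained occupation of NBINS equal cells.
    counts = [0] * NBINS
    for x in xs:
        counts[x * NBINS // L] += 1
    return -sum(c / N * math.log(c / N) for c in counts if c)

random.seed(0)
# Low-entropy initial condition: every particle in the first cell.
xs = [random.randrange(L // NBINS) for _ in range(N)]
vs = [random.choice([-1, 1]) * random.randrange(1, 50) for _ in range(N)]

def evolve(xs, vs, steps):
    # Free, perfectly reversible motion on a ring of circumference L.
    return [(x + v * steps) % L for x, v in zip(xs, vs)]

print("S(start)          =", round(coarse_entropy(xs), 3))  # 0.0: all in one cell
print("S(after T steps)  =", round(coarse_entropy(evolve(xs, vs, T)), 3))  # much higher
# The 'demon': evolve, reverse every velocity, evolve again for T steps.
xs2 = evolve(evolve(xs, vs, T), [-v for v in vs], T)
print("S(after reversal) =", round(coarse_entropy(xs2), 3))  # back to 0.0
```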
In our terms, there are certain particular realizing states that fail to evolve into
permitted thermodynamical successor states, which is incompatible with top-down
discipline. Does this show that standard thermodynamics is just plain false, or is it a kind of special science with only approximately accurate, ceteris paribus laws which occasionally break down because of certain incursions from below?
Notice however that it is always possible, in the face of a recognized failure of
top-down discipline, to develop, or at least try to develop, a more discriminating
U-theory that differentiates the initial U-state at t1 into two states that differ in
some heretofore unrecognized U feature. This would permit a U explanation of why
the original (now regarded as under-characterized) state could evolve into distinct
U-states. This new found difference within the U domain would reflect a partition of
the set of T-realizers into two sets, each realizing a distinct U-state. This would restore
(so far as our Fig. 7.3 goes) top-down discipline. But it would also strengthen the
case for reduction, as we would have succeeded in finding a set of realizers that act
like U-states throughout their dynamics. There might be some general theoretical
pressure to search for such discriminators, at least in some cases.14 However, my
own feeling is that most high-level theories are not in the business of giving such a
complete description of their domain as to fully constrain its dynamics. The more
fundamental a theory is taken to be the stronger this pressure will be; on the other
hand, very high level theories, such as psychology, will hardly feel it at all.15
Fig. 7.4 A mode of failure of STDD

If we take seriously, as we should, the possibility of indeterministic evolution of states, we ought also to consider that there may be definite probabilities associated with possible state transitions. We can easily adapt our notion of top-down discipline to incorporate this refinement.
D16. Statistical Top-Down Discipline: A family of states (or theory), U, has statistical Top-Down Discipline (STDD, or U/T-STDD) relative to a family of states (or theory), T, if and only if:
(1) U supervenes upon T;
(2) for every U-state, Ψ, with permitted successors {Ψ₁, Ψ₂, …, Ψₙ} and transition probabilities {p₁, p₂, …, pₙ}, each T-realizer of Ψ, φ, has permitted successors φ₁, φ₂, …, φⱼ such that each φₖ realizes one of {Ψ₁, Ψ₂, …, Ψₙ} and, if φₖ realizes Ψᵢ, then the transition probability from φ to φₖ equals pᵢ.
This is unfortunately complex but an example of a failure of statistical TDD should make it clearer. In Fig. 7.4 we see a U-state that can indeterministically evolve
into two U-states. The realizing T-states mirror the indeterminacy (they meet the
initial definition of TDD given above) and in fact manage to duplicate the correct U-
transition probabilities overall (on the crucial assumption that the U-state is equally
likely to be realized by the two possible realizing T-states). But the T-transition
probabilities do not mirror the U-transition probabilities at the level of the individual
T-realizers. So statistical top-down discipline fails.
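The numbers behind this sort of case can be set out explicitly. In the sketch below (with probabilities invented to mirror Fig. 7.4), the T-level statistics aggregate to the correct U-level statistics even though no individual realizer's transition probabilities match them:

```python
# U-level: state A goes to A1 or A2, each with probability 0.5.
u_prob = {("A", "A1"): 0.5, ("A", "A2"): 0.5}
# T-level: A is realized by r1 or r2 (assume each equally likely), and each
# realizer evolves deterministically to a realizer of a different U-successor.
realizes = {"r1": "A", "r2": "A", "s1": "A1", "s2": "A2"}
t_prob = {("r1", "s1"): 1.0, ("r2", "s2"): 1.0}

# Aggregated over realizers, the U-statistics come out right...
agg_a1 = 0.5 * t_prob[("r1", "s1")]   # P(realizer is r1) * P(r1 -> s1)
agg_a2 = 0.5 * t_prob[("r2", "s2")]
print(agg_a1, agg_a2)                 # 0.5 0.5, matching u_prob

# ...but STDD (D16) demands the match realizer by realizer, and 1.0 != 0.5:
stdd = all(t_prob[(r, s)] == u_prob[("A", realizes[s])]
           for (r, s) in t_prob)
print(stdd)                           # False
```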
To my mind, the kind of situation illustrated in Fig. 7.4 is one that also counts
against reduction of the U-states to the set of realizing T-states. These states do
not behave like the U-states they realize at the level of individual states, though
their interactions conspire to make the U-state description accurate at its own level.
The particular probabilities in this case may seem somewhat miraculous (obviously they have been rigged to illustrate the desired failure of STDD), but the miracle is
less pronounced in cases where high-level statistics emerge out of myriads of low-
level processes and the high-level statistics reflect various laws of large numbers.
Furthermore, we expect that normally the development of the subvening theory, or
at least the recognition of its relation to the supervening theory, will follow on the
development of the supervening theory and of course the follow-up theory must be
constrained to produce known statistical relationships.16
Normally, we take the possibility of indeterministic state evolution in the domain
of a high-level theory to be the result merely of ignorance of (or sometimes unconcern about) underlying determining factors. Such factors may involve unknown, or
imprecisely specified, features of the high-level theory or, more often I think, may
involve features of lower level theories that intrude upon the dynamics of the high-
level theory to determine certain state transitions. For example, it seems entirely
plausible to suppose that in certain cases the total psychological state of someone
may not determine which action they will perform. Nonetheless, some action will be
performed, and there may be no underlying psychological feature that accounts for
this. There will, though, be some sub-psychological features which tip the balance.
For example, in choosing between the cheese plate or the chocolate cake for dessert,
perhaps the level of some neurotransmitter within some critical network of neurons
plays a crucial role even though there is absolutely no conscious (or unconscious in a psychological sense of the term, as opposed to a merely non-conscious) element of one's mental state that reflects that neurotransmitter level precisely enough to account for one's choice. If the underlying realizers of the high-level states are differentiated
by features that do not correspond to differences in the high-level characterization,
we would expect the probabilities of the state transitions of the realizers to differ from
those of the high-level state transition probabilities. Thus we would expect statistical
top-down discipline to fail (in the limit of temporal supervenience the probabilities
of transition in the low-level theory go to one or zero).
As noted above, this would also put a strain on a reductive account of the rela-
tion between the high-level states and the set of low-level realizing states. But when
reduction fails, we have at least the two options mentioned above: declare the high-
level theory false, or accept that the high-level theory is not in the business of totally
constraining the evolution of the systems it describes but is rather focused on eluci-
dating certain typically occurring patterns. Since the range of incursions from below
is indefinitely broad and need have no theoretical unity we should not expect any
special or unified account of the failures of higher level theories to fully characterize
their domains and dynamics.

7.5 Some Consequences

The definitions given above result in a number of theorems or at least a number of notable relations between them that, in addition to their own interest, can be used to clarify a variety of possible views on the nature of emergence. Some additional
assumptions are required for certain of these results, which are discussed as necessary.
R1. Final-physical-possibility does not imply physical-totality.
What I mean by this claim is just that there is no inbuilt necessity that the laws of
final-physics will sustain completeness, closure and resolution. It is, I think, the goal
of many modern physicists to produce a theory that has totality. The structure of
basic physics now in place would appear to be that of one aiming, so to speak, to
be a total theory, but the current theory is manifestly incomplete (or worse, incoher-
ent). We do not have a coherent theoretical description of every physically possible
situation, even if we permit such understanding to be general and qualitative. For
example, we just do not know the physics of processes that essentially involve both
quantum and gravitational processes (and the two fundamental theories involved, general relativity and quantum field theory, seem to be fundamentally inconsistent
with one another). There are several possible approaches to integrating the quantum
world with gravitation (for an intelligible overview of some see Smolin 2001), but
none are very far advanced at present and it remains far from clear whether any of
them will succeed.
In fact, it is a possibility that there is no theory which integrates these disparate
domains. Nature herself needs no theory and does not calculate the evolution of the
universe. There can be no a priori guarantee that there is a unified mathematical
description of this evolution. Even something as wild as Ian Hacking's Argentine fantasy cannot be ruled out. Hacking imagines that
God did not write a Book of Nature of the sort that the old Europeans imagined. He wrote a
Borgesian library, each book of which is as brief as possible, yet each book is inconsistent with every other... For every book, there is some humanly accessible bit of Nature such that
that book, and no other, makes possible the comprehension, prediction and influencing of
what is going on. (Hacking 1983, p. 219)
While it does seem clear that the research on quantum gravity is intended to complete
physics in a way that provides totality, whether physicists can succeed in developing
a final theory that is total depends not only upon their ingenuity but also upon the
nature of the world itself. It is impossible to say now whether this research will or
even can succeed, for it is not given beforehand that there is a single physical theory
that can encompass all elementary physical processes. Nor can we yet rule out as
incoherent the sort of radical emergence (to be defined below) that denies closure
via resolution. Therefore we cannot say in advance that the final physics is a total
theory or that the worlds that are final-physically possible are all such as to observe
totality.
R2. Strong supervenience of U upon T is compatible with the absence of T-temporal
supervenience.
Failure of T-temporal supervenience means only that there is a T-state, φ, of a system that does not have a predecessor which leads uniquely to φ. Obviously, this does not prevent U from strongly supervening upon T unless further substantial conditions are met.
R3a. Strong Supervenience of U upon T and T-temporal supervenience does not
imply U-temporal supervenience.
The reason is that Top-Down Discipline of U relative to T might fail. That is, the
set of realizers of some U-state(s) might lead to realizations of different subsequent
U-states, even though each such realizer T-state has a unique outcome, as illustrated
in Fig. 7.3.
R3b. Nor does strong supervenience of U upon T and the absence of T-temporal
supervenience imply the absence of U-temporal supervenience.
This is possible because there could be top-down discipline of U relative to T
despite the failure of T-temporal determination, as illustrated in Fig. 7.2 (see the
discussion of de-randomization above).
R4. Strong Supervenience of U upon T, Strong T-temporal supervenience and top-
down discipline of U relative to T implies Strong U-temporal supervenience.17
If we have T-temporal supervenience then U/T-TDD implies that every T-state which realizes some U-state, Ψ, must evolve to realize a single successor U-state. By strong supervenience, Ψ must have a realizing T-state. So Ψ must have a unique successor, which is to say, we have Strong U-temporal supervenience.
R5. Strong supervenience of U upon T implies that U-states (probably) have
T-efficacy.
Suppose that U strongly supervenes upon T and consider some U-state, Ψ. The state Ψ has a set of T-realizers {φ₁, φ₂, …, φₙ}. To test if Ψ has T-efficacy in the production of some state ξ we consider the counterfactual:

for some actual system, S, and some actual outcome T-state, ξ, of system S*, if S had not been in state Ψ then S* would not have been in state ξ.

In the nearest possible world where S is not in state Ψ we cannot have any of Ψ's realizer states obtaining; that is, none of {φ₁, φ₂, …, φₙ}. We can assume that S* being in state ξ was the outcome of one of these realizing states obtaining (since we need only find one such to reveal efficacy). Since none of these obtain in the counterfactual situation, it is unlikely that S*'s being in state ξ would come about nonetheless. Actual
judgments of efficacy would have to depend upon particular circumstances, but it
seems that it is very probable that states of strongly supervening domains have (or
typically have) efficacy. To take a definite example, suppose that I want an apple and
then reach out and take an apple. Was my desire efficacious in this transition? Well,
according to strong supervenience we assume that my wanting an apple was realized
by some physical state, P, from the set of possible realizers of apple-wantings. In
the counterfactual situation of my not wanting an apple, P (along with all the other possible apple-wanting realizing physical states) would not obtain (since if it did,
by supervenience, my desire would exist after all). Would I still reach out for the
apple? It is certainly possible to imagine situations where this occurs: suppose that
Dr. Horrible has trained his action-gun upon me and is ready to force my body
to reach for the apple at the appropriate time should I lack the desire. However,
such situations of counterfactual overdetermination are, presumably, very rare, and
thus we may conclude that strongly supervening states very probably (typically) have efficacy. (If T has full strong T-temporal supervenience then we can say that U-states definitely have T-efficacy. For then S* being in state ξ would have a unique predecessor and if that predecessor did not occur then S* would not be in state ξ. But then the counterfactual supposition that S is not in state Ψ would provide the guarantee that the predecessor realizing T-state did not obtain and so S* would not be in state ξ as required. This may be of interest since physics appears to enjoy full strong temporal supervenience.18)
R6a. Strong T-temporal supervenience implies global supervenience for any domain
with T-efficacy.
Recall that to claim that U globally supervenes upon T is to say that any two worlds that agree on their assignment of T-states to systems will agree on their assignment of U-states. Symbolically,

$(\forall w)(\forall w')(\forall \sigma)(\forall F \in U)((w \equiv_T w' \wedge F\sigma w) \rightarrow F\sigma w') \qquad (7.2)$

Thus the denial of global supervenience would be expressed, after some manipulation, as

$(\exists w)(\exists w')(\exists \sigma)(\exists F \in U)(w \equiv_T w' \wedge F\sigma w \wedge \neg F\sigma w') \qquad (7.3)$
That is, the denial of global supervenience entails that there are indiscernible T-worlds that differ with respect to the non-supervening U-state, F. To test whether F has T-efficacy we must evaluate the following counterfactual:

If F had not been the case then H would not have been the case.

Here, H is some outcome T-property or state which obtains in the source world (i.e. the world from which we will evaluate the counterfactual, w in the above) and which is putatively brought about by F. To perform the evaluation we consider the T-possible world most like the source world save for the differences necessitated by assuming ¬F. The T-possible world most like the initial world would be one that was identical with respect to T (up to the time when ¬F obtains), differing only with respect to F (and possibly other U-states). We know there is such a world, by the denial of global supervenience (w′ in the above). However, by strong T-temporal supervenience, that world evolves over time in exactly the same way as the source world. Therefore the counterfactual is false and F cannot have T-efficacy, contrary to the assumption of R6a. So global supervenience must hold.
R6b. Strong T-temporal supervenience implies strong supervenience for any domain
with T-efficacy.
This argument is slightly less convincing than that for R6a, because we need an
additional assumption. Suppose we have T-temporal supervenience but there is a
T-efficacious domain, U, that does not strongly supervene upon T. Then by the
definition of strong supervenience, there is a T-possible world where there is a system,
σ, and U-property, F, such that

$F\sigma \wedge \neg(\exists G \in T)(G\sigma \wedge \Box(\forall x)(Gx \rightarrow Fx)) \qquad (7.4)$

(That is to say, there is no property, G, which subvenes F in the appropriate way.) So we have Fσ and

$\neg(\exists G)(G\sigma \wedge \Box(\forall x)(Gx \rightarrow Fx)) \qquad (7.5)$
Now, this means either (1) that σ has no T-properties whatsoever or (2) there is such a T-property but it does not necessitate F. If the former, then σ is a radically non-T system. Suppose F has T-efficacy. Then the presence of F makes a difference in a T-property. But since F is a property characterizing utterly non-T entities, the presence or absence of F is not marked by any necessary T difference. For while it is perhaps possible to imagine that there might be some kind of a brute metaphysical connection between some T-state and the presence of F, this connection is not a T-law (T-laws do not say anything about radically non-T objects). Violation of this connection is thus not a violation of any T-law, and so the world in which this connection is broken is a T-possible world. So, given T-efficacy, there could be two T-indiscernible situations which differed in their outcome because of the difference in F. But this violates strong T-temporal supervenience. That is, since F is not marked by any T-state we can take the F world and the ¬F world to be T-indiscernible (and worlds can't get any more similar in T-respects than T-indiscernibility), and then use the argument for R6a to show strong supervenience.
Now, for the second case, suppose that strong supervenience fails because of (2). Then there is a T-property, G, that σ has but is such that G does not necessitate F. This entails that there is a world in which some system has G but does not have F. We might then try to argue that in every world, G has the same outcome by strong T-temporal supervenience. Thus in whatever world we choose to evaluate the counterfactual which tests for the T-efficacy of F, there will be no T-difference. Therefore F does not have T-efficacy; it cannot make any difference.
But this won't quite work as it stands since it is open to the following worry. The counterfactual test requires that we go to the world most similar to the source world except that ¬F holds. What if this is a world where ¬G holds? Abstractly speaking, this seems to be possible. However, such a world will be quite unlike the source world, since strong T-temporal supervenience requires that G's predecessor not appear in the test world (else we would get G after all) or else we have a miracle (which immediately violates T-temporal supervenience). That is, the assumption of ¬G propagates other T-changes throughout that world. Thus it is very plausible that a ¬G world is not the most T-similar to the source world. After all, we know that there is a world in which G and ¬F. If this is correct then the test world contains G and hence must evolve to the same successor state as the source world, thus revealing that F does not possess T-efficacy.19
Since strong supervenience implies weak supervenience it trivially follows that
strong T-temporal supervenience implies weak supervenience of T-efficacious
domains. It is also the case that since strong supervenience implies global superve-
nience we have R6b implies R6a. Furthermore, since strong supervenience implies
what I called local supervenience, we also get that strong T-temporal supervenience
implies local supervenience.
Note also that we have to assume T-efficacy in the above since nothing can rule
out the possibility that there are parallel domains that do not supervene upon T
but rather exist entirely independent of the T-world yet enjoy rich causal relations
amongst themselves, a situation that would be approximated by considering Leibniz's system of monads but without the pre-established harmony. The assump-
tion of T-efficacy forges an essential link between the U and T domains. Such an
assumption is reasonable since we have little interest in hypothetical domains that
are entirely isolated from each other. In particular, we are not very interested in
an epiphenomenalist view of the mind-body relation, though it is important to see
that epiphenomenalism cannot be ruled out by any considerations advanced thus
far. It is also interesting to note that, given (R5), we have it that strong T-temporal
supervenience implies that U is T-efficacious if and only if U strongly supervenes
upon T.
This highly interesting and perhaps initially surprising result reveals the signifi-
cance of temporal evolution of states for the metaphysics of dependence. If we have a
domain the states of which evolve through time according to the laws of that domain,
then there are tight constraints placed upon the states of any other domain which are
to have effects within that initial domain. They must ride upon the lawful transi-
tions of the initial domain to both preserve those lawful transitions and have their
own efficacy, which is to say, that domain must supervene upon the initial domain.
R6c. Weak T-temporal supervenience implies weak supervenience for any domain
with T-efficacy.
The argument for this claim is still weaker since additional assumptions (or modal
intuitions) are needed. The argument proceeds in parallel with that of R6b. But when
we consider the first horn of the dilemma, that σ might be a radically non-T system, we must consider the counterfactual: if σ had not been F then things would have been T-different. It seems to me that the closest world in which σ is not F is one in which the T-temporal supervenience relations are not altered (since F has nothing whatsoever to do with T, it is hard to see why the T relations would be different in that world).20 If so, F's T-efficacy would fail. (The alternative idea, I guess, is that because of some kind of pre-established harmony, in the nearest world where σ is not F, the T-temporal supervenience relations must be altered enough to make the counterfactual come out true. But even in such a case, it seems that it is the alteration in T that accounts for the difference in outcome so that intuitively F has no efficacy in the T domain after all.) The other horn of the dilemma leads to the claim that there is an object, σ′, in the very same world as that in which σ has F such that σ′ has G but does not have F. Then in that very world we have a test of F's efficacy and, because of weak T-temporal supervenience, within any world the T-temporal supervenience relations are the same. Thus G will lead to the same outcome for system σ′ as for system σ. So F's T-efficacy seems to fail. If it is insisted that some
kind of singular causation is possible then we must use the counterfactual test, and
then we can employ the plausibility argument given immediately above.
R7. T-Totality implies strong T-temporal supervenience (up to intrinsic randomness
of T).
Totality is a very strong condition on the nature of the laws of a theory as well as
on the metaphysical structure of the world as constrained by that theorys descrip-
tion (roughly, constituent structure with bottom-up causation sufficient to yield
all phenomena). But is it enough to guarantee temporal supervenience? Let us see.
Assume that T is (supposed to be) a total theory but that T-temporal supervenience
fails. Then there is a T-property, G, of system σ that does not have a unique outcome (let's say that in such a case G diverges). If G is a complex state then by the
property of totality I labeled resolution we can resolve it into a set of elementary
T-constituents that act entirely according to T-laws. Therefore, if G does not have a
unique outcome this must be because some elementary state does not have a unique
outcome. So we might as well consider G to be such an elementary state. It is
then impossible for G to diverge in virtue of some sub-T theory which realizes the T-states and which accounts for the divergence of G, for then not everything that happens would be the result of the operation of T-laws and T-totality would be
violated. The only possibility of divergence is if T has some intrinsically random
elements within it. That is, if it is a brute fact that for some T-state two (or more)
distinct states can ensue. To take a common example, on certain views of quantum
mechanics (e.g. those that espouse the uncontrollable collapse of the wave function
view of measurement) QM-temporal supervenience fails. A particular uranium atom,
in state H, may or may not fission. If it does we get, say, state H₁; if it does not we get state H₂. There is nothing within quantum mechanics to account for this (and,
at least on the view of QM we are considering, no hidden variable lurking beneath
quantum mechanics either). The fissioning or lack of fissioning at any particular
time is intrinsically random. If there is no intrinsic randomness then it seems that
totality implies temporal supervenience. We could leave this result there: if there is
no intrinsic randomness in the elementary states of T then totality implies temporal
supervenience (this is less trivial than it appears since high-level theories can fail
to observe temporal supervenience without possessing intrinsic randomness; totality
implies that the lack of temporal supervenience must result from intrinsic random-
ness, not the sorts of intrusions from below that characterize high-level theories). In
fact, it implies strong temporal supervenience since totality is a property of the laws
of a theory and so naturally sets the conditions of possibility relative to that theory.
However, there is more to say about intrinsic randomness. It is important to see that
the possible existence of intrinsic randomness does not fundamentally change our
result. To take account of this possibility we would have to complicate our definitions
considerably, along the following lines. In place of individual states we would have
to take probabilistically weighted sets of states. We could then recast our arguments
in these terms. Instead of a unique outcome state as the defining characteristic of
temporal supervenience we would have a unique statistically weighted set of states.
Although this would get very messy I think in the end we would get completely
analogous results to those obtained when we do not consider intrinsic randomness. A form of statistical temporal supervenience would be defined in terms of predictably weighted ensembles of states.
As an illustration, consider a view once defended by Karl Popper and John
Eccles (Popper and Eccles 1977). In support of a form of Cartesian dualism, Popper
and Eccles hypothesized that perhaps the mind could surreptitiously act under the
cloak of quantum mechanical indeterminacy, subtly skewing the intrinsically random
processes occurring at the synapses of the neurons. This is conceivable, but it would
be experimentally revealed, in principle, by noting that the distribution of outcome
states of synaptic conditions did not strictly match the statistics predicted purely on
the basis of quantum mechanics (once enough evidence had been accumulated it
would be overwhelmingly likely that orthodox QM was failing to predict the correct
statistics). In this way, quantum mechanics would be refuted. If quantum mechanics
is true (and total, or part of the total final-physics), then the mind can only act in
accordance with the statistics predicted by quantum mechanics. This would bear
out the statistical version of totality. This reveals that intrinsic randomness within a
theory only complicates temporal supervenience but does not destroy its essence.
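A back-of-the-envelope version of the envisaged experiment (all numbers are hypothetical): if orthodox quantum mechanics predicts a synaptic event with probability 0.30 but the mind skews it to 0.31, the standard score of the discrepancy grows with the square root of the number of observations, so sufficient data makes the deviation unmistakable:

```python
import math

p_qm, p_mind = 0.30, 0.31   # hypothetical predicted vs. 'skewed' frequencies

def z_score(n):
    # How many standard deviations the skewed frequency sits from QM's
    # prediction after n observed synaptic events (binomial approximation).
    return (p_mind - p_qm) / math.sqrt(p_qm * (1 - p_qm) / n)

for n in (10**3, 10**5, 10**7):
    print(f"n = {n:>8}: z = {z_score(n):5.1f}")
# z ~ 0.7, 6.9, 69: with enough data the skew becomes overwhelming evidence
# against the orthodox QM statistics.
```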
We could then define the statistical efficacy of a state, F (within a theory that allows
for some intrinsic randomness) in terms of the presence of F making a difference
to the outcome statistics over repeated counterfactual trials. For example, adding
some weight to one side of a die (pretending for the sake of the argument that the
die is an example of an intrinsically random system) is statistically efficacious, for while it does not prevent any number from coming up it does change the outcome
statistics over many trials (perhaps only very subtly).
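Worked out with exact fractions (the shift of 1/60 is invented purely for illustration), the weighted die shows what statistical efficacy amounts to: every face remains possible, but the expected counts change:

```python
from fractions import Fraction

fair = {face: Fraction(1, 6) for face in range(1, 7)}
# A hypothetical loading: shift 1/60 of probability from face 1 to face 6.
loaded = dict(fair)
loaded[1] -= Fraction(1, 60)
loaded[6] += Fraction(1, 60)
assert sum(loaded.values()) == 1   # still a genuine probability distribution

for face in (1, 6):
    print(f"face {face}: expect {600 * fair[face]} fair rolls vs "
          f"{600 * loaded[face]} loaded rolls per 600 throws")
# face 1: 100 vs 90; face 6: 100 vs 110 -- every outcome remains possible,
# but the weight is statistically efficacious over many trials.
```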
R8. Strong Supervenience of every T-efficacious domain, U, upon T and strong
T-temporal supervenience implies T-Totality.
Suppose every T-efficacious domain, U, strongly supervenes on T but that
T-totality fails. Then either closure, completeness or resolution fails.
If completeness fails then there is an entity which has no (non-trivial) T-description,
that is, an entity which is a radically non-T object. This entity must be from some
domain, U. But then there could be a difference in U with no difference in T, for
while it is perhaps possible to imagine that there might be some kind of a brute
metaphysical connection between T-states and the U-states, this connection is not
a T-law if U (or that aspect of it relevant to the nature of the entity in question) is
a radically non-T domain. Violation of this metaphysical connection is thus not a
violation of any T-law, and the world in which this connection is broken is thus a
T-possible world. But this violates strong supervenience.
Suppose, then, that closure fails. Then for some domain, U (which, here and below,
might be T itself), some U-state, Ψ,21 occurs in violation of some T-laws (say then that Ψ is a miraculous state or, for short, a miracle). But, by strong supervenience, Ψ has a realizing T-state, φ.22 By strong T-temporal supervenience, φ has a predecessor state, φ₀, for which φ is the necessary unique outcome. Could φ occur but occur in violation of T-laws? No, for then it would be T-possible for φ not to occur even though its predecessor state does occur. If it is not a matter of T-law that φ₀ led to φ then there is a T-possible world where we have φ₀ but where φ does not occur. But that violates T-temporal supervenience. Therefore, φ's occurrence is not in violation of any T-law. Since φ is the realization of Ψ, Ψ's occurrence does not after all violate any T-law, so closure cannot fail.
Finally, suppose that resolution fails. Then there is a domain, U, and a U-state, Ψ, such that either (1) there is no constitutive description of Ψ in T-elementary terms or (2) there is such a description but the presence of a particular instance of Ψ leads to system behaviour distinct from the behaviour of Ψ's elementary T-constituents as they would act under the T-laws governing the elementary T-constituents. Let's label this possibility the divergence of Ψ's behaviour from that of Ψ's elementary realizers (the shadow of a significant form of emergence is obviously looming here). The first disjunct violates completeness.23 On the second disjunct, there must be a T-state that subvenes Ψ, call it φ, which is composed of a set of elementary T-features {τ₁, τ₂, …, τₙ} (we know we have this decomposition by way of the assumption that resolution fails via divergence). T-temporal supervenience means that there is a unique outcome of each τᵢ, so {τ₁, τ₂, …, τₙ} has a unique set of elementary T-features as its outcome. Therefore, divergence of Ψ's behaviour from that of Ψ's elementary realizers violates T-temporal supervenience.24 Since we have assumed that T-temporal supervenience holds, such a Ψ cannot exist, and therefore resolution holds. So T-Totality follows.
R9. Strong T-temporal supervenience implies T-Totality (across domains with
T-efficacy).
From above (R6a) or (R6b), strong T-temporal supervenience implies Strong
T/U supervenience or global T/U supervenience for any domain with T-efficacy.
Therefore, from (R8a) or (R8b) the result follows.
R10. Strong T-temporal supervenience if and only if T-Totality (across domains with
T-efficacy).
Various forms of this follow from (R9) and (R6).

7.6 Varieties of Emergence

We are now in a position to characterize emergentism in some detail and discuss
distinct forms it might take. Emergentism is the doctrine that certain features of the world (features of the emergent domain) emerge out of other features from another domain, call it the submergent domain. We have already seen enough complexity
to know that defining exactly what emergence is and how it works is not so easy.
The simplest view, and one that dovetails with the approach of this chapter, is to
regard emergence as relative to theoretical descriptions of the world. A feature is
emergent only if it is part of one theoretical description but not another. For example,
the valence of an atom is emergent inasmuch as it forms a part of chemical theory
but not a part of physical theory (i.e. physics). Or again, the fitness of a genome
is an emergent feature insofar as it is utilized by evolutionary biology but not, for
example, by chemistry.
Of course, this preliminary criterion is but a part of what it is for a feature to
be an emergent feature. We must add a notion of the direction of emergence, for
while valence is a good example of an emergent feature we are not inclined to call
spin an emergent just because spin is not mentioned in evolutionary biology. The
direction of emergence brings supervenience into the picture in a natural way. For
the additional idea is that of determination of the emergent feature by features of the
submergent domain. Thus, we find it appropriate to say that valence is determined
by physical features, but have no reason at all to suggest that spin is determined by
features peculiar to evolutionary biology. It is the nature of this determination relation
that clouds the issue of emergentism, and suggests that work on supervenience may
be of assistance in its clarification.
For example, if we have strong supervenience of U upon T then we have what are
in effect laws of emergence that are constant across all T-possible worlds. These
laws of emergence are expressed in the latter part of the formula definition of strong supervenience (i.e., the □(∀x)(Gx → Fx) part of the definition, where, recall, G ∈ T and F ∈ U). Notice that this provides another reason for preferring strong supervenience over global or local supervenience: it locates a definite T-state as the
base for the emergent properties and this is in line with most emergentist thought.25
If we consider the difference between strong and weak supervenience in terms of
emergence, we see that weak supervenience allows for the laws of emergence to vary
across submergently possible worlds, which is an interesting and, as we shall see,
actually critical component of any serious form of emergentism.
One digression. Certain properties can perhaps be called emergent even though
they fail to meet our first criterion. Mass, for example, figures in physics, yet the
mass of a physically complex object can be thought of as an emergent. This is
a compositional sense of emergence, roughly characterized as a feature which an
object has but which no proper part of the object possesses, although the parts possess
cognate properties. Thus, having a mass of 1 amu is a property of an (ordinary)
hydrogen atom, but none of its proper parts have this property. This seems to me
rather a degenerate sort of emergence, for the generic property (the determinable, if you will, in this case mass) equally applies to both the whole and its proper parts.
It is not surprising that a supervenience relation also holds between the submergent
properties and the compositionally emergent properties, and usually one that is pretty
straightforward and unlikely to lead to any substantial issues of emergentism.
This is not to say that the relation between the mass of a composite and that of its
constituents is simply additive. It is not. Because of the mass-energy convertibility,
the energy of binding amongst constituents is part of the law of emergence for the
mass of the composite system. The mass of the composite is somewhat less than the
sum of its constituents, which is to say that energy is released through the formation
of the composite. But the law of emergence in such a case follows from the laws of the submergent domain; the laws that govern how massive entities interact to form more complex structures fully determine the mass of the composite entity.
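A worked number may help (standard physical constants, rounded; the arithmetic only exhibits the sign and scale of the effect): the mass of a hydrogen atom falls short of the proton-plus-electron sum by its 13.6 eV binding energy divided by c²:

```python
m_p = 1.672_621_9e-27         # proton mass, kg
m_e = 9.109_383_7e-31         # electron mass, kg
c = 2.997_924_58e8            # speed of light, m/s
E_bind = 13.6 * 1.602_177e-19   # hydrogen ground-state binding energy, J

mass_defect = E_bind / c**2
print(f"mass defect: {mass_defect:.2e} kg")                    # ~2.4e-35 kg
print(f"fraction of total: {mass_defect / (m_p + m_e):.1e}")   # ~1.4e-8
```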
In marking out the central features of emergentism we must begin by contrasting
emergentism with dualism. Emergentism is anti-dualist; emergent features are fea-
tures of objects which always have descriptions (albeit incomplete insofar as they neglect the emergents) from within the submergent domain. Emergence does not
generate a realm separate and apart from the submergent domain. A second crucial
feature of emergentism is the denial of epiphenomenalism; emergent properties are
supposed to be efficacious, their presence makes a difference to the way the world
goes. However, the nature of this efficacy is not always clear and can vary from a
weak (and generally plausible) to a very strong (and quite implausible) claim about
the role of the emergents in the unfolding of the world.
We can use the results of this chapter to define the two fundamental types of
emergence (along with a rather peculiar and probably useless additional variant).
The weakest form of emergence is one which offers no threat to the operation of
the submergent domain from which the emergents spring. To put it another way,
the existence of such emergents is explicable (in principle, as discussed below) on
the basis of the submergent domain. Examples of such emergence are, presumably,
the liquidity of water, the shape of macroscopic objects, the chemical properties
of substances, the weather, etc. Such an emergence poses no ontological threat: the emergents are clearly features of systems describable in submergent terms. And
emergents of this kind can be said to have a kind of efficacy. The view that meets
these conditions is what I have called conservative emergence.
D17. U conservatively emerges from T if and only if T is a total theory and U has
T-efficacy.
Some remarks on this definition. It is admittedly highly abstract and rather remote
from any real phenomena that might serve as examples. But we have seen such
phenomena above in the discussion of dynamical systems and emergence in Chap. 6
in the features called dynamical autonomy and multiple realizability. A more
general positive characterization of the kind of autonomy at issue is given by Margaret
Morrison as follows: with emergent phenomena we have generic, stable behavior that ... is immune from changes to the equations of motion of the system (Morrison 2006, p. 886). Such stability will however always be subject to intrusion from below
and this is a kind of symptom of low level determination.
If T is a total theory and U has T-efficacy, then U strongly supervenes upon T (by
R6b and R10), so we know that features at the T-level completely determine those
at the U level and do so in terms of the constitutive structures. Because of resolution
we can expect there is, at least in principle, an explication of the origin of emergent
properties based upon the elementary T-features into which every U feature can be
resolved. That is not to say that where there is conservative emergence we should
eschew higher level explanations in search of lower level ones. In fact, the higher
level features might be precisely those that generate understanding of the systemic
properties.26 Nonetheless, the subvening level must be such as to enable generation
of the stable higher level structures whenever it attains the appropriate state. Such
emergents can have efficacy in the way that complexes of elementary T-features can
have efficacy. This, in turn, will allow such emergents to pass the counterfactual test
of efficacy, and hence they will meet the definition of efficacy given and used above.
Nonetheless, everything that happens, including the combinations of T-elementary
features that underlie the emergents, happens in accord with the laws of T.
Furthermore, it is worth recalling the discussion of Chaps. 5 and 6. When I say that
under conservative emergence we would have an in principle explication of emer-
gence in terms of the submergent domain I do not mean that the explication would
be simple or in any sense practically attainable. It might be of such complexity that
it will remain forever beyond our full comprehension. Generally speaking, these
explications will proceed on a case by case basis, by the deduction from T-states
and T-laws of all the behavioural capacities of U-states as well as the deduction of
U-laws as springing from these behavioural capacities. We already know enough
about complex systems to be quite sure that the detailed explanation of many emer-
gents will be beyond our best efforts. However, even in the absence of detailed
accounts of conservative emergence we might well have a pretty fair general idea of
how the determination relation works in many cases.
A good recent example illustrates both the nature of conservative emergence and
the need for an in principle clause (I draw the example from DiSalvo 1999). We
have known for a long time how to perform thermoelectric cooling, in which electric
current is directly converted into a temperature gradient. The effect was discovered
in 1834 by Jean Peltier (you can now buy specialty picnic coolers and the like that
operate thermoelectrically). The advantages of such cooling include compact size,
silent operation and no moving parts, but applications have been limited by the
low efficiency of current materials. Thermoelectric cooling operates at the junction
of two different conductors. Passing a current through the junction causes charge
conductors to diffuse away from the junction, taking heat with them. While this
is an extremely over simplified and highly schematic explanation, it reveals how
thermoelectric cooling is conservatively emergent. The efficiency of the process is
critically dependent upon the nature of the conductors forming the junction however,
and is expressed in a parameter known as zt. Known materials have a zt of about 1;
if materials of zt around or above 4 could be found, thermoelectric cooling would
vie with conventional methods of refrigeration for efficiency. Unfortunately, there
is no general and practical way to accurately predict the zt of a substance. DiSalvo
explains the situation thus:
Understanding electrical carriers in crystalline solids is one of the triumphs of modern
quantum mechanics, and a theory of te [thermoelectric] semiconductors has been available
for about 40 years. This transport theory needs one input: the electronic band structure. More
recent advances in determining the band structure, based on density functional theory and
modern computers, give acceptable results. The main input to band theory is the crystal
structure of the material. Known compounds can be sorted into a much smaller group of
crystal structure types. A given structure type may be adopted by many compounds, and
by comparison, we can often predict which elemental compositions will have this same
structure because of similar atom sizes and average valence, for example. However, many
new ternary and quaternary compounds adopt new structure types which cannot be predicted
beforehand, and without the crystal structure, electronic band structure cannot be calculated.
Not only is the inability to predict crystal structure (and thus composition or properties)
the main impediment to predicting which new materials will make better te devices, this
inability is most often the limiting factor in obtaining improvements in most other materials
applications. (DiSalvo 1999, p. 704)

This inability to predict conservatively emergent properties stems from a number
of problems: the incompleteness of our grasp of theory, our inability to perform
extremely complex calculations, lack of knowledge of details of atomic structure,
and so on. But there is no real question that a mathematical archangel (to once again borrow Broad's evocative term) unfettered by limitations of computational speed or memory capacity could deduce zt from quantum mechanical principles and knowledge of the basic physical structure of the candidate materials.
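For orientation, the zt of the passage is the dimensionless thermoelectric figure of merit, conventionally written zT = S²σT/κ (Seebeck coefficient S, electrical conductivity σ, thermal conductivity κ, absolute temperature T). A quick sketch with illustrative values in the neighbourhood of good room-temperature materials (the specific numbers are my assumptions, not data from DiSalvo):

```python
S = 200e-6     # Seebeck coefficient, V/K
sigma = 1e5    # electrical conductivity, S/m
kappa = 1.5    # thermal conductivity, W/(m*K)
T = 300        # temperature, K

zT = S**2 * sigma * T / kappa
print(f"zT = {zT:.2f}")   # ~0.8, the order of the 'about 1' cited above
```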
More abstractly, if we have a total T-theory then we can in principle explicate
the behaviour of any system of any complexity from a knowledge of its elementary
T-structure. We know from totality, that all systems have such a structure and closure
guarantees that such predictions are in principle possible (they may, of course, yield
only statistical results depending upon the nature of the T-theory).27
So, conservative emergence is the model of emergence one must adopt if one
accepts that physics is (or will be) a total theory. It may be the natural view of
emergence from within the scientific view of the world, since that view is taken by
very many thinkers to include the claim that physics, which provides the fundamental
description of the world, is a total theory. But I would like to remind the reader that,
as noted above in the discussion of (R1), no one knows for certain whether or not
the final physics will be a total theory, and hence no one knows if the fundamental
structure of the world is physically total either. Whether or not the world is total is
an empirical matter, and cannot be decided by any a priori metaphysical arguments.
The original emergentists, which include Mill (see Mill 1843/1963, especially
Bk. 3, Chap. 6), Lewes (who introduced the term emergentism in Lewes 1875),
Morgan (1923), Alexander (1920) and Broad (who, in 1925, articulated and vigorously defended the coherence and general plausibility of emergentism although in
the end did not fully endorse it), would not have been satisfied with mere conserva-
tive emergence (for an excellent general discussion of their views, see McLaughlin
1992). They wanted more, and in particular they wanted their emergents to possess
both a stronger form of efficacy and a more mysterious and portentous relation to
the submergent domain than conservative emergence allows. Furthermore, although
the move from submergent to emergent was to be mysterious it was to be a part of
the natural order, not a mere accident or lucky chance. That is, the presence of an
emergent feature was supposed to be in principle unpredictable even given a com-
pletely precise specification of the submergent domain and a complete theoretical
understanding of that domain. A sign of this kind of emergence is, as Broad put it,
that in no case could the behaviour of a whole composed of certain constituents
be predicted merely from a knowledge of the properties of these constituents, taken
separately, and of their proportions and arrangements in the particular complex under
consideration (Broad 1925, p. 63).
The point of talking of prediction in principle is to provide a natural way to
erase the epistemological constraints which can cloud metaphysics. The claim of
impossibility of prediction of U-states on the basis of fundamental T-states even in
principle is the denial of determination or strong supervenience of U upon T. It


is conceivable that this venerable way to approach the ever present gap between
epistemology and metaphysics which links in principle predictability with strong
supervenience masks another distinction, a distinction between predictability (in any
sense) and determination. If so, the deployment of the idea of prediction in principle
would become (even more of) a metaphor for the determination of all properties but
those of the submergent domain. But I take it that Broad and the other emergentists
did intend to speak of a lack of determination or supervenience when they talked of
a lack of predictability in principle and I will follow them in this.
Following Chaps. 5 and 6 above, let us call this hypothetical, new form of emer-
gence radical emergence. It is obvious that radical emergence implies that the sub-
mergent domain is not total (or that the theory of the submergent domain is not
total). The failure of totality can be further diagnosed as a failure of closure. Com-
pleteness can hold, since the emergents are not new substances; and resolution can
hold in the sense that complexes that possess emergent properties can be resolved
into elementary constituents of the submergent domain. But the behaviour of these
complexes ismost emphaticallynot given by the concerted behaviour of those
elementary constituents as they act, or would act, solely under the laws of the sub-
mergent domain. Thus closure must fail. We also know, from R10, that the failure of
totality implies that we do not have strong T-temporal supervenience.
So if radical emergence is true then physics is not total. This could obtain in two
ways. The first is that physics, as a theory, could be merely formally total. That is,
physics could have the form of a total theory but be false of the world. Right now,
given the pretensions of physics and its structure, this seems to be the only way
radical emergence could be true. It is from this viewpoint that a severe tension is
generated between radical emergence and physical theory. But the other way totality
can fail is, I think, more promising. It is possible to imagine physics just giving up
its efforts to be total and resting content with describing the nature of the ultimate
constituents of the world with no implication that this description will implicitly
fully constrain all of the world's behavioural possibilities. It will, that is, be possible
to resolve every complex physical entity into ultimate physical constituents, but not
possible, even in principle, (and not thought to be possible) to recover the behaviour
of every such complex merely from the interactions of the constituents as they act
according to the laws of fundamental physics. This would of course be to embrace
radical emergentism.
This would indeed be a radical departure from our usual understanding of the
aim of physical theory, for it requires a physics that is essentially incompletable,
one admitting that the transition from elementary physical activity to the activity
of complex physical systems is not entirely governed by fundamental physical law.
Thus it feels instantly implausible to modern sensibilities. And this implausibility
may be grounded in more than emergentism's unfashionable opposition to the current
physicalist zeitgeist, since emergentism may contradict some of the very general
principles upon which our modern physical understanding of the world is based. But it
is difficult to decide whether radical emergence actually requires the violation of such
principles. For example, does radical emergence entail the violation of the principle
of the conservation of energy? It seems that it might not, and there are at least three
ways to arrive at this conclusion. However, one of these ways, due to McLaughlin
(1992), reveals the almost irresistible urge back towards the totality of physical theory
and the consequent demotion of radical emergence to mere conservative emergence.
McLaughlin's suggestion applies to a system with emergent features acting in a
way that appears to diverge from the action we would expect based on the physical
understanding of the constituents of the system, thus violating the conservation of
energy. Energy conservation can be reclaimed by positing a new sort of potential
energy field which the emergent features can, so to speak, tap. The difficulty with
this solution is that this potential energy field will naturally be counted as a new
and basic physical feature of the world, which restores totality to physics and with it
predictability (in principle) of the behaviour of complex systems from a knowledge
limited to all the fundamental physical features of the system in question.
An example to illustrate this problem is the bizarre Casimir effect, which at first
sight may seem to offer an instance of radical emergence. If two flat metal plates are
placed very close to each other (but not touching) there will arise a (non-gravitational)
force between them, pushing them together ever so slightly. Is this the radical emer-
gence of a new force emerging from certain quite particular macroscopic configura-
tions of matter? No. The standard explanation of the Casimir effect is, roughly, that
there is energy in the quantum mechanical vacuum which is, because of the nature of
the metal plates and arcane details about the possible distributions of virtual photons
and their characteristic wave lengths between and beyond the plates, slightly greater
outside the plates than between them. The point here is that the explanation spoils the
appearance of radical emergence, for the potential energy locked in the vacuum is
explicable in elementary terms, and thus the force between the plates is predictable
(in principle) just from basic physical features (and of course it was predicted, by
Hendrik Casimir, in 1948 and unambiguously experimentally observed in 1997).
McLaughlin's proposal, then, is a general method of transforming radical into
conservative emergence, by the postulation of new potential energy fields which can
be regarded either as stemming from or as themselves constituting new elementary
physical features of the world. That is, these fields might be explicable in more
elementary physical terms (rather as in the example of the Casimir effect) or they
might be new brute facts about the basic physical structure of the world.
The second proposal retains the radical nature of emergence but requires that there
be a high-level conspiracy to "balance the energy books". That is, the defender of
radical emergence can believe that energy is created or destroyed, as needed, out
of the blue when certain complex configurations are realized but that somehow an
equal amount of energy disappears, or appears, from the universe somewhere else
whenever these configurations arise. This is not logically incoherent, but would of
course be utterly mysterious from the point of view of basic physics.
With respect to this defense of energy conservation, it seems the defender of
radical emergence might do better to simply allow the conservation of energy to
lapse on the grounds that it is better to have one mystery rather than two (the second
being the odd and oddly coordinated disappearance/appearance of energy). After all,
if we are allowing radical emergence there is no reason to deny that energy itself can
radically emerge.
But a third method of saving energy conservation is perhaps more in line with
radical emergentism and its assertion that fundamental physics, conceived of as a
total theory, is incompletable. The main idea here is that energy conservation is
a system-relative property, and those systems exhibiting emergent properties will
abide by energy conservation as systems, with no implications about the processes
involved in the system's coming into being. What I mean can best be explained by
an example. Physical systems can often be described mathematically in terms of a
certain function, called the Hamiltonian, which encodes relevant properties of the
system as well as forces acting on the system. The simplest case of use to us is the
classical Hamiltonian of a single particle constrained to move in a single dimension
subject to a field of force. The mathematical expression is this:

$$H(x, p) = \frac{p^2}{2m} + V(x)$$

Here p represents momentum, m the mass, and V(x) the potential energy of the force
field in which the particle moves. From this equation one can deduce the functions
governing the position and momentum of the particle over time. Most significantly,
the Hamiltonian function is an expression of the energy of the system, and it can be
shown that the time rate of change of H (x, p) is exactly 0, i.e. that the energy of
the system cannot change over time. But notice that this description of our system
says nothing about the nature of the particle involved, and nothing about the nature
of the force which governs its motion. So a system with emergent properties could
instantiate this description at the level of the emergent features. The radical emergen-
tist regards as another matter altogether the issue of whether, or how, the constituents
(entities or processes) of this system unite or combine to create the whole system.
Thus we are free to regard energy conservation as a constraint only upon systems as
such. For if radical emergentism is true, there is no way to understand the creation
of complex systems entirely in fundamental physical terms. Simple, non-emergent
systems will obey the principle of the conservation of energy and so too will com-
plex systems with emergent properties. The transition from simple, non-emergent to
complex, emergent systems is not explicable by basic physics and is thus not bound
by principles restricted to fundamental physics.
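The conservation claim made above about the Hamiltonian can be verified in two lines. Hamilton's equations for this system are

$$\dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad \dot{p} = -\frac{\partial H}{\partial x} = -V'(x)$$

so that, along any trajectory,

$$\frac{dH}{dt} = \frac{\partial H}{\partial x}\,\dot{x} + \frac{\partial H}{\partial p}\,\dot{p} = V'(x)\,\frac{p}{m} + \frac{p}{m}\,\bigl(-V'(x)\bigr) = 0$$

Nothing in the calculation depends on what the particle is or on what generates $V(x)$, which is the point at issue: the conservation constraint applies to the system as described, whatever the provenance of the system.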
Although radical emergence denies the totality of the submergent domain, it is
an open question whether we could allow strong supervenience within our radical
emergence, since while non-T-totality implies non-T-temporal supervenience, it does
not imply that strong supervenience fails.
However, it is easy to argue that strong supervenience is at odds with radical
emergence, for it would make the emergent features objectionably unexplanatory or,
in a way, epiphenomenal. Consider that the lack of T-temporal supervenience entails,
at least, that it is possible for two indiscernible T-states to have different outcomes. If
these indiscernible T-states are the base for an emergent property then, given strong
supervenience, they will subvene the very same emergent property. Therefore the
emergent property will be unable to explain why there is divergence when you have
T-indiscernible initial states. The lack of T-temporal supervenience is brute (relative
to U at least). If we want T-divergence to be explained by the emergent features then
we cannot have strong supervenience.
To make this argument slightly differently, the radical emergentists believed that
the behaviour of complexes was in principle unpredictable from a knowledge, however
complete, of the entities, relations and the laws governing the elementary
submergent features. They nonetheless took it that the emergents were supposed to
explain the divergence of the behaviour of the complex from the behaviour of the
complex as it would be if it were determined solely by submergent laws and states
alone. But as noted, if we have strong supervenience then the complexes would
always subvene the same emergent feature (if any). If the behaviour of the complex
was the same in all possible worlds then we would recover temporal supervenience
and hence totality and we would have restored conservative emergence. The action of
the complex would after all be predictable on the basis of the state of the elementary
submergent features constituting the complex. Thus if the emergents are to explain
the divergent behaviour of complexes, we cannot have strong supervenience.
Although completely mysterious from the point of view of fundamental physics,
the emergentists thought that the emergence of high-level features was nonetheless
a part of the natural order. Once we know that, for example, a particular chemical
property arises from the combination of certain basic physical entities, we can infer
that this chemical property will arise whenever we have this physical combination.
As Broad puts it: "No doubt the properties of silver-chloride are completely deter-
mined by those of silver and of chlorine; in the sense that whenever you have a whole
composed of these two elements in certain proportions and relations you have some-
thing with the characteristic properties of silver-chloride" (Broad 1925, p. 64). But
this relation is "a law which could have been discovered only by studying samples of
silver-chloride itself, and which can be extended inductively only to other samples
of the same substance" (Broad 1925, p. 65, my emphasis).
Thus it seems that since the emergence is not a product of T-laws acting by
themselves, there are T-worlds that differ with respect to the emergents attendant
upon the same T-state (this is a variation that does not violate any T-laws). But at the
same time, within these worlds we can generalize the emergence of emergent features
across all intra-world instances of indiscernible T-states. This is an exact statement of
a claim of weak supervenience of the emergent features upon T-features. The situation
is illustrated in Fig. 7.5. And we can then define radical emergence as follows:
D18. U is radically emergent from T =df U weakly supervenes upon T, where T is a
non-total theory.
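For definiteness, the weak supervenience clause in D18 can be given its standard formalization (quantifying over states within a single world $w$, with $\sim_T$ meaning indiscernibility with respect to T-properties):

$$\forall w\ \forall s, s' \in w\ \bigl(s \sim_T s' \rightarrow s \sim_U s'\bigr)$$

Cross-world variation in the emergents is thereby left open, which is exactly what distinguishes weak from strong supervenience.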
There is an interesting and perhaps rather attractive symmetry of causation in
radical emergence that is lacking in doctrines that espouse totality, and endorse con-
servative emergence. The failure of totality under radical emergence is explicable in
terms of a very strong form of top-down causation. Totality will fail when complex
systems fail to act in the ways they should act if their behaviour was entirely generated
by the interactions of their constituents according to the fundamental laws governing
those constituents and their interactions.

[Fig. 7.5 Radical emergence and weak supervenience]

Our label for such failure is "divergence",
so we can say, in short, that divergence is, or ought to be, explicable by top-down
causation. Now, as noted above, the divergence of complex systems as described by
high-level theories is commonplace, and such divergence is explicable by bottom-up
causation; we expect that high-level generalizations will fail because of intrusions
from below of effects stemming from lower-level processes or structures, as when
a computer program outputs erroneous data because of a cosmic ray hit on a memory
chip or a person contracts cancer because an ultra-violet photon has damaged some
DNA in one of their constituent cells.
Radical emergence entails that there will be exactly analogous "intrusions from
above" as well: genuine top-down causation, as opposed to the merely apparent (or
at least entirely explicable in low-level terms) top-down causation found in total theories. When
there is radical emergence, complex systems, described in terms of low-level theory,
will suffer from effects stemming from higher-level processes or structures, effects
which are simply not predictable solely from the low-level state of the systems
and not fully determined by them (i.e. not determined with the strength of strong
supervenience).
Another interesting feature of radical emergence is that it tends to conspire to
give an illusion of totality. That is, radical emergence of U from T entails weak
T-temporal supervenience (up to intrinsic randomness of T). Thus, within a world,
T-complexes that are indiscernible all act exactly the same (or, at least, generate
the same behavioural statistics). Such a world could look like it was T-total and
encourage the search for a total T-theory. A further, rather bizarre and highly
speculative, possibility is that such a total theory could perhaps, given sufficient ingenuity, be
found despite the existence of genuine radical emergence. The theory would be false,
but not testable. One warning sign of such a situation might be the multiplication
beyond plausibility of potential energy fields (of the sort discussed above) required
to handle multi-component interaction. More likely, the very complexity of those
T-systems in which radical emergence might be found would rule out any test of
emergence. The systems of interest would just be too far from the T-constituents for
any calculation based solely upon fundamental T-laws of how they should behave to
be feasible. That is, of course, the situation we are in and shall remain in.
The issue of testability could become more contentious if it should turn out that
the mathematical details of our best fundamental theory rule out not only ana-
lytic solutions of critical equations (a situation we are already in, as discussed in
Chaps. 5 and 6) but also simulatability.28 It is worth remembering that the totality of
physics is not practically testable for the simple reason that the instruments used in
physical experimentation are themselves highly complex physical entities for which
the hypothesis of radical emergence would, technically, have to be ruled out. The
properties of the most basic physical entities are the very ones whose discovery and
verification require the most complex instruments, such as particle accelerator
complexes, as well as extremely long historical chains of experimental inference
which necessarily involve myriads of highly complex instruments. If it should turn
out that certain complex and actual physical systems required for the testing of basic
theory are in principle unpredictable because of certain mathematical limitations
then it may be that the totality of physics is simply not a testable hypothesis at all.29
The contrast between conservative and radical emergence can also be expressed
in a familiar theological metaphor. Imagine God creating a world. He decides it shall
be made of, say, quarks and leptons and a few bosons that, in themselves, obey certain
laws. But He has a choice about whether His new world shall be total (relative to these
elementary constituents) or not. That is, He must decide whether or not to impose
serious laws of emergence on top of the properties of the basic entities. Either way,
a world appears, but the worlds are different. Which world are we in? It is impossible
to tell by casual inspection and perhaps impossible to tell by any experiment, no
matter how idealized. Thus it may be that radical emergentism cannot be ruled out
by any empirical test whatsoever, and thus it may be that we live in a world of radical
emergence.
Chapter 8
Generalized Epiphenomenalism

8.1 Universal Physical Resolution

I want now to turn to the question of how we should understand the status of
conservatively emergent phenomena. This is important because, as I have argued,
the natural understanding of both the structure of current physical science and its evi-
dent goals suggest that the only acceptable form of emergence will be conservative
or epistemological.
I aim to show that a common and plausible interpretation of what science tells us
about the fundamental structure of the world (the scientific picture of the world,
or SPW for short) leads to what I'll call generalized epiphenomenalism, which is
the view that the only features of the world that possess genuine causal efficacy are
fundamental physical features. I think that generalized epiphenomenalism follows
pretty straightforwardly from the SPW. At first, it might seem that generalized epiphe-
nomenalism is fairly innocuous, since its threat is too diffuse to provoke traditional
worries such as those about the putative epiphenomenal nature of mental states.1 If
mental states are epiphenomenal only in the same sense that the supposed powers of
hurricanes, psychedelic drugs or hydrogen bombs are epiphenomenal, then probably
there is not much to worry about in the epiphenomenalism of the mental. I agree that
the epiphenomenalism of hurricanes and the like is manageable, but it will turn out
that ensuring this manageability requires that mental states have an ontological status
fundamentally different from that of hurricanes, drugs and bombs, a status that is
in fact inconsistent with the SPW. So I'll argue that generalized epiphenomenalism
does have some seriously worrying consequences after all.
The SPW takes as its starting point the modern naturalistic conviction that the basic
structure of the world can be discovered by scientific investigation and that there is
no ground for positing a metaphysical understanding of the world distinct from a sci-
entific understanding (a slogan: "fundamental science is metaphysics with numbers").
As discussed in Chap. 7, three interlocking features seem of central importance to
the SPW: completeness, closure and resolution. Focusing on physical science, com-
pleteness is the doctrine that everything in the world is physical and as such abides by
closure and resolution. Closure entails that there are no outside forces: everything
that happens, happens in accordance with fundamental physical laws so as to comply
with resolution. Resolution requires that every process or object be resolvable into
elementary constituents which are, by completeness, physical and whose abidance
with laws governing these constituents leads to closure.2
Take anything you like: a galaxy, a person, a flounder, an atom, an economy.
It seems that anything can be resolved into the fundamental physical constituents,
processes and events which determine its activity. Indeed, our best theory of the
creation of the universe, as outlined in Chap. 2, maintains that at very early times after
the big bang the universe was quite literally resolved into its elementary constituents.
At that time the universe consisted of an extremely hot, highly active sea of quarks
(that would later, after sufficient cooling, combine into more familiar composite
particles such as protons and neutrons), leptons (including the electron needed for the
later (after still more cooling) combination of protons and neutrons into elements,
that is, chemical kinds) and elementary force exchange bosons. It is these which, to
speak roughly and to use an evocative expression of Morgan, provide the "go" of the
universe, driving it from state to state.3 Completeness, closure and resolution and
their inter-relations are concisely expressed in the startling thought that the universe
is running entirely and solely upon the interactions of these elementary constituents
no less today than when it was 10⁻³⁷ s old.
It is crucial to emphasize that the SPW is a metaphysical, not an epistemological
doctrine. It does not concern itself with how or whether we could understand every-
thing in terms of full resolution. In fact, such understanding is quite impossible, for
reasons of complexity (of various sorts) that the SPW itself can spell out. Innumer-
able immensely difficult questions arise at every stage of resolution,4 and there is no
practical prospect whatsoever of knowing the full details of the physical resolution
of anything much more complex than even such an apparently simple object as a
proton.
The metaphysical picture is nonetheless clear. And since the world has no need
to know the details but just runs along because the details are the way they are, the
problems we have understanding complex systems in terms of fundamental physics
are quite irrelevant to the metaphysics of the SPW.
We can extend the philosophical model of computational simulation used in
Chap. 5 to better reveal how completeness, closure and resolution are supposed to
work, and which might also serve as a test of one's attitude towards the SPW.5 Call
the model the superduper computer simulation thought experiment. It goes like this.
Imagine the day when physics is complete. A theory is in place which unifies all the
forces of nature in one self-consistent and empirically verified set of absolutely basic
principles. Not too long ago there were some who saw this day as perhaps drawing
near (e.g. Hawking 1988; Weinberg 1992). Optimism seems to have somewhat fallen
off of late however. No matter. I am talking of a perhaps very distant possible future
or world. Of course, the mere possession of this theory of everything will not give
us the ability to provide a complete explanation of everything: every event, process,
occurrence and structure. Most things will be too remote from the basic theory to
admit of explanation in its terms; even relatively small and simple systems will be
far too complex to be intelligibly described in the final theory.
But seeing as our imagined theory is fully developed and mathematically com-
plete it will enable us to set up detailed computer simulations of physical systems.
The range of practicable simulations will in fact be subject to the same constraints
facing the explanatory use of the theory; the modeling of even very simple systems
will require impossibly large amounts of computational resources. Nonetheless, pos-
session of a computational implementation of our final theory would be immensely
useful. Real versions of something very like my imaginary scenario now exist and
are already fruitful. For example, there are computer models of quantum chromo-
dynamics that can compute the theoretically predicted masses of various sub-atomic
particles in terms of their constituent quarks (see Dürr et al. 2008; for a more popular
presentation of an earlier calculation see Weingarten 1996). The looming problem
of computational intractability is all too evident, for realizing these calculations
required, in the 1996 attempt, the development of special mathematical techniques,
the assembling of a dedicated parallel supercomputer specially designed for the nec-
essary sorts of calculations (a computer capable of eleven billion arithmetical oper-
ations per second) and roughly a year of continuous computing. Weingarten reports
that a special 2-year calculation revealed the existence of a previously unrecognized
particle, whose existence could be verified by examining past records from particle
accelerator experiments. The later effort reported in Dürr et al. (2008) had access
to vastly more powerful computational resources; their computer could crank out
two hundred thousand billion operations per second but since their modeling was
much more detailed they still required a year's worth of computer time. Modeling
the interactions of particles would be a much more challenging task, suggesting to
the imagination computational projects analogous to the construction of medieval
cathedrals, involving thousands of workers for many decades.6

8.2 Superduper Simulation

Now I want to introduce a thought experiment that flatly ignores the inevitably
insuperable problems of computational reality. Imagine a computer model of the
final physical theory which has no computational limits (we can deploy as much
memory as we like and compute for as long, or as fast, as we like). Further, imagine
that detailed specifications of the basic physical configuration of any system, at any
time, in terms appropriate for the final theory, are available. If the configuration
of any physical system is specified as input then the output configuration of the
system, for any later time, can be calculated (and appropriately displayed). If the final
theory should turn out to be non-deterministic (unlikely as that may seem, given that
quantum mechanics, which seems to form the basis of any final physics we can now
envisage, provides for the completely deterministic evolution of the wave function
of any system7 ) then we can permit multiple simulations to run simultaneously,
thus to duplicate the statistics to be found in the real world. Now, there is nothing
incoherent in the idea of an absolutely perfect simulation. In fact, we might have
some of them in physics already. The Kerr equations for rotating black holes are
(if the general theory of relativity is true), absolutely perfect models of these strange
objects. Recall Chandrasekhar's confession that "in my entire scientific life ... the
most shattering experience has been the realization that an exact solution of Einstein's
equations of general relativity, discovered by the New Zealand mathematician Roy
Kerr, provides the absolutely exact representation of untold numbers of massive black
holes that populate the universe" (as quoted in Begelman and Rees 1996, p. 188).
In fact, we know that Chandrasekhar should not have been quite so shattered, for
general relativity does not provide the complete description of any actual black hole.
There are known quantum effects, such as Hawking radiation, and presumably only
the long hoped-for über-theory which will unite the realms of relativity and the
quantum will provide the exact representation of the objects we call black holes.
However, even if certain physical systems allow of perfect simulation, we are not
so lucky with the rest of the world in general, and so even within our dream certain
approximations in the input configurations will have to be allowed. We cannot input
field values for every point of space-time and it is conceivable that some configura-
tions require an infinite amount of information for their specification if, to give one
example, certain parameters take on irrational values which never cancel out during
calculation. Let us therefore imagine that we can input specifications of whatever
precision we like, to allow for modeling the system for whatever time we like, to
whatever level of accuracy we desire. Even though it is not physically realizable, I
think the idea of such a computer program as a thought experiment is perfectly well
defined.8
So, let us imagine a computer simulation of a part of the world. Step one is to
restrict our attention to something we call simple: a bob on a spring on the moon,
say. The simulation covers a restricted region of space and time (though the program-
mer would have to set up boundary conditions that represent the influence of the
rest of the world), and must be defined solely in terms of the values of fundamental
physical attributes over that region. The programmer is not allowed to work with
gross parameters such as the mass of the bob or the stiffness of the spring, or the
gravitational force of the moon, but must write her code in terms of the really basic
physical entities involved. (It might help to imagine the code written in terms of
the properties of the atoms of the pendulum, its support structure and moon, though
these are themselves not really physically basic.) The SPW predicts that the output
of this computer simulation, appropriately displayed, would reveal a bob bouncing
up and down, suspended above the lunar surface.
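To make the flavour of such a resolution-level simulation vivid, here is a minimal toy sketch (all parameter values are invented for the example, and point masses joined by pairwise Hooke forces stand in, very crudely, for the real microphysical constituents). Nothing in the code mentions the stiffness of the spring as a whole, or "the bob" as such; the bouncing-bob pattern is generated by particle-level forces alone:

```python
import numpy as np

# Toy "resolution" simulation: only point masses, pairwise bond forces and
# uniform gravity are coded in. The macroscopic bob-on-a-spring behaviour
# is nowhere represented in the code; it emerges in the output.

N = 20                      # number of bonds in the "spring"
k_pair = 50.0               # stiffness of each microscopic bond (invented value)
m = np.full(N + 1, 0.01)    # light "spring" particles...
m[-1] = 1.0                 # ...plus one heavy particle (the "bob")
g = 1.62                    # lunar surface gravity, m/s^2
rest = 0.05                 # rest length of each bond, m
dt = 1e-4                   # integration time step, s

y = -rest * np.arange(N + 1, dtype=float)   # hang straight down from y = 0
v = np.zeros(N + 1)

def forces(y):
    f = -m * g                        # gravity on every particle
    ext = (y[1:] - y[:-1]) + rest     # compression/extension of each bond
    fb = -k_pair * ext                # Hooke force in each microscopic bond
    f[1:] += fb                       # ...pulls the lower particle up
    f[:-1] -= fb                      # ...and the upper particle down
    f[0] = 0.0                        # the top particle is clamped
    return f

for step in range(200000):            # velocity Verlet integration
    v += 0.5 * dt * forces(y) / m
    y += dt * v
    y[0] = 0.0                        # keep the anchor fixed
    v += 0.5 * dt * forces(y) / m
    v[0] = 0.0
    if step % 20000 == 0:
        print(f"t={step * dt:6.2f} s   bob height={y[-1]:+.4f} m")
```

Run it and the printed bob height oscillates about a sag point, with a period fixed by facts (the effective stiffness of the whole chain) that appear nowhere in the code.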
Step two is to up the ante. Now imagine a simulation of a more complex situation,
for example a father and child washing their dog, in their backyard on a lovely
sunny day. Do you think the simulation would mimic the actual events? I venture to
maintain that our current understanding of the structure of the world very strongly
suggests that such a simulation, if only it were possible, would re-generate both
the action of the pendulum and the behaviour of the father, child and dog (along with
tub, water, water droplets, soap, sunlight, etc.).
Although something of a digression, it is worth considering the details of such
simulations a little more closely. The thought experiment is of course outrageously
idealized. We assume unlimited (but finite) memory and allow unlimited (but finite)
processing time (or, indefinitely, but finitely, fast computing machinery). Even rela-
tive to such generous constraints, there are questions about the feasibility of such
simulations in general. For example, many extant theories can allow mathematical
singularities to arise (as in the formation of a black hole) or suffer from other math-
ematical pathologies. Technical problems also abound which are exacerbated by the
need for computationally efficient algorithms (these are not a concern for the thought
experiment of course). While some of the mathematical models of ideal physical sys-
tems have exact solutions (e.g. the frictionless pendulum) almost all such models lack
exact analytic solutions and thus have to be tackled by numerical approximation. The
field of applied mathematics that deals with numerical approximations of differen-
tial equations is exceedingly complex, far transcending the elementary discussion
given above in Chap. 6 (for an overview see Thijssen 1999). It is known that not
all systems can be simulated if we require the simulation to obey certain otherwise
apparently desirable mathematical constraints corresponding to physical traits such
as energy conservation which govern the real world process being simulated (see Ge
and Marsden 1988; Umeno 1997). The general question whether any given theory
is simulatable either fully, partially or for all physically realistic models cannot be
answered apart from a detailed study of the theory in question.9
As discussed in Chap. 5, it is perhaps also possible that nature transcends the
Turing Limit, that is, can only be described in terms of uncomputable functions.
This adds another layer of uncertainty about whether a digital computer (equivalent
to a universal Turing machine) can simulate all the mathematical functions which cor-
rectly and completely describe nature. Nature might use, so to speak, uncomputable
functions in getting the world to move from state to state. A very simple example of
an uncomputable function is the function E(x, y) defined as E(x, y) = 1 if x = y
and E(x, y) = 0 if x = y. Even if x and y range over computable real numbers, E
is not computable (since youd have to check an infinite number of digits to verify
the identity of two real numbers). If nature uses real numbers in ways that make
the evolution of some system depend closely (e.g. chaotically) on the exact value of
E then our simulation may turn out to be impossible.
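The point about E can be made concrete in a short sketch (the representation of a computable real as a digit-stream function is one standard choice, adopted here only for illustration): inequality can be certified by exhibiting a differing digit, but equality can never be certified, only left undecided after any finite search.

```python
# A computable real can be given as a program returning its n-th decimal digit.
# Comparing two such reals digit by digit can refute x = y in finite time,
# but can never confirm it: after any number of agreeing digits the streams
# might still diverge later. Hence E is not computable.

def one_third():             # 0.333...
    return lambda n: 3

def almost_one_third(k):     # agrees with 1/3 up to digit k, then differs
    return lambda n: 3 if n < k else 4

def find_difference(x, y, max_digits):
    """Return the position of the first differing digit, or None if none is
    found within max_digits -- in which case equality remains undecided."""
    for n in range(max_digits):
        if x(n) != y(n):
            return n
    return None

print(find_difference(one_third(), almost_one_third(10), 1000))  # -> 10
print(find_difference(one_third(), one_third(), 1000))           # -> None
```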
But notice that we don't necessarily have to restrict ourselves to Turing machines
(or the souped-up equivalents we all have on our desks and in our laboratories)
or purely digital simulations of continuous differential equations. If nature works
beyond the Turing limit then presumably we can build computers that exploit
that region of physics. One example of such an enhanced computer that has been
studied by theoretically minded computer scientists is what Turing himself called
Oracle machines (see above Chap. 5, n. 11). These are Turing machines that can, at
well defined points during computation, call upon an oracle to provide the output of
an uncomputable function. No one knows for certain that oracle machines cannot be
built, but if nature uses uncomputable functions then nothing seems to prevent us
from in principle incorporating such aspects of nature into our computing machinery.
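The idea can be put in a few lines of pseudo-implementation (a sketch only; the oracle below is a deliberately unsound stand-in, since a genuine halting oracle is precisely what no program can supply):

```python
# Sketch of an oracle machine: an ordinary algorithm that, at well-defined
# points, consults a black box for the value of an uncomputable function.
# Nothing in the calling code depends on how the oracle gets its answers.

def halts_on_all(program_source, inputs, halting_oracle):
    """Decide whether the program halts on every input, using one oracle
    query per input -- a question no unaided Turing machine can decide."""
    return all(halting_oracle(program_source, i) for i in inputs)

# Stand-in for demonstration only; a real oracle would have to exploit
# hypothetical beyond-Turing physics.
def pretend_oracle(program_source, input_value):
    return "while True" not in program_source   # crude and unsound, of course

print(halts_on_all("print(42)", range(3), pretend_oracle))          # True
print(halts_on_all("while True: pass", range(3), pretend_oracle))   # False
```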
For the argument advanced in this chapter, the crucial constraint involves resolution.
The simulation must operate solely over the basic constituents of nature and only
over the properties of those constituents. The importance of this restriction has to do
with the notion of emergence, to be discussed immediately below.
The simulation thought experiment can be used to provide a simple and clear
characterization of emergence. An emergent is anything that is not coded into the
simulation. Thus a thunderstorm is an emergent entity since, I take it, we would not
need, in addition to coding in the quarks, leptons and bosons and their properties, to
add thunderstorms as such to our simulation code. Temperature would be an example
of an emergent property (thermodynamical properties in general would be emergent
properties), as would be such features as being hydrogen (chemical properties in
general would be emergent properties), being alive (biological properties would in
general be emergent properties), etc. This idea is hardly new. Dennett has a famous
example of a chess playing computer that liked to "get its queen out early" (see
Dennett 1978a). There is no get-the-queen-out-early code in the chess program.
Another example is Douglas Hofstadter's amusing tale of desperate computer users
requesting that the "thrashing number" be raised (thrashing being when the user
load on a server overwhelms its multitasking capacity and memory, causing it to
descend into seeming paralysis). There is of course no such value written anywhere
in the machine's code that is the thrashing number; this value just emerges (see
Hofstadter 1986).
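The same point can be shown rather than told with a standard toy (not an example from Dennett or Hofstadter themselves): in the Game of Life program below, only cells and one local update rule are coded in, yet a "glider", a pattern that crawls diagonally across the grid, emerges in the output. The word "glider" appears nowhere in the simulation code:

```python
import numpy as np

# Conway's Game of Life: the code mentions only cells and a local rule.
def step(grid):
    # count the live neighbours of every cell (toroidal wrap-around)
    n = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # a cell is alive next step iff it has exactly 3 live neighbours,
    # or is alive now and has exactly 2
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # seed one glider
    grid[r, c] = 1

for t in (0, 4, 8):     # a glider repeats its shape, displaced, every 4 steps
    print(f"t = {t}")
    print("\n".join("".join(".#"[v] for v in row) for row in grid))
    for _ in range(4):
        grid = step(grid)
```

The glider is there to be seen in the output, but the simulation neither contains nor needs any representation of it; it is, in the sense just defined, emergent.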

8.3 Towards Epiphenomenalism

As we have seen, although the founders of the philosophical doctrine of emergentism
(Mill, Lewes, Morgan, Alexander, Broad) would have agreed with my examples as
examples of emergent features, they wanted more from their emergents. They wanted
their emergents to have an active role in the world, not a merely passive or derivative
role. That is, they believed in radical emergence in addition to the unexceptionable
conservative emergence. The world, according to these emergentists, goes differently
because of the presence of emergents. It does not behave simply in the way it would if
the properties of the basic physical constituents of the world were solely efficacious
and the emergents were simply conglomerations of basic physical entities obeying
the basic laws of physics. In deference to their desires, we have distinguished a
conservative, or epistemological, from a radical, or ontological, emergence. In terms
of our simulation thought experiment, radical emergence is the claim that, despite
being an entirely accurate representation of the basic physical constituents of the
world, the simulation will not render an accurate simulation of the development of
the whole world as these basic constituents combine and interact to form ever more
complex structures. That is, despite the accuracy of the simulation with regard to
basic physical features, we will have to code into our simulation additional features
that come into play only when certain combinations of the basic features appear.10
The emergentists favourite example of what they took to be a relatively straight-
forward, uncontroversial example of radical emergence was chemistry. They took
it to be the case that the theory of the atom simply could not account for all the
various chemical properties and chemical interactions found in the world. And they
meant this to be a metaphysical claim. They were well aware of the difficulties of
complexity which stood, and stand, in the way of fully understanding chemistry in
terms of physics, or feasibly predicting chemical properties on the basis of basic
physical properties. The emergentists were not merely noting these difficulties of
complexity. They were denying that chemistry resolves into physics (or that chemi-
cal entities resolve into physical entities) so as to obey closure. As Broad put it, even
a "mathematical archangel" could not deduce chemistry from physics. I have put our
simulation of basic physics in place of the angel, so the claim of radical emergentism
is just that the simulation will not provide an accurate representation of distinctively
chemical events. This is a kind of empirical claim. And though in a certain sense it
is untestable, the development of quantum mechanics has severely undercut the case
for chemistry being radically emergent. It now seems clear that chemical properties
emerge from the properties of the basic physical constituents of atoms and molecules
entirely in accord with closure and resolution. So-called ab initio methods in compu-
tational chemistry increasingly fund this confidence, as for example in this passage
unwittingly directly opposed to Broad's pronouncement: "ab initio computations are
derived directly from theoretical principles with no inclusion of experimental data"
(Young 2001, p. 19). And that is the sign of conservative emergence. The claim that
conservative emergence exhausts all the emergence there is in the world is simply the
claim that all features not coded into the simulation are subject to resolution under
closure.
Of course, this is not to say that the way chemistry conservatively emerges from
more basic physical features follows anything like Broads conception of the purely
compositional, mechanistic growth of complexity. We know that the relation of emer-
gence in this case (and many others) is much more interesting insofar as it depends
on the peculiar features of quantum mechanics. It may be that chemistry provides
an illustration of what Paul Humphreys calls "fusion emergence" (see Humphreys
1997b and for a dissenting view with regard to chemistry see Manafu 2011). But, as
discussed above in Chap. 6, fusion emergence (and quantum mechanics in general)
does not seem to take us outside the boundaries of conservative emergence.
Thus the SPW, with its endorsement of the totality of fundamental physics (that is,
its completeness, closure and resolution) asserts that all emergence is conservative
emergence. Now, does conservative emergence entail generalized epiphenomenal-
ism? The threat is clear. Conservatively emergent features have no distinctive causal
job to do; whatever they do is done through the agency of the basic physical features
that subvene them. But that does not directly entail epiphenomenalism. The SPW
does not impugn the existence of conservatively emergent features so one might
suspect that some notion of "supervenient causation" (perhaps the kind of weak
efficacy granted in Chap. 7) will suffice to grant efficacy to these emergents. And let
me emphasize yet again that conservatively emergent phenomena have indispensable
explanatory jobs and there is no prospect of or desire for their elimination.
Nonetheless, I believe that generalized epiphenomenalism does follow from the
SPW. There are at least three strong arguments for this conclusion, which I'll label
the economy argument, the screening-off argument and the abstraction argument.11

8.4 The Economy Argument

I take it that, as mentioned above, causation is a metaphysical relation, and that in
particular it is the relation that draws forth one event from another across time, or that
determines one state as the outcome of a previous state, or that constrains nature to
only certain sequences of states.12 This is no definition and I do not want to prejudge
issues about the possibility of backwards causation or about whether the relata of the
"causes" relation are necessarily limited to events as opposed, for example, to states
or objects. I want only to draw attention to the central idea that causation is the "go"
of the world (to recall the vivid expression of Morgan); it is causation that drives the
world from state to state. The issue of concern to me is, so to speak, how much "go"
there is in the world or how widely it is distributed throughout the world. Morgan was
clear about how widely he wanted to spread the "go": "There are physico-chemical
events, as such; there are vital or organic events, as such; there are conscious events,
as such. All are integrated in the effective go of the system as a whole" (Morgan
1923, p. 131). I think to the contrary that the SPW forces all the "go" of the world into
a very confined space.
The definite question I want to address is this: is there, from the point of view of the
scientific metaphysics outlined above, any need to posit causal efficacy at any level
above that of fundamental physics, or is all of the "go" lodged at the metaphysical
root of the world? This question must be sharply distinguished from the question
whether we need to deploy theories of (or descriptions of) levels of reality far higher
than those described by fundamental physics in order to predict occurrences in the
world, to explain what happens in the world and to understand or comprehend what
is happening around us. I think it is obvious that we require high-level theories or
descriptions for these essentially epistemic tasks. You are not going to understand
why a square peg won't fit in a round hole in terms of the fundamental physics
governing the constituents and environs of peg and hole. But that by itself does not
entail that we need to posit any causal efficacy to "square pegged-ness" or "round
holed-ness". No less evident than the need for high-level descriptions to understand
this relationship, is that the fundamental physics of the relevant constituents is all
you need to ensure that, as a matter of fact, the square peg just won't go into the
round hole.13
The superduper computer simulation thought experiment is supposed to draw this
to our attention. Imagine the fundamental physics simulation of peg approaching
hole. There is no need to code into the simulation anything about squareness or
roundness, or whether something is a peg and something else is a hole, or that the
peg is moving towards the hole or anything else at a level of description above that
of fundamental physics. Nonetheless the world of the simulation reveals that the peg
won't go through the hole. How can that be if there really is some kind of genuine
causal efficacy to the peg's being square or the hole's being round? It would seem
reasonable to suppose that if you leave some genuine efficacy out of your simulation,
it wont manage to remain similar to the world where that missing efficacy resides
and has its effects (recall our definition of radical emergence versus conservative
emergence here). Leaving out of the simulation features that make a genuine causal
contribution to the evolution of the world's state ought to cause the simulation to drift
out of synchronization with the real world. But, by our hypotheses of completeness,
closure and resolution, no high-level features are ever needed to get our simulation
to accurately duplicate the world.
Of course, if you regard causation as a non-metaphysical relation and instead think
of it as some kind of explanatory or fundamentally epistemic notion then I will happily
grant you its existence in the high-level features. Further, I suppose, it is then actually
missing from the low-level fundamental features that are too complex, particular and
bound to specific contexts to explain things like why square pegs wont go in round
holes. It is not implausible to think that our commonsense notion of causation is
rather unclear about the distinction between epistemology and metaphysics (and this
confusion might account for much of the trouble we have making sense of such
things as the causal relevance of, for example, mental properties).14
But whatever the proper analysis of causes may be, there remains the meta-
physical question of where the go of the world resides and how much of it has to
be posited to get the world going the way it actually does go. What I mean can be
clarified by a simple example which highlights the difference between the metaphys-
ical and explanatory aspects of causation. We know that smoking cigarettes causes
lung cancer. Although this is a correct statement of an explanatory causal relation-
ship, it does not pinpoint the fundamental features of the world that bring about this
relationship. In any particular case of cancer caused by smoking there will be some
underlying events whose own causal relationships will account for the gross link we
are interested in. In any particular case of cancer, there will have been a chemical
change in the DNA in some cell which led to that cell becoming cancerous and
where the change involved was itself conditioned by certain specific chemicals in
cigarette smoke. This chemical process accounts for the observed causal link. Nor
are such chemical processes immune to a similar explanation in terms of yet more
fundamental causal relationships.
Thus explanatory causation has an obviously hierarchical but also promiscuous
structure: it is a multi-inter-level phenomenon. By calling explanatory causation
promiscuous I mean that causal explanations happily invoke relations between any
levels in the hierarchy of natural structure.15 The hierarchical nature of causation is
what underlies the hierarchical structure of nature, which we discover through the
applicability of high level theories, through our conscious perception of the world and through our
commonsense categorizations. The existence of high level structure anything like we
find in our world is not a metaphysical necessity. There seems nothing incoherent, for
example, in the idea of a universe that consists of nothing but diffuse hydrogen gas.
The existence of high level causal relations is crucial for the existence of high level
structure in general, for merely random or arbitrary conjunctions of basic physical
features are not likely to admit of a theoretical treatment of their own. Nor are they, in
general, perceptible as objects. Thus the set of all objects with mass between ten and
twenty kg and positioned less than 10 km from me is not susceptible to a distinctive
theoretical treatment as such (though of course the objects in this perfectly well
defined set fall under a large number of distinct theories and common categories).
Explanatory causal relations form their own high level theoretical structure, which
explains why we can discover them within the domain of high level causation. The
commonsense or folk theory of causation involves a host of principles and rules
of thumb, including maxims such as "causes precede effects", "association tends to
imply causality" or that, all else equal, "a cause suffices to bring about its effect". These
commonsense ideas can be refined without any need to inquire into the basic physical
mechanisms underlying any particular level of causal relationship, especially via
Mill's famous methods (introduced in Mill 1843/1963, Bk. 3, Chap. 8). The methods
are level independent, and while they are normally used to discover causal relations,
we can look at them backwards, so to speak, to see a domain of causal relations
from the point of view of some high level theory. For example, the theory of plate
tectonics validates its ontology of crustal plates, mid-ocean rifts, subduction zones,
etc. by showing how these things fit into a system of causal relations mappable via
Mill's methods.16
Naturally, we humans have invented our theories to enable explanation and pre-
diction. A good theory is one which supports the discovery of explanatory causal
relationships, by way of such methods as noted above. The reason we are interested
in causal relations at all is that they provide a basis for understanding why events
occur as they do, and offer the hope of our being able to predict and sometimes
control events of particular interest to us.
But of course we can ask whether, and to what extent, the vastly intricate systems
of causal explanation we have developed and applied to the world with such evident
success over the last few centuries reflect the ontologically basic causal ordering (or
fundamental constraints) of the world. The answer cannot simply be read off the
success of our theorizing. We can identify two core features of the explanatory aspect
of causation that will help to answer this question.
Explanations and predictions are offered to satisfy human desires, both practical
and more purely epistemic, and thus necessarily depend on our being able to com-
prehend the frameworks within which such explanations and predictions are offered.
Causal explanation is also level-promiscuous, in the sense that any level of theory (or
causal structure) can be appealed to in explanation, and inter-level connections can
also be deployed. Thus, for example, in the explanation of atmospheric ozone deple-
tion and its effects, many levels are invoked. These include atmospheric dynamics,
human preferences for refrigeration, swimming and sun-bathing, the thermodynam-
ics of compression, condensation and expansion of gases, fairly low level chemistry
of the interaction of chlorine and oxygen, and still lower level physics in the account
of why ultraviolet light appears in the solar spectrum along with its typical harmful
effects.17
We can label these two features of explanatory causation (1) the comprehensibility
condition and (2) the level promiscuity condition. True metaphysical causation is
most certainly not subject to (1), but what about (2)? As we have already noted,
high level causal relations have at least some dependence upon lower level causal
relations. A natural and important question to ask is whether this dependence is total
or, in other words, whether high level causal structure supervenes upon low level
structure. The characteristic claim here is that any two possible worlds that agree on
their lowest level causal structure will also agree about all high level causal structure.
As we have frequently emphasized, it is by no means obvious that such a super-
venience claim is correct. Some philosophers have disputed it. Morgan's (1923)
emergent evolution and Broad's (1925) emergentism both clearly denied the causal
supervenience claim in favor of there being genuine inter-level causal relations. These
emergentists postulated two classes of laws: intra- and inter-level laws (Broad called
the latter "trans-ordinal" laws). The intra-level laws are more or less standard laws
of nature which, from the explanatory point of view, would be confined to a single
science. The inter-level laws, or at least some of these laws, are laws of emergence
and dictate novel causal powers that come into being upon the creation of certain
lower level structures. Both Broad and Morgan saw chemistry as a prime example
of this kind of emergence and the confidence engendered by this supposedly uncon-
troversial example encouraged them to extend the domain of radical emergence.
Unfortunately for them, the codification of quantum mechanics after 1925 provided
strong evidence that chemical properties supervene on underlying physical features,
and that chemistry does not require the postulation of any novel causal powers (see
McLaughlin 1992). But as they saw it, chemical structures, such as H₂O, lawfully
form from lower level entities but the properties of water are not merely the resultant
of the causal powers of these atomic constituents and their low-level interactions.
Rather, novel properties with distinctive causal powers emerge when hydrogen and
oxygen come together.
However, such ontological emergence is itself a lawful phenomenon expressing
the inbuilt inter-level structure of our universe. In terms of our characterization
of ontological causation in terms of constraints upon allowable state sequences, the
emergentists postulated that emergent features impose further or extra constraints
beyond those imposed by the underlying features. Put in terms of our discussion in
Chap. 7, radical emergents stand in a form of weak supervenience to their subvening
features and processes. That is, the emergents can be different across possible worlds
that are indiscernible with respect to the relevant underlying features.
But while radical emergence is a coherent position, we have seen abundant evi-
dence in Part I of this book why the SPW rejects it. Emergence is nonetheless an
extremely important feature of the world, but it is limited to conservative or epis-
temological forms. Hierarchical and promiscuous causal explanation depends on a
rich system of conservatively emergent features without which we would have no
intelligible account of the structure and dynamics of the world. I cannot deny that
the primary sense of the word "cause" and its cognates may well be this epistemic or
explanatory sense, but I also think that the metaphysical question about the "go" of
the world remains, and remains philosophically pressing.
So, if you are inclined to think that causation is primarily an epistemological or
explanatory relation, or that both metaphysical and epistemological notions jointly
constitute our concept of causation, I won't argue about the word. Define "kausation"
as the metaphysical relation between events (or whatever the appropriate relata may
be) that drives the world forward or constrains state sequences. Our question then
is whether high-level features have any kausal efficacy; the metaphysical question
remains as pressing as ever. (In what follows, however, I generally keep to the "c"
spelling to spare the reader's sensibilities.)
We ought not to multiply entities beyond necessity. In the particular case of the
metaphysical question of where causation works in the world and how much of it
there is, we ought to posit the minimum amount, and the simplest nature, necessary to
obtain the phenomena we seek to account for. The phenomena are just the events that
make up our world (at any level of description). The minimum amount of causation
we need to posit is causation entirely restricted to the level of fundamental physics.
This follows from closure, completeness and, especially, resolution. Fundamental
physics (at the moment) suggests there are, currently active, four forces (weak, strong,
electromagnetic, and gravitational) whose concerted exertions (within a backdrop of
spacetime and quantum fields) generate all the variety we can observe in the world
at large.18 Crudely speaking, our superduper simulation requires only the simulation
of these forces (and fields) in spacetime to provide a total simulation of the world at
every level. This is fairly obvious if we think of simulating the world when it was
just a few nanoseconds old; completeness, closure and resolution entail that it is no
less true of simulations of the current universe (or parts thereof).
The SPW with conservative emergence sees the world as structured into roughly
demarcated levels. It is enough that such demarcation be rough and imprecise
because nothing of deep metaphysical import hangs on it. Broadly speaking, a level
of reality exists when a family of properties forms a distinct system, which typically
involves a set of lawful regularities, causal (not kausal) relations and the like and
where the system is more or less autonomous, or relatively insulated against distur-
bances generated by properties outside the system. It is telling, however, that this
autonomy is always subject to what I called in Chap. 7 "intrusions from below".
These levels are in essence what Dennett has called "patterns" (see Dennett 1991).
Patterns are structures, and relations amongst structures, that are visible from certain
viewpoints. Only from a viewpoint which encompasses the nature, goal and rules of
chess is a forced checkmate visible. It is from the viewpoint of chemical science that
the systems of chemical kinds and affinities are apparent, and such patterns might
not be visible from other viewpoints (even ones not too far distant, as for example
Paracelsus's vitalistic iatrochemistry, which is in the ballpark of our chemistry). But
although they are familiar, patterns are somewhat odd. They inhabit a curious zone
midway between, as it were, objectivity and subjectivity, for patterns are there to be
seen, but have no function if they are not seen.19 By the former, I mean that patterns
are not just in the eye of the beholder; they are really in the world (it is not optional for
us to decide that salt does or does not dissolve in water or that a checkmate is forced
in two moves) and they provide us with an indispensably powerful explanatory and
predictive grip upon the world. By the latter, I mean that the only role they have in
the world is to help organize the experience of those conscious beings who invent
concepts for them and then think in terms of these concepts. That is, although the
world is rightly described as exemplifying a host of patterns, the world itself, so to
speak, has no use for them. In terms of our thought experiment again, high-level
patterns do not need to be coded into the world-simulation in order to ensure the
accuracy of the simulation, and this is just because it is the fundamental physical
features of the world which organize the world into all the patterns it exemplifies and
they do this all by themselves, with no help from top-down causation.
Some philosophers, such as Kim, attempt to ground the causal efficacy of reducible
or functionally definable higher-order features via the claim that the causal powers
of an instance of a second-order property are identical with (or a subset of) the
causal powers of the first-order realizer that is instantiated on that occasion (Kim
1998, p. 116). But this won't work; first-order realizers are complexes of fundamental
features and thus, according to my argument, have in themselves no causal efficacy.
Everything they can do in the world is entirely the work of their own constituents.
Realizers are not fundamental but are themselves patterns which are picked out by
their relation to pre-existing high-level patterns (such as the elements of psychology,
economics, geology or whatever). The economy argument shows that there is no need
to suppose that realizers as such have any efficacy; if we imagine a world in which
they lack efficacy, the world proceeds just as well as an imagined world in which they
do have efficacy (unless, of course, we enter the realm of radical emergence, but we
are here trying to understand the commitments of the SPW). Metaphysical economy then
strongly suggests that we take our world to be the former world. In fact, realizers are
in a sense worse off than the unrefined descriptions of high-level theory, for at least
these latter have an explanatory role within their own theory whereas the realizers
are epistemically inaccessible and explanatorily (as well as causally) impotent. We
believe in them because we believe in completeness, closure and resolution but they
are, in most cases, very remote from the features they are supposed to realize and
can rarely take part in our explanatory projects.
Doubtless there is a harmless sense of top-down causation which is perfectly
acceptable and appropriate for use within pattern-bound explanations. For example,
researchers developing the scanning tunneling microscope were able to
manipulate individual atoms to form an ultra-tiny billboard spelling out the name of
their corporate sponsor, IBM (see Eigler and Schweizer 1990). Thus we can explain
the location of a particular atom by reference to the intentions of the operator of a very
macroscopic machine. But the SPW takes it that those very intentions are completely,
albeit elusively, accommodated within a vastly intricate web of micro-states which,
within their environment, push the target atom to its final location. Intentions, like
planets, animals and molecules, have no need to be specially written into the code
of the world-simulation.
Consider, for instance, the Coriolis force, which gunnery officers must take into
account when computing the trajectory of long-range cannon shells. The general
significance of the Coriolis force can be gathered from this resolutely realist passage
from the 1998 edition of the Encyclopedia Britannica:
The Coriolis effect has great significance in astrophysics and stellar dynamics, in which
it is a controlling factor in the directions of rotation of sunspots. It is also significant in
the earth sciences, especially meteorology, physical geology, and oceanography, in that the
Earth is a rotating frame of reference, and motions over the surface of the Earth are subject to
acceleration from the force indicated. Thus, the Coriolis force figures prominently in studies
of the dynamics of the atmosphere, in which it affects prevailing winds and the rotation of
storms, and in the hydrosphere, in which it affects the rotation of the oceanic currents.

This force is a conservatively emergent property of the Earth, or any other rotating
system, of evident usefulness in a variety of high-level descriptions of the world.
But in the context of assessing causal efficacy in terms of the fundamental physical
features of the world, it is highly misleading to say that the Coriolis force causes
diversions in, for example, a shell's trajectory. At least, if we really thought there
was such a force, one with its own causal efficacy, the world would end up being
a much stranger place than we had imagined. Just think of it: rotate a system and a
brand new force magically appears out of nowhere; stop the rotation and the force
instantly disappears. That is radical, brute emergence with a vengeance. Of course,
there is no need to posit such a force (and it is called, by physicists if not engineers,
a "fictitious" force, as is centrifugal force). The Coriolis phenomena are related to the
underlying physical processes in a reasonably simple way (in fact, simple enough for
us to comprehend directly), but, no matter the complexity, our imaginary computer
model of any rotating system would naturally reveal the appearance of a Coriolis
force.
The Coriolis force can serve as a more general model for the relation between
basic and high-level features of the world. The Coriolis force is an artifact of the
choice of a certain coordinate system. If you insist upon fixing your coordinates
to the surface of the Earth, you will notice the Coriolis force. If you take a more
natural, less geocentric, non-rotating coordinate system as basic, the force will never
appear (though artillery shells will of course still track a curved path across the
surface of the Earth). In general, high-level features are artifacts which arise from
the selection of a particular mode of description. If, so to speak, we impose the
chemical coordinate system upon ourselves, we will find peculiar chemical forces
at work (or we will find chemical properties to be apparently efficacious). If we
make the metaphysical choice of the more fundamental basic physical framework,
these distinctively chemical efficacies will as it were disappear (though, of course,
the phenomena chemists like to explain via co-valent bonds, hydrogen bonds,
etc. will still occur). The fact that these phenomena result solely from fundamental
physical activity suggests that no efficacy should be granted to chemical features as
such (rather as the Coriolis force should be seen as an artifact arising from a certain
point of view or choice of reference frame).
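
The point can be made concrete in a few lines of code. The following toy sketch (purely illustrative, with an arbitrary rotation rate and trajectory) simulates a force-free particle in an inertial frame, re-describes the very same straight-line motion in a rotating frame, and confirms numerically that the apparent acceleration in that frame is exactly the textbook Coriolis plus centrifugal terms, even though no such forces were ever coded into the dynamics:

```python
import numpy as np

# A force-free particle in an inertial frame moves in a straight line.
omega = 0.5                          # frame rotation rate (rad/s); arbitrary
t = np.linspace(0.0, 10.0, 1001)
x, y = 1.0 + 0.3 * t, 0.2 * t        # inertial trajectory: no forces coded in

# The same trajectory re-described in a frame rotating at omega:
c, s = np.cos(omega * t), np.sin(omega * t)
xr, yr = c * x + s * y, -s * x + c * y

# Apparent acceleration in the rotating frame, computed numerically...
dt = t[1] - t[0]
vx, vy = np.gradient(xr, dt), np.gradient(yr, dt)
ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)

# ...matches the 'fictitious' Coriolis + centrifugal terms:
ax_pred = 2 * omega * vy + omega ** 2 * xr
ay_pred = -2 * omega * vx + omega ** 2 * yr
print(np.allclose(ax[2:-2], ax_pred[2:-2], atol=1e-3))   # True
print(np.allclose(ay[2:-2], ay_pred[2:-2], atol=1e-3))   # True
```

Nothing about the particle changes between the two descriptions; the "force" appears and disappears with the choice of coordinates, which is precisely the sense in which it is an artifact.
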
Thus, the simplest and most natural interpretation of efficacy in the SPW restricts
efficacy to the fundamental constituents of the world. They are by themselves able
to generate all the variety to be found in the world. The high-level descriptions
of the world we find so useful are just that: useful (indispensably useful, even)
epistemological aids to beings with a certain explanatory and predictive agenda who
are smart enough to be able to pick out and/or formulate such descriptions. It would,
for example, be insane to forbid gunnery officers or pilots from thinking in terms of
the Coriolis force. But that cuts no metaphysical ice.
The economy argument strikes me as a powerful reason to accept generalized
epiphenomenalism. But there are other arguments as well.
8.5 The Screening Off Argument

It is often difficult to tell the difference between a cause and mere correlation.
Suppose we notice a correlation between, say, A and B, one in which B is a candidate
for causing A. A probabilistic symptom of such a situation is where P(A | B) >
P(A). That is, the presence of B increases the chances of obtaining A (as for example
the presence of cigarette smoking increases the chances of lung cancer). One way
to distinguish a mere correlate from a cause is the statistical property of "screening
off" (for a general discussion of the importance of this concept in issues of causality
see Salmon 1984). Essentially, the idea is to see if one can discover a feature distinct
from B that accounts for the correlation between A and B, and accounts for it in
such a way as to undercut B's claim to efficacy. We say that C screens off B from A
whenever:
(C1) P(A | C ∧ B) = P(A | C), and
(C2) P(A | C ∧ B) ≠ P(A | B)
What the first condition asserts is that the extra factor C destroys the statistical
relevance of B on the occurrence of A. Once we take into account C, B no longer
makes any difference to the probability of A happening. The second condition asserts
that C makes a difference, relative to B, for the occurrence of A.
We might say that, when (C1) and (C2) are met, C usurps the putative causal role
of B. The test is at best good evidence for denying efficacy to B, for it seems possible
that P(A | C ∧ B) might end up equal to P(A | C) just by accident. In such a case,
B might well be non-efficacious but the screening off test could not reveal this (there
would be a standoff with respect to the test).
Nonetheless, the test is often effective. A classic example is an apparent link
between cancer and consumption of coffee (this textbook example can be found
all over, but I have no idea whether anyone was ever fooled by it20). This spu-
rious correlation resulted from not distinguishing coffee drinkers who smoke from
those who do not smoke. Since coffee drinkers tend, more than non-drinkers, to
be smokers, a merely statistical correlation between coffee drinking and cancer
arose. The screening off test reveals this since the statistics end up as follows:
P(cancer | smoking ∧ coffee) = P(cancer | smoking) > P(cancer | coffee).
The weakness of the test is nicely revealed in this example too, for it is evidently
possible that absolutely every coffee drinker should be a smoker and vice versa, which
would falsify (C2) despite coffee's harmlessness.21
The screening off test can be applied to the question of the causal efficacy of
high-level features, with perhaps surprising results.22 Let's begin with a toy example.
Suppose I have a pair of dice. Let the low-level or micro features be the exact values
of each die upon any throw (for example getting a 2 and a 6). A host of high-level
or macro features can be defined as patterns of low-level features, for example, if
we take the sum of the individual outcomes, getting an even number (e.g. by rolling
a 3 and a 3), getting a prime number, getting a result divisible by six, etc. Despite
its simplicity the model has some suggestive attributes. There is obviously a relation
of supervenience of macro upon micro features, that is, micro features determine
macro features. But at the same time there is multiple realizability (there are many
micro-ways to roll a prime number for example). And there is a weak analogue of
non-reducibility in the sense that there is only a disjunctive characterization of macro
features in terms of micro features (it is the simplicity of the example that renders
the disjunctions tractable and easily comprehensible; for contrast, suppose we were
rolling 10²³ dice).
The point of the model is that micro features can clearly screen off macro features
from various outcomes. Consider the following probabilities. The probability of
rolling a prime number is 5/12. The probability of rolling a prime given that one has
rolled an odd number is 7/9. These are macro descriptions. Consider now one of the
ways of rolling a prime number, say rolling a 3 and a 2. The probability of rolling a
prime given a roll of a 3 and a 2 is, of course, 1. Thus (C2) of the screening off relation
is fulfilled. It is also evident that (C1) is met. Thus rolling a 3 and a 2 screens off
rolling an odd number from obtaining a prime number. Since it is impossible to get
an odd number except via some micro realization or other, and since each such micro
realization will screen off the macro feature, we should conclude that it is the micro
features that carry the "efficacy". I need the preceding scare-quotes since there is, of
course, no causation within my purely formal model, but that is not a serious defect.
In fact, we could add genuine causation simply by imagining a prime detector
which scans the dice after they are rolled and outputs either a 1 (for prime) or 0
(for non-prime). If we assume for simplicity that the detector is perfectly reliable
(probability of proper detection is 1) then the probabilities of the simple example
carry over directly and we get genuine causal screening off of the macro features by
the micro features.
Notice that the probabilities of the outcomes contingent upon macro features are
determined by some mathematical function of the probabilities of those outcomes
contingent upon all the possible realizing micro features (in this case the function
is simply the number of prime outcomes divided by the number of odd outcomes,
14/18). This function, as it were, destroys the detailed information available in the
micro features, changing probabilities that are all 1 or 0 to the "smudged out" values
characteristic of the macro feature probabilities (in our example, a bunch of 1s and
0s transform into 7/9).
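
For readers who want to see the numbers verified, here is a minimal sketch (purely illustrative; the helper names are my own) that enumerates the 36 equiprobable micro-outcomes and checks the probabilities above, including conditions (C1) and (C2), with the micro realization playing the role of C and the macro feature "odd" playing the role of B:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # the 36 micro-outcomes

def prob(event, given=None):
    """P(event | given) over the uniform space of dice outcomes."""
    space = [o for o in outcomes if given(o)] if given else outcomes
    return Fraction(sum(1 for o in space if event(o)), len(space))

prime = lambda o: sum(o) in {2, 3, 5, 7, 11}   # macro feature: prime total
odd   = lambda o: sum(o) % 2 == 1              # macro feature: odd total
micro = lambda o: set(o) == {2, 3}             # micro feature: a 3 and a 2

print(prob(prime))         # 5/12
print(prob(prime, odd))    # 7/9
print(prob(prime, micro))  # 1
# (C1): given the micro realization, the macro feature becomes irrelevant:
assert prob(prime, lambda o: micro(o) and odd(o)) == prob(prime, micro)
# (C2): the micro realization changes the probability relative to 'odd' alone:
assert prob(prime, lambda o: micro(o) and odd(o)) != prob(prime, odd)
```
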
We can apply the lessons of this simple model to more interesting cases. Consider
whether basic physical features screen off chemical features. Take as an example,
the chemical feature of acidity or being an acid. This is multiply realizable in quite
distinct physical micro features (viz. the physical realization of HCl versus that of
H₂SO₄). It seems fairly clear that micro structure will screen off macro structure
relative to the effects of acids. We might, for example, claim that substance S's
dissolving in X was caused by X's acidity. But consider that there are acids too weak
to dissolve S so the probability of dissolution is not 1, but each realization is an acid
either strong enough to dissolve S or not, and each realization guarantees we have an
acid, so screening off will occur in a way quite analogous to our simple dice example.
It is important to remember in this discussion that we are concerned with the
metaphysical facts about efficacy here, not the explanatory merits of the alternatives.
It is easy to imagine cases where the appeal to the macro features will be much more
explanatorily useful and where in fact appeal to the micro features would be entirely
opaque. That does not matter if the question is about the root of efficacy in the world.
Of course, the most interesting case is that of mentality. The same lessons can be
drawn. We might claim that Fred raised his arm because he believed he knew the
answer. Of course, believing that one knows the answer does not guarantee that one
will raise one's arm. That is, the probability of any behaviour, given an explanatory
mental state, is not 1. But the micro structure in which beliefs are realized, that is,
the specific fundamental physical features of subject and (more or less local) envi-
ronment, can plausibly be thought either to guarantee that Fred's arm goes up or to guarantee that
it does not (at the very least to provide probabilities different from those attendant
upon the original psychological description). And since the realizing state by hypoth-
esis subvenes the psychological state, we fulfill the two conditions of screening off.
We expect that the probability of behaviour given a more or less gross psychological
characterization is a mathematical function of the probabilities of that behaviour con-
tingent upon the possible realizing states. These latter probabilities will, in general,
be different as information is smudged out or lost in the macro psychological state
relative to the determinate physical realization that will obtain in every instance of a
psychological state.
Yablo (1992) has pointed out that the debate on causation looks different from a
point of view that takes the determinable/determinate distinction as its model
of inter-level connection. To take a common example, if a bull gets mad because
it sees a red cape, the causal efficacy of the redness of the cape does not seem, at
first glance, to be undercut by noting that the cape must actually have been some
determinate shade of red. Although I don't think that it is very plausible to regard
the physical realization states as determinates of a mentalistic determinable,23 the
screening off argument seems to work against the causal efficacy of determinables
versus their determinates. Regard the bull as a red-detector (I hear that bulls are
really colour blind, but let's not spoil a traditional example with pedantry). Detectors
are more or less efficient; that is, the probability of detection varies with the shade
presented. Thus the probability of the bull getting mad given that it is presented with
red is not 1, and is in fact determined by the particular probabilities of getting mad
given presentation of various shades. Thus perhaps the bull is very good at detecting
crimson but weak at vermilion. Since both crimson and vermilion are shades of red,
we will fulfill the two conditions for screening off. We ought to attribute efficacy
to the determinates (the shades of red) over the determinable (red). The fact that the
bull, so to speak, doesn't care about or even know anything about these determinates
is irrelevant to assigning efficacy. The bull's behaviour is governed by them, and
correlations between colours and behaviours stem from the power of the determi-
nates rather than the diffuse and derivative power (as expressed in the probabilistic
correlations) of the determinable. (Of course, we can go further, and demote the
shades as well in favour of the ultimate, fundamental physical determinants of the
bulls behaviour.) As always, for the purposes of intelligible explanation the appeal
to redness would usually trump appeal to the particular shade, which might in fact
be positively misleading.
Another way to approach the screening off of macro by micro features is in terms
of Dennett's stances (Dennett 1971). Recall that Dennett outlines three stances, that
is, viewpoints from which to make predictions or explanations of behaviour (broadly
construed to include more than that of biological organisms). We have first the inten-
tional stance, in which the ascription of, most basically, beliefs and desires accord-
ing to the dictates of rationality serves to predict and explain behaviour. The intentional
stance is undoubtedly highly successful and widely applicable throughout, at least,
the animal kingdom and certain subsets of the machine kingdom. But sometimes the
application of the intentional stance to a system for which it is generally successful
fails. If there is no intentional stance (i.e. psychological) re-interpretation that plau-
sibly accounts for the failure we must descend to a lower stance, in the first instance
to the design stance.
The design stance provides the sub-psychological details of the implementation
of the systems psychology, but in functional terms. (Dennett, along with most cog-
nitive scientists, typically imagines the psychology is implemented by a system of
black boxes with a variety of sub-psychological functions, viz. short term memory,
visual memory buffers, edge detectors, etc.) Certain features of the design of a sys-
tem can account for why intentional explanations fail. For example, it may be that
unavoidable resource constraints force the design to be less than fully rational. Any
real world system will surely fail to deduce all the consequences of its current infor-
mation, and the way it winnows out the wheat of useful information from the chaff
of irrelevant noise will always be susceptible to failure or, frequently, exploitation by
enemies in particular cases (as in the famous eye-spots found on the wings of certain
moths). The details of how relevance is assigned may lead to bizarre, psychologically
inexplicable behaviour in certain circumstances. A well known example is the dig-
ger wasp's (Sphex ichneumoneus) seemingly purely mechanical, stereotypical and
mandatory nest provisioning behaviour which cannot be explained in terms of ratio-
nal psychology but, presumably, follows from features of its neurological design
(see Dennett 1984, pp. 10 ff.; for some kinder thoughts on Sphex see Radner and
Radner 1989).
But there will be failures of design stance predictions and explanations too. A
random cosmic ray may chance to rewrite a bit in your computer's memory leading
to behaviour that is inexplicable from the design stance (even at the level of the indi-
vidual memory chip there is a design stance, and the action of the cosmic ray forces
us out of it). Intrusions of brute physical malfunction are by definition inexplicable
from the design stance but can never be completely designed away. In such cases,
we must descend to what Dennett calls the physical stance to account for the failure
of the system to abide by its design. The possibility of the failure of the intentional
stance as well as the design stance will be reflected in the probabilities of behaviour
contingent upon intentional states or design states. The physical stance, at the level
of fundamental physical reality, will however provide the rock-bottom probabilities
of behaviour contingent upon particular physical realizations of either intentional or
design states. In general, these probabilities will differ from those at the intentional
or design level, for their maximal information content will be smudged out across the
very large disjunctive fundamental physical characterizations of the intentional or
design level states. Thus the existence of and relationships between the three levels
results in the screening off of high-level features by the low-level features in just the
way I have outlined above. The low-level, physical stance features are what gener-
ate the correlations that show up at the higher levels. It is they that deserve to be
granted efficacy. This screening off reveals that no efficacy needs to be assigned to
the high-level features to obtain the correlations observed at those higher levels.
Thus it seems that a good probabilistic test of efficacy further suggests that efficacy
resides at the most basic level of physical reality. Yet another argument serves to
reinforce this conclusion further.

8.6 The Abstraction Argument

A good high-level description manages to explain and predict the behaviour of very
complex physical systems while somehow ignoring most of their complexity. Perhaps
it is not a necessary truth, not even a nomologically necessary truth, but only a lucky
cosmic accident that fundamental physical features should conspire to constitute well
behaved structures that obey laws which abstract away from the details of those very
structures. But we live in a world where such behaviour is ubiquitous. That is what
makes high-level theories (and theorizers like us) so much as possible.
The way that high-level theory abstracts away from the physical details undercuts
any claim that high-level features have causal efficacy, as opposed to explanatory or
predictive usefulness. Begin with a simple and famous example. While developing
his theory of universal gravitation, Newton postulated that every particle of matter
attracts every other particle with a gravitational force that obeys an inverse-square
relation. But it would be obviously impossible to calculate any behaviour resulting
from gravitational forces if one faced an extreme version of the many-body problem
before one could even begin. And, naively, it seems that one does face such a problem
because the Earth, and all other celestial bodies for which we might be interested in
computing orbits, are manifestly made of a very large number of material particles
(evidently held together by gravity itself among other possible forces). However,
the structure of the gravitational force law and the newly invented calculus enabled
Newton to prove that a spherical body will have a gravitational field (outside of
itself) exactly as if all the matter of the body were concentrated at a single point: the
geometric centre of the body. This is a beautiful piece of mathematics and makes
calculating the orbits of the planets at least possible if not easy. Newton could treat
a planet (at least from the point of view of gross gravitational studies) as a single
gravitating point, although it was not long before the non-spherical shape of the
Earth had to be taken into account. He had developed a high-level description of
the gravitational field of material bodies that abstracts away from the details of their
material composition, to our great advantage. But would anyone want to deny that the
efficacy of gravitation still resides entirely within the mass of each particle that makes
up the body in question? Although Newton's theorem would provide an irresistible
shortcut for any real world simulation, our imaginary superduper simulation will
reveal the effects of gravitation while blithely ignoring the fact that, so to speak, all
of a body's gravity can be viewed as emanating from its geometric centre (even if, by
chance, the body has a hollow centre and there is no matter at all at the gravitational
centre). Such a pure mathematical abstraction cannot participate in the "go" of the
world, however useful it might be in the business of organizing our view of the world.
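
A crude numerical check makes the point vivid. The following Monte Carlo sketch (purely illustrative; the units, sample size and test point are arbitrary choices) scatters a planet's mass over many point particles, sums their individual gravitational pulls at an exterior point, and recovers the point-mass prediction without the shell theorem ever being "coded in":

```python
import numpy as np

rng = np.random.default_rng(0)
G, M, R, N = 1.0, 1.0, 1.0, 200_000   # convenient units; arbitrary choices

# Sample N equal-mass particles uniformly throughout a solid sphere.
u = rng.normal(size=(N, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
r = R * rng.random(N) ** (1 / 3)       # radii giving uniform volume density
pts = u * r[:, None]

# Brute-force field at an exterior test point: sum over every particle.
p = np.array([3.0, 0.0, 0.0])
d = p - pts
g = -(G * (M / N) * d / np.linalg.norm(d, axis=1, keepdims=True) ** 3).sum(axis=0)

print(g)                       # approx [-1/9, 0, 0]
print(-G * M / p.dot(p))       # point-mass prediction for the x-component
```

The agreement is produced entirely by the summed contributions of the constituents; the elegant point-mass description does no pushing of its own.
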
Another example to the same effect is that of the temperature and pressure of a
gas. The reduction of thermodynamics to statistical mechanics is one of the glories of
physical science and we can see it as the discovery of how a particular set of high-level
features are grounded in lower level physical entities. We now know how temperature
and pressure are realized in a gas. The temperature of a gas (at equilibrium) is the
average kinetic energy of the particles which constitute the gas. The pressure of a gas
is the average force per unit area which these particles exert on the gas's container.
But averages are mathematical abstractions that in themselves cannot cause anything
at all. If one is inclined to think otherwise, consider this example: a demographer
might say that wages will go up in the near future since the average family size fell
twenty-odd years ago (and so now relatively fewer new workers are available). There
is not the slightest reason to think that "average family size" can, let alone does,
cause things, although I think we easily understand the explanation to which such
statistical shorthand points.24 By their very nature, pressure or temperature are not
one whit less statistical fictions than is average family size. The ascription of causal
efficacy to, say, pressure is only a façon de parler, a useful shorthand for the genuine
efficacy of the myriad of micro-events that constitute pressure phenomena. It is
entirely correct to use the overworked phrase, and say that pressure is "nothing but" the
concerted actions of the countless particles that make up a gas. Within our thought
experimental simulation, pressure and temperature will emerge without having been
coded into the simulation program. In this case, our high-level features have to be
conservatively emergent by their very nature, for they are simply, by definition,
mathematical abstractions of certain properties of the low-level features which realize
them. As such, no genuine efficacy can be granted to them.25
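
To see how little is being claimed when temperature is called an average, consider this sketch (purely illustrative; argon and 300 K are arbitrary choices), which samples a micro state of a million molecular velocities and then recovers the "temperature" as nothing over and above an arithmetic summary of that micro state, via (3/2) k T = ⟨(1/2) m v²⟩:

```python
import numpy as np

rng = np.random.default_rng(1)
k_B = 1.380649e-23     # Boltzmann constant, J/K
m = 6.63e-26           # mass of an argon atom, kg (illustrative choice)
T_true = 300.0         # kelvin

# Micro state: Maxwell-Boltzmann velocities (each component Gaussian
# with variance k_B * T / m) for a million molecules.
v = rng.normal(0.0, np.sqrt(k_B * T_true / m), size=(1_000_000, 3))

# 'Temperature' recovered as a statistic of the micro state:
T_est = m * (v ** 2).sum(axis=1).mean() / (3 * k_B)
print(T_est)           # approx 300.0: a summary, not an extra entity
```
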
Now, we don't know what sort of realization function will be appropriate for men-
tal states. It may be that the way that thermodynamical properties emerge out of the
molecular dance may have more relevance to psychology than mere metaphor. There
are interesting, perhaps deep, analogies between thermodynamics and the dynam-
ics of neural networks (see for example Churchland and Sejnowski 1992, Chap. 3;
Rumelhart et al. 1986, Chap. 7), and if the latter underlie psychology then some
psychological properties may be surprisingly closely analogous to thermodynamical
properties. Be that as it may, whatever mathematical structure underlies the transition
from neural to mental states, it will be a mathematical abstraction from the underlying
properties. (And the underlying neural structures will in turn be an abstraction from
the more fundamental physical structures that realize them.) Insofar as mental states
are seen as such mathematical abstractions they cannot be granted causal efficacy.
They will be mathematically convenient ways of thinking about the mass actions
of the fundamental physical constituents, just as temperature is a mathematically
convenient way to work with the mass action of the molecules that make up a gas.
Chapter 9
The Paradox of Consciousness

9.1 Mind Dependence

The general structure of the SPW (most especially the properties of closure, com-
pleteness and resolution), plus the three arguments just advanced (which merely draw
out or highlight certain consequences of the scientific picture), makes a strong case
for generalized epiphenomenalism. It remains to consider if the doctrine has any
significant consequences for our view of the world and our place in it.
I will argue that in the realm of consciousness there are dire consequences, ones
that on the face of it will require a rethinking of the SPW from the ground up. Given
such a grandiose claim I need to make clear that my goal here is limited. The topic
of consciousness is vast, already possessed of a staggeringly large literature rang-
ing over a huge number of disciplines. My project falls within the purview of the
philosophical problem of consciousness, the general outline of which encompasses
two main difficulties: what is the nature of consciousness and how is it generated
in the world. It would take us very far afield to survey and assess all the various
theories of consciousness on offer, even if we never strayed beyond the bounds of
philosophical accounts, and I will not attempt this here (for an effort in this direction
see Seager 1999). The second question of how consciousness is generated does of
course link to the topic of emergence, but my discussion is limited with respect to
this issue as well. The "generation problem" has received intensive and extensive
investigation over the last fifty years and remains completely unresolved (for a now
classic articulation of the problem see Chalmers 1996, especially Chaps. 1–4).1 I
will not tackle the generation problem here but rather I want to focus on a particular
implication of the SPW for consciousness which yields a problem connected to but
distinct from the more general problem of generation. This problem begins with
generalized epiphenomenalism.
The spectre of classical epiphenomenalism about consciousness is frightening
because it turns us into mere spectators, utterly unable to affect the world no matter
how much we might desire to. Classical epiphenomenalism makes our position in
the world analogous to that of someone on a ride in Disneyland, able to watch what's
happening and feel real fear or elation in accord with those events, but unable to get
out of the vehicle in which they are inexorably carried along and grapple with the
world outside.
Generalized epiphenomenalism does not seem quite so frightening. It grants no
more, but also no less, efficacy to our mental states than it does to any other high-level
feature of the world. We can understand perfectly well the place of high-level phe-
nomena in the scientific picture. Such phenomena are integrated into the causal struc-
ture of the world in virtue of their being realized by structures that are indisputably
efficacious (structures that really are the "go" of the world). So even if high-level fea-
tures are, so to speak, metaphysically epiphenomenal, we can still coherently make a
distinction between genuine and merely apparent causes at high levels of description.
We discover that coffee drinking does not cause cancer, but not because it, like every
other high-level feature, is epiphenomenal, but because the realizers of coffee drink-
ing do not drive the world into a state which can be given the high-level description:
cancer. We find the opposite story for smoking. Try to imagine the chances of success
of Big Tobacco executives arguing that generalized epiphenomenalism shows that
smoking cannot cause cancer.
The SPW is austerely beautiful, and seems to find everything it needs to construct
the entire structure of the world, in all its complexity, in a handful of elementary fea-
tures that constitute those structures and drive their interactions entirely from below.
And it seems that just as high-level theories provide useful and in fact indispensable
abbreviations for certain well disciplined complexes of elementary aspects of the
world, so too we could speak of high-level causation as a useful abbreviation for
the twin stories of realization and basic causation; high level efficacy would be more
or less as efficacy is defined in Chap. 7.
That would be a happy ending to my story, if it were the end. Unfortunately it is
not. It is impossible to regard all mental states as high-level features of the world
in the same way that chemical, geological, etc. states are high-level features. Thus
it is impossible to regard the epiphenomenalism of the mental as unproblematic
in the way I have conceded that the epiphenomenalism of high-level features in
general is unproblematic. The reason, baldly stated, is that the high-level features for
which generalized epiphenomenalism can be regarded as a harmless metaphysical
curiosity are, in a certain sense, mind-dependent.2 High-level features are elements
of patterns, and patterns are, so to speak, invisible to the world. Their relationships,
including those we like to call relations of high-level cause and effect, are only
visible, sometimes literally visible, to conscious creatures possessed of certain ways
of looking at the world, and they serve only as aids to explanation, prediction and
understanding for those beings, rather than the metaphysical go of the world.
Can that be right? And, if it is right, what does that tell us about the world and the
place of consciousnesses within it?
Mind dependence comes in grades. We might define the absolute mind depen-
dence of certain entities as requiring that if there were no minds there would be no
such features. For example, money is absolutely mind dependent: without minds
there could be no money. However, most of the high level features of reality which
the SPW admits as conservatively emergent are not absolutely mind dependent. It is
not the case that a world devoid of mind would necessarily fail to manifest features
which could be recognized as high level features to appropriately prepared minds.
According to the SPW, it is very likely that there was a time when there were no
minds. Though it is impossible to say when the first minds appeared in the universe,
the early universe of the SPW (maybe the first half-billion years or so) seems pretty
clearly to preclude the kinds of organization necessary to underpin thought and expe-
rience. It would be extremely odd to say that chemical processes were not occurring
at that time. While true, this does not entirely undercut the mind-dependency of
chemical processes as high-level features. For to say that chemical processes were
occurring before the advent of mind is just to say that the world was running in ways
such that the application of chemical concepts would have been explanatorily fruitful
(had there been minds) before minds had in fact appeared on the scene. The world
had, of course, no sense of chemistry and no need of it just because chemistry is, by
its high-level nature, epiphenomenal.
To put the point another way, it is impossible to understand the nature of high-level
features without a prior understanding of the epistemic needs and goals of conscious
thinkers. They are the sort of thing partly constituted out of those epistemic needs and
goals. Consider an example. The stars visible from the Earth (say from the northern
hemisphere) form literal patterns which are apparent to anyone (a little preparation
or guidance helps one to see the patterns favoured by one's own culture). These
constellations are useful in a variety of ways, of which the most basic, and ancient,
is probably navigation. It is, I hope, abundantly clear that these patterns are mind-
dependent, even though the configurations of the stars might have been the same as
they have been throughout recent human history at any time in the past, even long before
any consciousness of any kind was around to notice the stars (as a matter of fact, on
a long time scale constellations are relatively ephemeral, but that's irrelevant to the
example).
One might object that constellations aren't real. But why not? In support of their
unreality, the most natural insult to heap upon the constellations is that they don't
do anything except via the mediation of mind. This will not distinguish chemistry
from constellations; the point of generalized epiphenomenalism is to show that high-
level features are uniformly without causal efficacy as such. Chemistry only looks
efficacious from the point of view of a mind prepared to notice and exploit chemical
patterns. In the loose sense countenanced above, in which efficacy is granted to a
high-level feature via the efficacy of its low-level realizers, the constellations again
come out on a par with chemistry. As in chemistry, the low-level realizers of the
constellations have lots of effects, including effects on (the realizers of) human senses.
One might complain that the patterns of the constellations are purely arbitrary. But
that is just not true. There are natural groupings visible to all and recognized by
diverse cultures (albeit with very different interpretations and divergence about exact
boundaries). Ancient Europeans and the Iroquois of North America both noted the
Big Dipper and pictured it as a celestial bear.
The sort of mind dependence that such conservatively emergent high level features
possess is what might be called epistemic mind dependence. No one disputes that
high level features are indispensable to our understanding of the world. The issue is
whether high level features are, so to speak, part of nature's way of structuring the
dynamics of the world. So far as the SPW can discern, the drivers of the world are all,
and only, the fundamental features. So while it would be wrong simply to say that
high level features are absolutely mind dependent it is correct to say that high level
features play no role in the world except as apprehended by minds. Until and unless
some high level feature is taken up in conscious perception and thought by some
cognitive agent, that high level feature stands as a merely potential epistemological
resource. To paraphrase Kronecker's famous remark: God created the particles, all
else is the work of man. To use another popular philosophical metaphor, the SPW
allows that nature has joints to carve, but there are few of them and they are all
down at the fundamental level. Once basic science has done the carving, we can
make whatever sandwiches strike our fancy, aesthetic or utilitarian as the case may
be, constrained always but only by the nature of the fundamental properties and
relations.3
Consider for example the sad story of Pluto, recently demoted from planethood to
become a mere "dwarf planet". It is entirely up to us what counts as a planet, though
we are obviously constrained by a host of real considerations of utility, elegance and
coherence in our decision making. There is no Platonic Form of Planet against which
we seek to assess Pluto independent of such desiderata.

9.2 Abstractive Explanatory Aids

What is distinctive about high level features is their explanatory salience. High level
features are those combinations of low level features that happen to fit into fruitful
theoretical schemes. Fruitful, that is, to some intelligence which aims to predict,
explain and understand the world. Generally and abstractly, the best of the high
level features fall under systems of concepts to which Mill's methods and their
more sophisticated statistical descendants can be applied to outline high level causal
relationships.
Another important feature of the best of the high level features is that they conform
to the strictures laid out by Ian Hacking which delineate a natural kind. These
demand that kind ascription will with high reliability indicate possession of a range
of other properties or suitability for various human purposes, both quotidian and
more strictly scientific. A caveat is that the reliability of natural kind ascription is not
conditional on the recognition of the social role of the kind itself, as in the example
of money, which in general respects admirably meets the reliability condition (see
Hacking 1991, especially pp. 116 ff.). It is possible that some people somehow fail
to appreciate the nature of social kinds such as money and may then come to regard
monetary value as intrinsic or natural. We might even imagine a group that begins to
exchange tokens as proxy for directly valued goods and does so via something like
a conditioned reflex or mere habit so that, for them, in some sense, money would be
akin to a natural kind. Interestingly, chimpanzees can be trained to use plastic tokens
in something like this way without, presumably, fully understanding what is going
on (see Matsuzawa 2001). Crucially, they do not try to eat the tokens.
The SPW denies that there is any deep distinction between sorts of high level fea-
tures save ones that serve the epistemic purposes of classification and manipulation
that underlie the whole system of high level concept application. It has no problem
with there being meta-high level features. As a fascinating aside to our concerns,
Hacking also points out Peirce's criticism of Mill's definition of real kinds, which
depends on the idea that the real kinds share innumerable novel properties unpre-
dictable from what is known about the kind. Peirce, in his entry on kinds in Baldwin's
Dictionary of Philosophy and Psychology (1901), notes that the whole enterprise of
science is contrary to Mill's notion: "Mill says that if the common properties of a
class thus follow from a small number of primary characters, which, as the phrase
is, account for all the rest, it is not a real kind. He does not remark, that the man
of science is bent upon ultimately thus accounting for each and every property he
studies" (as quoted in Hacking 1991, p. 119). The SPW sides with Peirce in denying
the existence of any of Mill's real kinds. With respect to Peirce's criticism of Mill,
however, I think Peirce may be missing Mill's radical emergentism. For Mill the
inexhaustible set of properties associated with a kind could be linked to the primary
properties, or fundamental properties, of the kind by what he called "heteropathic
laws", which are essentially the rules by which radical emergence, as defined in
Chap. 7, is supposed to operate (see Mill 1843/1963, Bk. 3, Chap. 6).
As we have emphasized, high level features emerge because of the propensity of
fundamental features to aggregate into self-sustaining systems.4 Crucial to aggre-
gation are the fundamental laws of nature and the conditions under which these
laws are operating. Thus very shortly after the big bang, although the laws of nature
permitted aggregation, the conditions then extant prevented it (essentially, the tem-
perature was so high that no stable aggregation could persist). In our world, these
conditions ameliorated over time so that ordinary matter, chemical structures, stars
and galaxies, planets and eventually life could emerge. But it is easy to imagine
conditions that forever preclude the creation of many high level features (see Barrow
and Tipler (1988) for a host of cosmological constraints upon the emergence of
high level features relevant to our own existence).
There is a huge variety in the sorts of high level features available for thinking,
working and theorizing. One class is important for my purposes because its relation
to the fundamental features of the world is much more abstract and perhaps in a
way deeper. The preconditions of the emergence of these features are not tied so
closely to the details of either the laws or the initial conditions of the worlds in
which they appear, though to be sure each has necessary conditions. Because of
this, these high level features possess a high degree of multiple realizability. The
high level features I am thinking of are those associated with theories (or folk
theories) of thermodynamics, information theory, evolutionary biology, and folk
psychology (the first two, and perhaps also the last two, are extremely closely related).
Consideration of these theories will not only reveal some core features of emergence
but will also provide yet more, and quite fundamental, evidence that within the SPW
such emergence is always and only conservative emergence.
Consider how thermodynamical laws emerge out of statistical mechanics, espe-
cially the second law. Almost any system of interacting particles will obey the second
law simply because of the probability distribution of system states corresponding to
particular entropy values. This holds for water molecules in a bathtub or coloured balls
being mixed in an urn. But nonetheless it is perfectly possible to define systems that
violate the second law. In our bathtub, if hot water is added at one end, we find that,
after a while, the whole bathtub is slightly warmer (and the entropy of the system has
increased). But if we imagine a bathtub in which all the velocities of the molecules
are exactly opposite to those of the warm tub, we will observe the hot water separat-
ing itself out and moving to one end of the tub (thus decreasing entropy). There is
nothing physically impossible about the imagined state (as noted in Chap. 7 this idea
goes back to Josef Loschmidt, who used it to argue against Boltzmann's attempted
mathematical derivation of the second law; see Sklar 1993).
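
Loschmidt's point is easy to exhibit in a toy model. Here is a sketch (purely illustrative; particle numbers and times are arbitrary) of non-interacting particles bouncing in a one-dimensional box, a dynamics that is exactly reversible: a coarse-grained entropy rises as an initially bunched "gas" spreads out, and reversing every velocity makes it fall back again:

```python
import numpy as np

rng = np.random.default_rng(2)
n, L, T = 5000, 1.0, 3.0
x = rng.uniform(0.0, 0.1, n)      # low-entropy start: all at one end
v = rng.normal(0.0, 1.0, n)

def fly(x, v, t):
    """Free flight with elastic walls at 0 and L (exactly reversible)."""
    y = (x + v * t) % (2 * L)     # unfold the reflections onto [0, 2L)
    return np.where(y < L, y, 2 * L - y), np.where(y < L, v, -v)

def coarse_entropy(x, bins=20):
    counts, _ = np.histogram(x, bins=bins, range=(0, L))
    p = counts[counts > 0] / len(x)
    return -(p * np.log(p)).sum()

print(coarse_entropy(x))          # low: the gas is bunched up
x1, v1 = fly(x, v, T)
print(coarse_entropy(x1))         # high: the gas has spread out
x2, v2 = fly(x1, -v1, T)          # Loschmidt's move: reverse all velocities
print(coarse_entropy(x2))         # low again: entropy decreases
```

Nothing in the low-level dynamics forbids the third step; the second law's grip is statistical, not causal.
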
Thermodynamical evolution is ultimately no less governed by the fundamental
features and laws of the world than any other process. There are no magic high level
thermodynamical properties that act on the low level features, disciplining them to
conform to abstract laws. Thermodynamics is one more conservatively emergent
phenomenon. What is especially interesting about thermodynamics however is that
it is a high level feature which abstracts away from the details of the fundamental
features to a very great extent. That represents a tremendous mental victory: it is a
testimony to the immense cleverness of human scientists who managed to find a way
to group together a vast array of physical systems under a common conceptual scheme
which happens to be instantiated in our universe. I am not denigrating the second
law. Its being a conservative emergent is compatible with its being an absolutely
inescapable fact about our world.
The dependence of such abstract features as thermodynamical properties upon the
fundamental features of the world can be revealed via an examination of the perennial
dream of the perpetual motion machine (pmm). We can define a pmm simply as a
device from which unlimited work can be extracted. Since work essentially involves
a system moving, or being moved, from a higher to a lower entropy state, the second
law forbids a pmm. Thus anyone who claims to have a pmm can be refuted simply by
appeal to the second law. However, analysis of any putative pmm will show why it
violates physical principles other than brute thermodynamics. Every candidate pmm
will fail because of a conflict with some or other lower level physical constraint
rather than simply conflict with thermodynamics (there is an analogy here with the
screening off argument discussed in Chap. 8: in a way, low level features of the
putative pmm screen off the second law5). This is a sign of conservative emergence.
For example, here's a schematic design for a capillary action pmm6 (see Fig. 9.1).
Place a narrow tube in a water reservoir. Capillary action will force water up the tube.
Cut a hole in the side below the level to which the water rises and let water that drips
out turn a small water wheel. The water then returns to the reservoir. We have thus
extracted energy from a system that returns to its initial state and so is apparently
capable of perpetually generating energy.
The second law of thermodynamics requires that this device fail, but that law is not
what makes the device fail. (Fig. 9.1 Capillary action pmm.) The device fails because
of the physics of atmospheric pressure, surface tension and adhesion between water
and the particular material
of the tube. That is, low level features cause the pmm to fail (of course, adhesion,
pressure etc. are only relatively low level and are themselves not fundamental; they
stem from lower level features, right down to the truly fundamental). Why are there
no candidate pmms that fail only because of their violation of the second law? Because
thermodynamics is causally impotent, even though few principles in physics have
greater explanatory potency.
Information theory is very closely related to thermodynamics. The analogy
between the two theories is fascinating although less than perfectly clear. But the
information law analogous to the second law of thermodynamics is that information
degrades over time as entropy increases. It is important to recall here that "informa-
tion" simply means information capacity; there is no attempt to provide any theory
of the content or semantical value of this information.
We might liken the process of information decay to the fading of a printed page,
where the parallel between entropy and information is quite clear. And what makes
a physical process an information channel is just as abstract as thermodynami-
cal properties. Information, and information channels, are multiply realizable; they
represent an extreme abstraction from the fundamental physical processes which
ultimately constitute them. The information equivalent of a pmm would be a device
that generated unlimited amounts of information capacity. Such a device will fall
afoul of the second law however, since information capacity is directly linked with
relatively low entropy. However, just as in the case of thermodynamics, no information
channel will fail to generate extra information for free simply because of a violation
of information theory. Rather, every information channel's particular characteristics
will devolve from the fundamental physical processes constituting it.7
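
The degradation claim can be illustrated with the simplest channel in the theory. This sketch (purely illustrative; the 5% flip probability is arbitrary) chains binary symmetric channels in series and watches the capacity, 1 - H(p) bits, decay toward zero as the accumulated flip probability drifts toward one half:

```python
import math

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def compose(p, q):
    """Net flip probability of two binary symmetric channels in series."""
    return p * (1 - q) + (1 - p) * q

p, flip = 0.0, 0.05
for stage in range(1, 6):
    p = compose(p, flip)
    print(stage, 1 - H(p))   # capacity shrinks at every stage
```

As with the pmm, when a real channel loses information it does so because of particular physical noise processes, never because information theory reaches in and enforces the decay.
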
At what is perhaps a still higher level of abstraction lies evolutionary theory, whose
applicability depends upon only three exceptionally general principles: heritable
reproduction, variation and selection. Obviously these can be implemented in a huge
number of diverse ways, and need not be restricted to the realm of biology. Hence
we ourselves can and do produce a variety of implementations of artificial life, that
is, systems of computer code that metamorphose according to the above principles
(for a philosophically oriented discussion see Boden 1996). Gerald Edelman (1987)
has advanced the theory that neural structure is created via a Darwinian mechanism
of competition and selection of groups of neurons within the working brain. Richard
Dawkins (1989), Daniel Dennett (1995) and Susan Blackmore (2000) have pointed
out and explored how abstract mechanisms of evolution can be applied in the realm
of psychology, sociology and culture in the theory of the spread of ideas and other
cultural artifacts collectively known as "memes". If the speculations of Lee Smolin
(1999) should turn out to be correct, perhaps the universe as a whole is a vast collection
of more or less causally isolated sub-universes which embodies a kind of evolutionary
system of which our local sub-universe is a product. Smolin hypothesizes that every
black hole is, somehow, the source of a new universe so that over vast cosmic time
(however that would be defined) there will be a kind of selection for universes which
produce black holes more copiously. Smolin also speculates that black hole fecundity
is correlated with general features which favour the existence of life and intelligence,
so his theory would be an important part of the explanation of the emergence of high
level structure.
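
Since the three principles are so undemanding, a complete implementation fits in a dozen lines. The following sketch (purely illustrative; target, population size and mutation rate are all arbitrary) evolves bitstrings by heritable reproduction, variation and selection, and adaptation appears without any "fitness force" being coded into the dynamics:

```python
import random

random.seed(0)
TARGET = [1] * 20

def fitness(g):
    """Selection criterion: how many bits match the target."""
    return sum(a == b for a, b in zip(g, TARGET))

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(40):
    parents = sorted(pop, key=fitness, reverse=True)[:10]   # selection
    pop = [[bit if random.random() > 0.01 else 1 - bit      # variation
            for bit in random.choice(parents)]              # heritable reproduction
           for _ in range(50)]

print(fitness(max(pop, key=fitness)))   # climbs toward 20 over the generations
```
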
The applicability of the basic concepts of evolution is so extensive because they
impose such small demands on their implementation. Thus many such implementa-
tions can easily spring into existence. And while it is of course a testament to our
intelligence that we are able to spot these implementations and put them to use in
our explanatory schemes, they remain nothing more than the kind of high level fea-
ture, or epistemological resource, we have been discussing. The notion of relative
fitness (defined in terms of a particular system of reproducing entities within a given
environment), for example, helps to explain innumerable features of the biological
world, but nature makes no use of it in the running of the world. Just as in the case of
the pmm, every time an organism does better than some other because of its fitness
in a certain environment, the causal determination of that event depends solely and
entirely on underlying mechanisms and never on fitness as such. Again, we see the
telltale signature of conservative emergence.
My final example of an extremely abstract high level feature is psychology, by
which I mean not particular scientific theories, but rather what philosophers call "folk
psychology" or commonsense belief-desire psychology. I suspect that the general
applicability of folk psychology is closely related to the similarly general applicabil-
ity of evolution in biology (see Dennett 1978b; Seager 1990, 2000b). This is because
an organisms desires are primarily focused on what is good for it from the biological
point of view and its belief system focuses on information that is likely to be impor-
tant for the satisfaction of these desires. Once organisms evolve sensory systems and
mobility we will find that folk psychology inevitably becomes applicable to them to
at least some degree. Thus I can predict that if you spill some sugar on the floor, the
ants will come and get it because, as we are prone to say, they like sugar and, via
information delivered by their sensory systems, they will come to believe that sugar
is available to them, whereas if you spill some motor oil on the floor you are safe, at
least from ants. Of course, whether ants should be credited with genuine beliefs and
desires is, to say the least, a moot point, but it is undeniable that folk psychology is
pragmatically effective at a basic level even within the insect realm.
Within the realm of high level features we can see a host of structures ranging from,
at the abstract end of the scale, the thermodynamical, informational, evolutionary and
psychological to more concrete structures such as tectonic plates and chemical kinds.
But from the viewpoint of the SPW it is evident that all these features are nothing
more than patterns of events which are explanatorily salient to us. High level features
are thus for us, conscious subjects. Only through being consciously conceptualized
can they take any role in the world. High level features are, we might say, only
visible from certain consciously adopted explanatory standpoints. In this respect,
they are rather like the clouds that appear in the shape of animals as we watch them
drift overhead. Some clouds really do look like animals (to us, from where we lie) but
that does not matter to what happens in the sky. The fact that some other high level
features (such as animals) not only look like but act like animals does not change their
status as epiphenomena of the low level features' unutterably complex dynamical
interactions.
But now, if patterns are mind-dependent and all high-level features are pat-
terns then, given that minds are high level features, they will themselves be mind-
dependent. There is an obvious trivial sense in which this is true: if there were no
minds there would be no minds. The serious problem is that the SPW requires minds
in order to integrate high-level features into a world that obeys completeness, closure
and resolution. But minds appear to be high-level features themselves. Therefore,
there is one high-level feature (at least) that cannot be integrated into the SPW in the
usual way. If we try to regard minds as a normal conservatively emergent high-level
feature of the world then instead of integration with the SPW, we simply are left with
another mind which serves as the viewpoint from which the target mind appears as
a pattern. This is either a vicious regress or an unacceptable circularity.
There is a real paradox lurking here. It is a paradox of consciousness. Looked upon
as a behaviour modifying and adaptive ability, conceptualization is just another high
level feature of certain rather special emergents. Conceptualization, and its precursor,
categorization, are psychological features which are crucial for the applicability of
belief-desire psychology to a system (this is trivial: no beliefs without content and no
content without, at least, categorization). There is no problem here until we consider
beings which are conscious of themselves and other high level features of the world.
Within the SPW, conscious awareness most certainly does not appear to be a basic
feature of the world but rather takes its natural place as another high level feature,
one that involves the mass action of at least millions, if not hundreds of millions
or even billions, of neurons. But if we accept that consciousness is another high
level feature, we must conclude that it, like all other high level features, is causally
impotent. Consciousness does not participate in the go of the world; it does not
add any constraints upon state evolution that are not already present because of the
fundamental physical features. That is, we must conclude that the SPW entails that
consciousness is epiphenomenal.
9.3 Paradox

But that is not the paradox. After all, we could bite the bullet of mentalistic epiphe-
nomenalism which is a coherent, if bizarre and implausible, doctrine. It has, in one
form or another, been defended by serious thinkers (for example, Thomas Huxley
1974; Frank Jackson 1982; William Robinson 2004). Generalized epiphenomenal-
ism reveals, however, that the SPW, and in particular, the scientific picture of the
metaphysics of high-level features, is incoherent. It seeks to regard mind as a high-
level feature of reality, but then it finds a conscious mind is needed to make sense
of high-level features. It does not seem possible to evade this problem, for the only
place for mind within the current SPW is as a high-level feature. But in order for it to
take such a place, there must be a viewpoint from which the patterns of mind are
apparent amidst the complex turmoil of the elementary features of the world. Such a
viewpoint is itself an aspect of mind, and so we have failed to understand mentality
in general as a high-level feature.8
For example, if, say, my consciousness is just another high level feature of the
world, it too stands as a mere potential epistemological resource, except insofar as it
is taken up into some actually and consciously deployed explanatory system. But the
conscious deployment of an explanatory system is one more high level feature which
itself is a mere epistemic potentiality unless it is taken up into a further consciously
deployed explanatory system. This seems to be a vicious regress which leaves con-
sciousness as nothing but merely potential, never actualized. Each standpoint, as a
high level feature, requires a conscious appreciation of it to transcend mere poten-
tiality, but if consciousness itself is a high level feature then each consciousness
requires a further consciousness to appreciate it. So the worry is that conscious-
ness could never get off the ontological ground in the first place. This undercuts
the SPW's understanding of consciousness, which it sees as emerging slowly and in
initially primitive forms, attaining the status of reflective, theorizing consciousness
only very late in its development.
But (I take this as self-evidently true) my current consciousness, or indeed the
consciousness of any conscious being, is a fact about the world that is not at all
merely potential and could persist as a fact even if all other consciousnesses were,
this instant, obliterated. Furthermore, it seems completely implausible to suppose
that my consciousness depends in any sense on the explanatory activities of any
other beings (whereas I think there is little difficulty in seeing the sense in which
objects such as trees, planets or galaxies do have a kind of dependence upon the
activity of beings who think of the world in terms of trees, planets and galaxies).
The worry can be sharpened if we consider the Earth of some millions of years ago,
before there were any beings capable of taking up an explanatory standpoint from
which consciousness would appear to be at work in the world. Nonetheless, the world
was full of conscious animals, suffering all the torments of mortal existence, and all
the pleasures as well. It is hard to deny that their pains caused them to avoid certain
things, their pleasures led to the avid pursuit of other things. These conscious states
were not mere potential epistemic resources, awaiting recognition by more powerful
minds bent on explanation and prediction. That is not a standpoint dependent fact.
It seems absurd to suppose that, absent beings who consciously explained animal
behavior in terms of those animals' experiences, animal consciousness was a mere
epistemic potentiality awaiting the distant future birth of human beings.
The paradox developed here can perhaps be better understood if it is contrasted
with a less serious difficulty that besets a number of philosophical accounts of high
level properties, most especially mental properties. This latter difficulty is a failure
in the project of naturalization. This project is the attempt to show that all high level
features are conservatively or epistemologically emergent. This is not the same as
showing that everything reduces to the fundamental physical features, although it is
compatible with strict reductionism. It will suffice for naturalization of some domain
if we can show how its features conservatively emerge from more fundamental fea-
tures of the world, or at least provide good evidence that there is such an account
available in principle. We have seen abundant evidence for the wide ranging if not
universal power of the naturalistic outlook in the first three chapters of this book.
Successful naturalization of any domain must abide by a few simple rules. Roughly
speaking we can characterize these as follows.
X has been naturalized if and only if

(1) The emergence of X has been explained in terms of Y.
(2) Y is properly natural.
(3) Y does not essentially involve X.

This notion of naturalization has several virtues. It is reasonably clear and is directly
and quite properly aimed at the scientific integration of the naturalization's targets.
After all, it would seem very strange if not perverse first to embrace the SPW and then
boast about how there are some phenomena that defy all attempts to give an account
of how they fit into that world-view. In terms of the idea of supervenience, the need
for an explication of the supervenience relation between target and base domains is
obvious (otherwise, to generalize a remark of Simon Blackburn's, supervenience is
part of the problem rather than part of the solution; see Blackburn 1985).
To illustrate how naturalization can fail, let's examine a contemporary example:
Daniel Dennett's instrumentalist theory of intentional mental states (as outlined orig-
inally in Dennett 1971). It is somewhat curious that Dennett, who by and large writes
from a perspective that clearly endorses the scientific view of the world and which
is supposed to be non-eliminativist, espouses a theory of intentionality which blocks
the naturalization of the mind. Although Dennett's theory of the intentional stance is
by now intricate and subtle, it remains essential to it that the mental states of a subject,
S, be understood as states (no doubt physical states, probably of S's brain) whose
mentality resides in their underpinning an intentional interpretation of S. Now, the
commonsense view is that mental states have the job of generating behaviour which
can be interpreted from the intentional stance, but the mentalistic interpretation is
parasitic upon the mental properties of these states. It is because they are mental that
we can successfully interpret their subject as having a mind. Fundamentally, Dennett
sees things the other way around: it is because we can interpret subjects as behaving
in accord with a more or less rational ascription of intentional mental states that they
count as having mental states at all.
But, of course, the notions of interpretation, intentional stance, predictive pur-
poses, etc. are one and all notions which are themselves fundamentally mentalistic.
This is formally obvious, but let's be clear how the problem arises. It is not that, as a
matter of fact so to speak, mental state ascriptions are parasitic upon behaviour which
can be interpreted mentalistically; the problem of naturalization is that we cannot
explain what mental states are without appeal to notions shot through with their own
mentalistic implications. You can't understand what a mind is unless you already
know what a mind is, since you can't understand mentality without understanding
the intentional stance, which requires you to already understand a host of essentially
mentalistic concepts.
Another approach to this is via the comparison of the case of the mind with that
of chemistry; the two are entirely dissimilar. One can imagine learning chemistry by
learning its naturalization along with a host of defined terms; at the end one would
really know what chemistry was. Of course, after this beginning, because of typical
problems of complexity, one would have to learn to think chemically to really get
anywhere in chemical studies. According to Dennett, you can't do this for the mind,
since you'd already have to know what a mind was to get the intentional stance.9
Dennett's approach in "Real Patterns" (Dennett 1991) suggests that interpretability
can stand as the objectively existing, mind independent reality of minds. It is just a
fact that certain systems can be interpreted via the intentional stance and this is true
whether or not any creature takes up the intentional stance itself and engages in any
real time interpreting.
But the paradox of consciousness that worries me stems from the fact that high
level features are mere epistemic resources that play no role in nature save insofar as
they are conceptualized by some conscious being. This too leads to a failure of natu-
ralization which threatens the SPW but it also leads to the much worse consequence
that consciousness cannot appear in the world save as recognized by a conscious
being. This consequence makes the SPW incoherent if it insists that consciousness
is a standard high level feature of the world.

9.4 Reflexive Consciousness

Could the problem be evaded if consciousness somehow served as its own epistemic
standpoint, looping back on itself and thus generating itself without necessarily
violating its own merely conservative emergence?
The idea that consciousness is reflexive or in some way necessarily self-referential
goes back a long way, at least to Aristotle who formulated an argument based on the
fact that we are conscious that we are conscious. In Book 3 of De Anima, Aristotle
writes
Since we perceive that we see and hear, it is necessarily either by means of the seeing that
one perceives that one sees or by another [perception]. But the same [perception] will be both
of the seeing and of the colour that underlies it, with the result that either two [perceptions]
will be of the same thing, or it [sc. the perception] will be of itself. Further, if the perception
of seeing is a different [perception], either this will proceed to infinity or some [perception]
will be of itself; so that we ought to posit this in the first instance. (Translation from Caston
2002)

The argument is somewhat elusive, but Aristotle evidently thinks that if conscious
states are not reflexive or self-representational then a vicious regress will ensue. This
follows only if every mental state is a conscious state in the strong sense that its
subject is aware of it. If some mental states are not conscious in this sense then no
regress is generated.10 Although the idea that mental states are essentially conscious
is also a very old idea with illustrious defenders, such as Descartes, we do not
nowadays balk at the claim that there are unconscious mental states. Nonetheless,
there is some intuitive pull to the idea that consciousness is essentially reflexive. After
all, whenever we notice that we are conscious we have an awareness of our mental
state and when we do not notice our own consciousness we of course have no sense
that we are conscious. But for all of that, we could be (I believe we are) mostly
conscious without having any awareness of our own consciousness. I think this is
also the perpetual state of most animals, many of whom are conscious beings though
very few have any sense that they are conscious.
Still, it is worth thinking about whether the reflexivity of consciousness would
have any mitigating effect on the paradox of consciousness. It seems to me this
is very unlikely because of a number of considerations. First, the kind of reflexiv-
ity which conscious states are supposed to possess enables, or embodies, a special,
indeed metaphysically unique, relation between the subject and the subject's own
consciousness, whereas the relation between subjects and their explanatory posits is
entirely neutral between self and other ascription. Thus the reflexivity of conscious-
ness and the system of conservative emergence are entirely distinct. In other words,
even if consciousness is essentially reflexive that will not show how it is conserva-
tively emergent from neural processes. The situation actually threatens to get worse,
because this mysterious feature of reflexivity puts an added explanatory burden on
the system of conservative emergence. One can easily imagine theories of mental or
neural representation which involve self-representing representations, and it does not
seem too difficult to imagine these self-representers being conservatively emergent.
But these won't dissolve the paradox since they are themselves stance dependent
explanatory aids to conscious beings bent upon understanding cognition. To solve
the paradox we need some kind of magic reflexivity which generates conscious-
ness. Perhaps it exists, but it will not fit into the system of conservative emergence
characteristic of the SPW.
Second, and related, the reflexivity of consciousness is supposed to be a primitive
phenomenological fact about subjectivity. The explanatory stance of emergence has
no particular relation with the phenomenology of consciousness and there is no reason
to suppose that simply taking up such a stance via the hypothesis that consciousness
is essentially reflexive could generate any kind of phenomenology.
Third, the only way reflexivity could evade the paradox of consciousness is if the
reflexive qualities of conscious states included the full machinery of an explanatory
framework in which conservative emergence makes sense. But this is extremely
implausible. The concepts of this framework are sophisticated, multifaceted and
numerous. To make their possession a necessary condition for consciousness would
put such a cognitive burden on conscious beings as to be quite incredible. That is,
even if we accept that consciousness is in some way reflexive this must be understood
so not to rule out rudimentary forms of consciousness, such as presumably exist in
many animals and human infants.
It is important to note here a subtlety in the argument. The paradox arises because
of the way the SPW integrates conservatively emergent features into its overall
view of the world. This integration requires there exist a standpoint of a conscious
explainer in order for emergents to have a role in the world. However, this argument
puts no constraints on the nature of consciousness itself. It does not, for example,
entail that all consciousness is sophisticated, curiosity driven theoretical thought or
that all conscious beings must possess concepts enabling such thought.
Finally, and in any case, from the point of view of the SPW the reflexive feature
of consciousness is simply another high level feature. As such, it stands as merely a
potential epistemic resource awaiting the intellectual gaze of some conscious being
bent upon understanding the world to have any role in the world. So even if reflex-
ivity could hold consciousness up by its own bootstraps, the reflexive aspect of
consciousness itself would remain unrealized until taken up by an explanation seek-
ing consciousness. That is, the reflexive loop which is supposed to solve the paradox
has no being save as an epistemological resource. This engenders a vicious regress.
And, unlike in the case of consciousness, there is no reason at all to suppose that
the reflexive loop itself is some kind of self-representational state and no evidence
whatsoever for this from our own experience of consciousness.
I think that the ultimate reason for the paradox comes down to the apparent fact
that consciousness is an intrinsic, non-relational property whose instances are
real time existents which stand as ontologically independent (though not causally
independent).11 In contrast, the properties which are conservatively emergent are in
a certain sense relational. What I mean is that conservatively emergent properties are
essentially structure dependent as befits their status as patterns which can stand as
epistemological resources in various epistemic projects. Consciousness has, or is, an
intrinsic nature which no conservative emergent possesses as such. Recall Galileos
words when he first observed the Milky Way by telescope: "the Galaxy is nothing
else than a congeries of innumerable stars". In general, conservative emergents dis-
solve into their fundamental constituents remaining only as patterns ready to serve
the epistemological and explanatory purposes of conscious beings bent upon under-
standing the world. It seems obvious that consciousness itself cannot be similarly
dissolved without destroying its real time, phenomenal, intrinsic and immediately
accessible reality.
Chapter 10
Embracing the Mystery

10.1 Watchful Waiting

Broadly speaking, there are only a few possible responses to the paradox of
consciousness. I cannot say I have much confidence in any of them but permit myself
some hope that the list of options considered in this chapter is complete, so that one
of them will provide the escape route. Three of the four routes are familiar; the fourth,
perhaps the most radical of them all, is more novel.
The first option is simply to conservatively insist that consciousness is a standard
conservatively emergent phenomenon that will eventually be fitted into the SPW in
the usual way. On this view, it is far too early to junk the scientific picture of the
world. If the problem persists and grows worse, more drastic steps can be taken, but
the history of science suggests that eventually the problem of consciousness will
resolve itself. Call this the option of Watchful Waiting.
Our discussion has focused on the paradox of consciousness but of course the phe-
nomenon of consciousness raises a host of additional worries. The general problem
of consciousness is that there seems to be no way to explain exactly how matter gen-
erates or embodies phenomenal consciousness given our present, and likely future,
understanding of the nature of the physical world. That is, the conservative emer-
gence of consciousness is entirely opaque to us. This issue has attracted sustained
philosophical attention for the last fifty years or so,1 under various labels, most
recently that of the problem of the "explanatory gap" (Levine 1983, 2001) and the
"hard problem" (Chalmers 1996, 1997). An enormous literature has ensued but there
is no sign of any consensus on a solution, on the exact specification of the problem
or even whether it is not some sort of philosophically induced pseudo-problem. But
one clear response in the face of epistemic opacity is to plead that our ignorance is
excusable and/or explicable. The most straightforward plea is simply that more time
is necessary for the study of the brain mechanisms responsible for consciousness,
as well as for the advancement of various requisite attendant disciplines, whatever
these might be. The depth of our ignorance is perhaps indicated by the fact that
candidate disciplines range all the way from physics to philosophy.

A more radical line is to endorse essential ignorance. This position has been
defended by Colin McGinn (1989) who claims that while the emergence of con-
sciousness is conservative it is impossible for us (humans) to understand the particular
mechanism of emergence at work in this case. We are, in McGinns phrase, cogni-
tively closed with respect to the problem of consciousness in much the same way that
a chimpanzee is cognitively closed with respect to calculus. McGinn stresses that
there is nothing metaphysically mysterious about the emergence of consciousness.
There might be creatures for which it is intelligible how consciousness appears as a
standard emergent from fundamental physical processes. Unfortunately, we lack the
intellectual chops to join that club.
A third way to endorse Watchful Waiting is to envisage some fundamental change
in our understanding of the physical world, or consciousness, which will permit new,
but recognizably legitimate, mechanisms of conservative emergence. There can be
no doubt that the explanatory structures within science have changed, sometimes
radically, over the course of its development. A salient example is the transmutation
in the doctrine of mechanism brought about by the acceptance of Newton's theory of
gravitation in the 17th century. The introduction of a non-contact force which acted
instantaneously over any distance was regarded with extreme skepticism. Newton
could never bring himself to believe in it and wrote in a famous passage: "that one
body may act upon another at a distance through a vacuum, without the mediation
of anything else, by and through which their action and force may be conveyed from
one to another, is to me so great an absurdity that I believe no man who has in
philosophical matters a competent faculty of thinking can ever fall into it" (Janiak
2004, p. 102).
Thus one might think that the problem of consciousness awaits some new, rev-
olutionary development in science which will unveil the unmysterious mechanisms
of conservative emergence. Perhaps this is what Noam Chomsky intends to suggest
with this analogy:
Suppose that a nineteenth century philosopher had insisted that chemical accounts of mole-
cules, interactions, properties of elements, states of matter, etc. must in the end be continuous
with, and harmonious with, the natural sciences, meaning physics as then understood. They
were not, because the physics of the day was inadequate. By the 1930s, physics had radically
changed, and the accounts (themselves modified) were continuous and harmonious with
the new quantum physics. (Chomsky 2000, p. 82)

Perhaps some similarly radical transformation in physics will allow for consciousness
to take its place as a standard conservative emergent. Although Chomsky is in the
end noncommittal about this with regard to consciousness, and the mind in general,
he does note that "[c]ommonly the fundamental science has to undergo radical
revision for unification to proceed" (p. 106).
The philosopher John Searle sees the problem of consciousness as akin to the
difficulties encountered when the fundamental science lacks the necessary resources
to explicate the mechanisms of standard conservative emergence. He says "the mys-
tery of consciousness today is in roughly the same shape that the mystery of life was
before the development of molecular biology or the mystery of electromagnetism
was before Clerk Maxwell's equations" (Searle 1992, pp. 101–2). This seems to
endorse the idea that we need a revolution in our existing sciences to enable us to
understand the emergence and nature of consciousness.
In a similar vein, Thomas Nagel writes that "the status of physicalism is similar
to that which the hypothesis that matter is energy would have had if uttered by a
Presocratic philosopher. We do not have the beginnings of a conception of how it
might be true" (Nagel 1974, p. 177). Famously, Nagel goes on to express grave
doubts that any advance in science will ever make it possible to understand how
consciousness could be a conservative emergent.
Although these thinkers have considered the idea that revolutionary advances in
science are needed to solve the problem of consciousness, one scientist has posi-
tively endorsed the view. In The Emperors New Mind, Roger Penrose argues that
consciousness can only be scientifically understood via a radical change in physics
which will do no less than introduce some features that transcend Turing computabil-
ity (see Penrose 1989). The reason Penrose offers for this is not the usual one which
regards the generation of phenomenal or qualitative aspects of consciousness as sci-
entifically inexplicable but rather, and notoriously, that Gödel's incompleteness result
entails that the intellectual power of the conscious mind somehow goes beyond the
power of algorithmic computation. The connection to consciousness arises because
only in the case of conscious thoughts in which the Gödel problem is appreciated
does the problem appear. Although the position is far from clear, writings with Stuart
Hameroff sometimes suggest that something like a quantum mechanical entangle-
ment based emergentism or, as they sometimes write, panpsychism is the correct way
to link consciousness with scientific theory (see e.g. Hameroff and Penrose 1996).
Note that it is not particularly strange to link emergentism and panpsychism. The
introduction of fundamental mentality opens the door to the conservative emergence
of more complex forms while avoiding the problem and paradox of consciousness
as we shall see below.
As Niels Bohr is reputed to have said, "making predictions is difficult, especially
about the future". The option of Watchful Waiting is a hostage to the fortunes of future
science. Will physics undergo a revolution that will introduce a consciousness-like
feature in its foundations? No one can say, but it seems far from likely. Nor do I
think defenders of the idea that consciousness is a standard conservative emergent
would be satisfied if physical science had to undergo that sort of reformation. Will
some conceptual revolution reveal what has been missed for so long: the key to
understanding that consciousness is a bog standard conservative emergent? This
cannot be ruled out, but of course we have no inkling of how this could work at
present. What is interesting is the sense that science as we know it is at a loss to
explain how consciousness could be a conservative emergent of purely physical
processes.
But there is something of a red herring quality to this issue. Although the hard
problem and explanatory gap are, in my view, deadly serious and very far from
being resolved I do not want to add to their discussion here. This is because I think
that the paradox of consciousness stemming from the account of conservative
emergence within the SPW creates a distinct problem for the defenders of Watchful
Waiting. No matter what new science or conceptual revolution occurs, the nature
and role of conservative emergents remains clear. They are causally (i.e. kausally)
inefficacious and exist solely as potential epistemic resources for conscious beings
intent upon understanding, explaining and predicting the world. As discussed in
the previous chapter, this means that the claim that consciousness itself is simply
another conservative emergent leads to incoherence. Only the total rejection of the
SPW can evade this problem. The nature of conservative emergence ensures this
since it requires that all higher level features depend upon fundamental features
which completely determine their natures.

10.2 Embrace Emergence

If Watchful Waiting is a non-starter, we should turn to options that in one way or
another reject the SPW itself. The obvious first choice is to Embrace Emergence, that
is, to reconsider the possibility of radical emergence.2 We can leverage the discussion
in Chap. 7 to provide a succinct characterization of the difference between conser-
vative and radical emergence. Recall that the doctrine of supervenience requires
that all high level features be determined by low level or fundamental features. The
basic form of a supervenience claim is expressed in the following formula (D5 from
Chap. 7) in which the property family U is supervenient upon the family T:

□(∀F ∈ U)(∀x)(Fx → (∃G ∈ T)(Gx ∧ □(∀y)(Gy → Fy)))

What is crucial here is the second necessity operator (□) which expresses the strength
of the necessity relation which holds between the low level, subvenient domain and
the high level domain (the initial operator serves only to express the modal status
of the entire claim). For present exposition we can simplify this formula. Consider
a single representative property, F, and neglect the outer necessity operator (plus
let us take for granted that G is an element of the appropriate subvening family of
properties). The simplified form is then:
Fa → (∃G)(Ga ∧ □(∀y)(Gy → Fy))
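To fix ideas with an illustrative instance (my example, not drawn from the text): let a be a particular puddle, let F be the property of being water and let G be the property of being composed of H₂O molecules. The simplified claim then reads: if the puddle is water, there is some subvening physical property, here being composed of H₂O, which the puddle possesses and which, with the indicated grade of necessity, suffices for being water.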

Now I need to re-complicate the formula to express explicitly the way emergence
is supposed to be a function of the constituents of the supervening feature. That is,
the canonical form of emergence is that a system depends on the nature and inter-
relationships of some more fundamental features or entities which together generate
the systems structure and properties. Bear in mind that the notion of constitu-
tion here is very loose and is not restricted to so-called part-whole or mereological
relations. For example, the property of being an uncle presumably supervenes on fun-
damental physical features but these features are not restricted to those fundamental
entities which are, in any ordinary sense of the term, constituents of the particular
uncle in question.
The needed complication is simply to bring in reference to the relevant con-
stituents, as follows:
Fa → (∃Φ)(∃z₁, z₂, …, zₙ)(Cz₁z₂…zₙa ∧ Φz₁z₂…zₙ ∧ □ₓ(∀x₁, x₂, …, xₙ)(∀y)(Cx₁x₂…xₙy ∧ Φx₁x₂…xₙ → Fy))

What does this say? In its unfortunately but unavoidably prolix fashion it says that
there exists a bunch of constituents of the system a (these are the zᵢ). C stands for
the appropriate relation of constituency and Φ is the relation in which the zᵢ stand
which subvenes F (we assume that, taken together, C and Φ express the laws of the
low level entities involved, that is, the zᵢ). The supervenience claim is recast in the
latter half of the formula to say that any system with constituents arranged according
to the relation Φ will have the higher level property F. Again, the crucial point is the
strength of the necessity operator, whose openness is suggested in the formula by use
of □ₓ. There are two settings which matter: straight-on full logical necessity versus
causal or nomological necessity. Conservative emergence aligns with the former,
radical emergence with the latter.
Why is that? Conservative emergence requires that the underlying fundamental
base totally determine the high level features in such a way that no modification of
the low level laws is required to generate all the behaviour in which the high level
features can participate. For example, if we fix all the atomic facts then the chemical
facts are thereby also fixed; there is no need to add any new distinctively chemical-
level principles, laws or activities in order for all the characteristically chemical-level
behaviour to be determined. That is what it means for the chemical to be a conservative
emergent from the underlying physical basis.
Those who Embrace Emergence tell a starkly different story. They agree that the
low level features have a set of laws corresponding to that domain. But they add
that under certain conditions, systems made up of low level entities will bring about
the emergence of entirely new features that will influence the behaviour of the total
system in ways not accountable purely from the low level.
Recall the thought experiment of the superduper computer simulation. Radical
emergentists deny that the simulation would perfectly duplicate any real world system
that possesses radical emergence, no matter how well the simulation mimics the lower
level. The novel emergent features will have their own causal efficacy which will
interfere with the low level processes, forcing them to drift away from the simulated
version whose behaviour is, by design, entirely governed by low level processes and
low level processes only.
Therefore, radical emergentism maintains there are possible worlds that differ in
their laws of emergence without supposing that there is any difference in the sub-
venient level laws governing these worlds. In terms of our formula this is expressed
entirely in the grade of the necessity operator so that two natural forms appear, one
where we use □ₗ, standing for full logical necessity, and the other which features
□ₙ, standing for nomological necessity. The former, corresponding to conservative
emergence, entails that there are absolutely no possible worlds in which a set of
constituents stand in some operative subvening relation but fail to have the associ-
ated supervening high level property. The latter, corresponding to radical emergence,
allows for worlds in which the various laws of emergence are different, or absent
altogether. The emergentist who believes in only conservative emergence thinks the
actual world is in the class of worlds that lack any laws of emergencethey are
completely unnecessary. The radical emergentist thinks that a world lacking laws
of emergence will not support all the high level features we actually find, notably
including consciousness. Their world is in essence like the time dependent Life
world we examined in Chap. 5 where the fundamental laws of Life will not be able
to generate all the phenomena that occur and they believe the behaviour of the actual
world similarly outruns what the fundamental laws of physics (plus of course the
fundamental physical state of the world) can generate strictly on their own.
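Written out explicitly (this merely fixes the operator in the schema displayed above; it is not a new formula), the two grades are:

Fa → (∃Φ)(∃z₁, …, zₙ)(Cz₁…zₙa ∧ Φz₁…zₙ ∧ □ₗ(∀x₁, …, xₙ)(∀y)(Cx₁…xₙy ∧ Φx₁…xₙ → Fy))

Fa → (∃Φ)(∃z₁, …, zₙ)(Cz₁…zₙa ∧ Φz₁…zₙ ∧ □ₙ(∀x₁, …, xₙ)(∀y)(Cx₁…xₙy ∧ Φx₁…xₙ → Fy))

On the first, conservative reading there is no possible world at all in which the constituents stand in Φ and F fails to appear; on the second, radical reading there are such worlds, namely those whose laws of emergence differ from or are absent from our own, even though the subvenient laws are held fixed.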
Radical emergence seems to be a coherent doctrine (see McLaughlin 1992),
although there have been attempts to show otherwise. For example, Thomas Nagel's
(1979) argument in favour of panpsychism depends on the denial of the possibility of
radical emergence. Nagel's argument is very succinct. The first premise is that "the
properties of a complex system must derive from the properties of its constituents,
plus the way they are combined" (Nagel 1979, p. 185, my emphasis). All emer-
gentists can agree with this (modulo the caveat about the liberal interpretation of
constituent).
The second premise is that true causes "do necessitate their effects: they make them
happen or make them the case" (p. 186). Here Nagel has to assume that the kind of
necessitation involved is full logical necessity. But this is highly implausible. What
can cause what depends on the laws of nature and various contingent factors. The
value of the fine structure constant in quantum electrodynamics does not appear
to be necessitated by any (other) law of nature and it has a powerful influence on the
causal powers of atomic structure and hence chemical kinds. As noted by Barrow
and Tipler, it transpires that the gross properties of all atomic and molecular systems
are controlled by only two dimensionless physical parameters: the fine structure
constant and the electron-to-proton mass ratio (1988, pp. 295–6). It seems pretty
obviously true that, one, there are causal processes in the actual world that involve
these parameters and, two, there are other possible worlds where the causal processes
are different because of variation in the values of these parameters.
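For reference (the gloss is mine, not Barrow and Tipler's): the two parameters are the fine structure constant, α = e²/4πε₀ħc ≈ 1/137, and the electron-to-proton mass ratio, mₑ/mₚ ≈ 1/1836. Both are pure numbers, and nothing in the remaining laws of physics appears to fix their values, which is just the contingency the argument here trades on.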
Nagel is right that once we fix the laws of nature and the state of the world then it is
a matter of pure logical necessity what is going to happen and what high level features
will appear (if the laws are intrinsically indeterministic then the range of outcomes
is logically determined). But this does not show that all emergence is conservative
emergence unless we assume that the laws of physics exhaust the fundamental laws of
nature.3 Since the radical emergentist explicitly denies this Nagel is simply begging
the question against this form of emergence. And, in principle, there does not seem
to be anything incoherent in the idea that there are irreducible laws of emergence
that go beyond the laws of fundamental physics.
Although the heyday of radical emergentism centered on the early 20th century,
the idea that there are irreducible laws of emergence has not entirely disappeared.
A very interesting recent example is Anthony Leggett's suggestion that complexity
of material constitution can modify or override the laws of quantum mechanics.
Leggett writes that "it is quite conceivable that at the level of complex, macroscopic
objects the quantum superposition principle simply fails to give a correct account of
the behavior of the system" and that any concrete proposal of this type "would require
us to introduce new physical laws which are not contained in the quantum formalism
itself" (Leggett 1987, p. 98). This can be read as endorsing a kind of radical emergentism,
and Leggett's further anti-reductionist remarks reinforce this interpretation. Note also
that, in line with our general discussion of radical emergence, Leggett's idea would be
empirically testable. In fact, Leggett has engaged in and spurred highly interesting
experimental efforts to realize macroscopic, or almost macroscopic, devices that
can be put in superpositional states. Thus far, emergentism has not fared well. In
the year 2000, a superconducting ring about the size of a human hair was put in a
state with superposed bi-directional electric currents in accordance with quantum
mechanics (see Friedman et al. 2000). Leggett admitted to the New York Times that
the experiment provided "reasonably foolproof evidence you do have a superposition
of macroscopic quantum states" (Chang 2000).
Galen Strawson has also argued against the coherence of radical emergentism
(Strawson 2006). His argument depends on the claim that it is built into the heart of
the notion of emergence that emergence cannot be "brute in the sense of there being
absolutely no reason in the nature of things why the emerging thing is as it is" (p. 18).
This is true if the "nature of things" includes the laws of emergence which the radical
emergentist posits. If the "nature of things" is however restricted to the fundamental
physical laws then emergence will be brute but not in any way that would or should
disturb a radical emergentist. Is there something incoherent about a brute law of
nature? If there is then there are no fundamental laws at all, but there seems to be
nothing unintelligible about basic, non-derivative relationships between things.
Strawson also argues against the conservative emergence of phenomenal con-
sciousness on the basis that the experiential and non-experiential are so disparate
that there is no explanatory relationship between them which could intelligibly reveal
exactly how consciousness is generated by matter. Although such an argument is not
relevant to our present concerns, one might wonder whether Strawsons tactic could
be adapted in an argument against radical emergence. Could one argue that even
irreducible laws of emergence must relate features which are, in some sense, not
too different from each other? There is then nothing wrong with brute relations
between certain elements of physical reality but, one might argue, there cannot be
irreducible relations of emergence across ontological categories. Or, to put it another
way, there would be no inter-level brute emergence; intra-level primitive determina-
tion relations could then be regarded not as emergence but causation. Bear in mind,
however, that in light of the discussion of Chap. 7 there is little difference between
causation regarded as temporal supervenient determination and radical emergence.
Perhaps there is some interest in this idea, but absent a clear account of the nature
of ontological categories and an independent argument that ontological borders
are necessarily closed, the suggestion once again begs the question against radical
emergentism.
However, there is a very great distance from the claim that radical emergence is
coherent to the claim that the actual world exemplifies it. The first three chapters
of this book stand as powerful evidence that there are no empirical signs of radical
emergence. One can regard the history of science as a failed search for such emer-
gence or, equally, as a triumphant progress towards ontological unity coupled with
ubiquitous conservative emergence. Chap. 6 then argued that there is no theoretical
reason to think the world exhibits any radical emergence. These lines of evidence
are not, of course, anywhere near conclusive. Our ignorance of the natural world
still completely dwarfs our knowledge. Nonetheless, the scale of our knowledge is
staggeringly large and comprehensive. So it's passing strange that radical emergence
should exist but be completely invisible to us despite the depth and range of our sci-
entific investigation of the world. Although emergence has become something of a
buzzword, it does not take much study to see that its current scientific defenders one
and all mean to champion the conservative form (for a sample see Anderson 1972;
Schweber 1993; Laughlin 2005; Holland 1998; Morowitz 2002).
There are more fundamental reasons to doubt the existence of radical emergence.
In Chap. 7 we saw that if a theory is total then it can only allow conservative emer-
gence. Recall that the hallmarks of a total theory, T, were the trio of features called
completeness, closure and resolution which were defined as follows: Completeness
is the doctrine that everything in the world has a non-trivial T-description and as such
abides by closure and resolution. Closure entails that there are no outside forces:
everything that happens, happens in accordance with fundamental T-laws so as to
comply with resolution. Resolution requires that every process or object be resolv-
able into elementary constituents which are, by completeness, T-entities and whose
abidance with T-laws governing these constituents leads to closure. It seems clear
that physical theory at least aims to be a total theory. It has been constructed so as
to home in on a system of fundamental entities and forces which certainly appear
to provide completeness, closure and resolution. It has had persistent success at this
endeavour even in the face of regular setbacks, some of which required revolutionary
reconstruction of its foundations. The outcome of these episodes of rebuilding has
always been renewed strength and a strengthened sense of completeness, closure and
resolution.
It is true that the one-time dream of an accessible epistemic reduction of all
phenomena to basic physics (if it was ever even half-heartedly entertained) has
long since been relegated to the dustbin of the history of philosophy. But that is not
the issue on which radical emergentism stands or falls. What matters is the possibility
that in (the future of) physics we can attain to a total theory.
Radical emergence faces a series of basic problems insofar as it conflicts with the
totality of physics. Physics says that energy is conserved in closed systems but radical
emergence requires that systems complex enough to initiate emergence deviate from
the behaviour the system would exhibit if governed solely by physical law applied to
the systems constituents and their interactions. This alteration in behaviour is going
to require some redistribution, creation or destruction of energy that will appear at
odds with the fundamental physics of the situation. Again, this is not a problem of
logical coherence. Even if energy conservation is sacrosanct4 this would not imply
that radical emergence is false. For it is possible to imagine that the laws of radical
emergence add and subtract energy here and there so as to retain the overall balance
(see McLaughlin 1992 for a discussion of this issue). But there is no reason for
the emergence generated changes in energy distribution to be subtle or to be hard
to measure, yet there have been no traces of such phenomena. The same problem
could arise for any conserved quantity that affects the motion of physical matter
(momentum, angular momentum, charge and so on).
Radical emergentists are placed in the uncomfortable position of either denying
that these conservation laws hold or rather gratuitously hypothesizing that radical
emergence operates so as to miraculously patch up any discrepancies with the distri-
bution of conserved quantities as determined by the fundamental physical processes
alone. Both horns of this dilemma are extremely unattractive.
Yet another difficulty facing radical emergence is that it appears as an ad hoc
hypothesis invoked solely to explain the presence of consciousness in the world.
This is not how the progenitors of emergentism saw things. Emergentists from Mill
onwards saw radical emergence as a pervasive feature of the world operating at the
very first level, so to speak, above physics. To the extent that it is now very difficult
to argue that chemistry or biology exemplify radical emergence, the only phenom-
enon which remains a viable candidate for emergence is consciousness. But why
should the world have waited so long to exploit its power of radical emergence?
Consciousness, with its subjective, first-person aspect, is special. However, this spe-
cialness does not in any way suggest that radical emergence is the unique or best way
to account for consciousness. It seems that by its nature, radical emergence should
appear throughout nature but so far as we can tell it appears nowhere except in an
ever shrinking domain where we feel at a total explanatory loss. This makes radical
emergence look like a dodge invoked only to plug a gap in understanding with no
independent evidence for it whatsoever.5
Despite all the foregoing, the option of Embracing Emergence does have the
advantage that it can solve the problem of consciousness and avoid the paradox of
consciousness. The basic emergentist story is that when certain physical systems
attain sufficient and appropriate complexity, as in the human brain, there emerges a
novel feature, consciousness, with all its attendant phenomenal qualities. This new
property of the brain is lawfully related to its subvening brain state but is not deter-
mined solely by the underlying physical state and purely physical law. It has its own
causal powers which can act back on the physical world. This is genuine downward
causation, not the anodyne simulacrum allowed by conservative emergence. There
is no sense in which this new feature is a mere epistemic potential or pattern; it has
its own stance-independent being. Thus it seems to me that the option of Embracing
Emergence is preferable to that of Watchful Waiting whose allegiance to the SPW
precludes any real solution to the paradox of consciousness.

10.3 Favour Fundamentality

Still, there might be better alternatives equally able to solve the problem without
suffering from the rather large number of substantial difficulties which face radical
emergence. A venerable strand of thought about the mind opposes emergence without
endorsing the SPW by making consciousness fundamental. To adopt this option is, I
will say, to Favour Fundamentality. A number of familiar metaphysical approaches to
the mind-body problem can be grouped under this umbrella option: neutral monism,
dual aspect theory and traditional panpsychism. Aficionados will balk at such a crude
assimilation and indeed there are important differences among these views as well
as many possible sub-varieties of each (see Stubenberg 2008) but such details will
not matter to our discussion here.
Panpsychism is the most straightforward theory. It maintains that every basic
entity has some mental properties and these properties are ontologically fundamen-
tal. Generally speaking, panpsychism accords the same status to the physical, leaving
mind and matter as equally fundamental. Panpsychism is the natural foil of radi-
cal emergentist views. As we shall see, panpsychism is in some ways the minimal
modification of the SPW which provides a solution to the problem and paradox of
consciousness.6
Contrary to the strict dictates of panpsychism, neutral monism holds there is an
underlying neutral foundation of reality to which both mind and matter reduce in
some sense but which is itself neither mental nor physical. The doctrine is well
expressed by one of its early champions, Bertrand Russell:
The stuff of which the world of our experience is composed is, in my belief, neither mind
nor matter, but something more primitive than either. Both mind and matter seem to be
composite, and the stuff of which they are compounded lies in a sense between the two, in
a sense above them both, like a common ancestor. (Russell 1921, p. 7)

The neutral remains ineffable and utterly mysterious, for we have no concepts save
those which apply to the way the neutral appears to us and our instruments. According
to this view neither mind nor matter are truly fundamental but they are so to speak
relatively co-fundamental (nothing we have access to explains them and neither
is nearer to the neutral than the other). In the face of the need to provide some
further theoretical articulation of the view and some characterization of the neutral
itself, neutral monism faces the constant danger of slipping into either materialism
or panpsychism. For example, the self-proclaimed neutral monist William James
eventually remarked in a notebook that "the constitution of reality which I am making
for is of the psychic type" (see Cooper 1990 for provenance and discussion). And Russell
declares, to my mind mystifyingly, that sensations are examples of the neutral.7
Dual aspect theory regards mind and matter as two equally co-fundamental aspects
of an otherwise ineffable and inaccessible reality. Although this sounds much like
neutral monism the theories should not be confounded. There is no canonical for-
mulation of either view and they are frequently lumped together but despite being
obviously closely related, we can, following Stubenberg, usefully distinguish them.
Dual aspect accounts maintain that every element of fundamental reality is repre-
sented or expressed in both the mental and physical aspect, and deny that there is a
reduction of the mental or the physical to this more basic ur-reality.
It seems to me that dual aspect theory solves the problem of consciousness by
merely sidestepping it. The pure form of dual aspect theory is a parallelism
akin to that of Spinoza in which mind and matter are serenely and utterly independent
of each other.8 No problem of consciousness can arise because all linkages between
the physical and mental realms have been severed. On this view, we necessarily have
access only to the mental realm and this in turn suggests that the whole physical
side of the equation is an idle hypothesis. It is then hard to stop short of some form
of idealism in which the physical realm is re-integrated with the mental as a mere
construction out of mental building blocks. I do not mean to suggest that idealism is
not a legitimate metaphysical position but it fails to pay sufficient attention both to
commonsense belief in a robustly non-mental realm of being and to the incredibly
beautiful, vastly expansive and explanatorily powerful scientific picture of the world.
A solution to the problem of consciousness that disengages mind from the world
that science presumes to study fails to get to grips with what makes that problem
interesting in the first place.
Neutral monism and panpsychism hold out the hope of more honest solutions,
but in my opinion the latter more so than the former. Neutral monism posits an
unknown something-or-other out of which emerges both matter and consciousness.
Some versions, often labeled Russellian in honour of their inspirational source, posit
some unknown intrinsic properties of the world as revealed by physical science which
account both for the structural relations to which our physical science is limited and
from which consciousness can also emerge (see Lockwood 1989 and Stoljar 2006
for modern discussions). As mentioned above, such views have a tendency to either
privilege the physical over the mental insofar as the unknowable intrinsic feature can
be regarded as a kind of physical property, or to slide towards panpsychism insofar
as the neutral is taken to be more akin to mentality.
Panpsychism abjures the hypothesis of the unknowable background of the neutral.
Reality is instead exhausted by the two known co-fundamental categories of being:
matter and consciousness. Panpsychism agrees with traditional neutral monism that
neither mind nor matter is more fundamental than the other. But it avoids positing
the mysterious relation of reduction by which both consciousness and the physical
realm somehow emerge from the neutral.
There is a straightforward argument in favour of panpsychism whose form can I
think be traced back at least to the Presocratic philosophers who faced the general
problem of emergence in the 5th century BCE (see Mourelatos 1986). More recently,
Thomas Nagel has presented a succinct and elegant version which I adapt here
(Nagel 1979; the argument is also endorsed in Strawson 2006). Nagel's argument is
naturally integrated into our discussion of emergence. It asserts, first, that the only
sort of emergence which is possible is conservative emergence, but, second, that it
is impossible that consciousness should be a conservatively emergent feature of the
world. In support of the second premise, Nagels basic strategy is to appeal to the
explanatory gap that lies between matter and consciousness. Conservative emergence
implies that there is, in principle, an explanation of the mechanisms of emergence,
but there is no such explanation in the case of the transition from brute matter to
consciousness. With respect to the first premise, why does Nagel also believe that
radical emergence is impossible? He does not provide any argument for this and,
as we have seen above, it is hard to see why radical emergence is incoherent or
metaphysically impossible (even if we cannot point to a single non-controversial
example).
If we grant for the sake of the argument that the only coherent doctrine of emer-
gence is restricted to conservative emergence and if we follow Nagel in the claim
that consciousness does not or cannot conservatively emerge from physical struc-
tures and processes then the conscious mind cannot be an emergent from the physical
world. Yet, Nagel proceeds, mental properties are properties of organisms which are physical entities. Thus, "if the mental properties of an organism are not implied by any physical properties but must derive from properties of the organism's constituents, then those constituents must have nonphysical properties from which the appearance of mental properties follows when the combination is of the right kind" (Nagel 1979, p. 182). If these properties (the properties of the constituents) are not themselves mental then the problem of emergence will simply reappear for them. Thus the constituents' properties must be mental. Furthermore, Nagel claims, since we can build an organism out of any kind of matter, all matter must possess these elementary mental properties.9 That is, panpsychism is true.
This vision of panpsychism accepts the SPW almost to the letter, with the one
caveat that science has missed some of the basic properties which characterize its
own fundamental entities. It accepts the general story of emergence which the SPW
propounds with, again, the single reservation that some emergents are the result of
the elementary mental features of matter via, in Nagel's words, a kind of "mental chemistry" (1979, p. 182). Panpsychism thus pays its respects to the SPW by making
the minimal change needed to accommodate consciousness. This is odd for a view that denies physicalism, but in essence both the methodology and the general world view of the SPW are accepted by panpsychism. The overall coherence, power, elegance and
motivation of the SPW can be accepted with only a small, albeit crucial, alteration
by the panpsychist.
Despite these virtues, panpsychism gets little respect from philosophers. For
example, John Searle describes panpsychism as an "absurd" view and (question beggingly) asserts of examples such as thermostats that they "do not have enough structure even to be a remote candidate for consciousness" (Searle 1997, p. 48). Colin McGinn labels panpsychism either "ludicrous", for a strong version which asserts that everything has full-fledged consciousness, or "empty", for a weak form which asserts only that everything is made of physical stuff which has the capacity to be a constituent of a system with mental properties (McGinn 1999, p. 95 ff.). Leaving
aside the fact that given the view that complex consciousness emerges from elemen-
tal consciousness there is no need to hold that everything possesses consciousness
(and very few panpsychists have asserted this), why is panpsychism the object of
such scornful, low-content ridicule?
While there are many objections to panpsychism, which is of course a highly
speculative doctrine, I think the principal reason that philosophers make fun of it
is methodological. Panpsychism has, in the cutting phrase of Thomas Nagel, the "faintly sickening odor of something put together in the metaphysical laboratory" (Nagel 1986, p. 49). Once the SPW is reasonably well articulated there is a natural
philosophical project of completing it and deploying it in the overall project of meta-
physics, which is, roughly, "to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term" (Sellars 1963b, p. 1).
Given the cultural prominence of science, philosophers have strong motivations to
devise a world view fully consonant with the SPW. To endorse panpsychism is to
admit the failure of this project, somewhat like a climber giving up and taking a
helicopter to the summit.
The project of completing the SPW is an undeniably worthy one and no one
can say with certainty that it is unattainable. But I have argued that the option of
Watchful Waiting is not going to be able to solve the paradox of consciousness. This
at least suggests that any effort to save the SPW will require a complete overhaul
of its metaphysics which will, first, somehow undercut the argument that high level
phenomena are epistemic or explanatory potentia without any independent being
and, second, find a way to understand high level phenomena as genuinely efficacious
without undermining the totality of physics (that is, its commitment to completeness,
closure and resolution). I think achieving all this will be a very tall order.
To take one obvious difficulty, most if not all high level features are vague. It
is hard to see how vagueness could be an objective property out there in the world,
especially if the underlying ontology, given by fundamental physics in whatever form
it will eventually take, is fully determinate.10 On the other hand, it is equally difficult
to think about high level objects without imputing vagueness to them. Consider the
property of being a mountain. How could it be that there is some critical piece of
information which settles whether some object at the borderline between mountain
and hill is really one or the other? It seems intuitively obvious that we could know
everything about the object's height, the heights of surrounding objects, all social and linguistic facts relevant to the word "mountain" and still be faced with a borderline
case of mountainhood.11 It is easier to believe, in accord with the standard SPW,
that our concepts are indeterminate, a natural inheritance from their source as mere
explanatory aids to be applied as needed and strengthened or weakened at our whim
(as in the example of Pluto's demotion to "dwarf planet").
But even supposing there were vague objects we also need to provide them with
genuine causal efficacy to avoid generalized epiphenomenalism. It is very hard to
believe that there is some actual causal power which mountains possess but which
borderline non-mountains do not. What could it be? On the other hand, if there were
such a litmus test for category inclusion we could use it to definitively answer the
question whether some large lump of matter was a mountain or not thus presumably
handing victory to the epistemicists (see note 11 above) about vagueness after all.
This possibility strikes me as ludicrous.
The central point is that without the standard SPW this issue and a myriad of others
turn into bizarre scientific questions. One is at a complete loss as to exactly how they
could be tackled by science. Not only does there seem to be no way to articulate any
scientific project of determining what is a genuine mountain, it is a project which
is obviously scientifically completely worthless. Questions of this nature may well
be genuine questions. There is no doubt that the proper understanding of vagueness
is an extremely difficult problem. But on the standard SPW, they are properly left
to philosophy with the tacit understanding that the philosophical solution will fit
smoothly into the SPW and may draw inspiration and crucial empirical data that bears
on the issue from ordinary science. It is not a scientific question, for example, whether
ordinary objects such as chairs, mountains and tectonic plates exist or not. Notice
that I include a scientific concept in this list. There is, of course, a scientific question
whether tectonic plates exist which has already been answered in the affirmative.
These are questions that come up once the epistemic resources of modern geology
have been marshaled. The philosophical question is of a different order, one rather
more directed at the status of the conceptual system as a whole.
If the Watchful Waiting option thus requires a foray into radical metaphysics then,
in a way, Favouring Fundamentality actually holds truer to the spirit of the SPW than
Watchful Waiting. But can it solve the problem and paradox of consciousness? Aside
from panpsychisms supposedly obvious implausibility it faces some serious objec-
tions of which only one need concern us here (for a more comprehensive discussion
see Seager and Allen-Hermanson 2008).
According to this objection panpsychism is simply emergentism in disguise. The
problem goes back to William James in his criticism of a form of panpsychism which, with characteristic aptness, he called the "mind dust" theory. The worry is that even if we suppose that consciousness is a fundamental feature of the world we still have to explain how complex minds, the only sort we have any acquaintance with, come into being and we have to explain their relationship with the elemental mental
features. This looks like a job for emergence, but how? As James complains: "Take a sentence of a dozen words, and take twelve men and tell to each one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence." (James 1890, p. 160). It is worth quoting at some length James's specific worries
about consciousness:
Where the elemental units are supposed to be feelings, the case is in no wise altered. Take a
hundred of them, shuffle them and pack them as close together as you can (whatever that may
mean); still each remains the same feeling it always was, shut in its own skin, windowless,
ignorant of what the other feelings are and mean. There would be a hundred-and-first feeling
there, if, when a group or series of such feelings were set up, a consciousness belonging to
the group as such should emerge. And this 101st feeling would be a totally new fact; the
100 original feelings might, by a curious physical law, be a signal for its creation, when they
came together; but they would have no substantial identity with it, nor it with them, and
one could never deduce the one from the others, or (in any intelligible sense) say that they
evolved it. (p. 160, my emphasis)

Evidently, James has doubts that Nagel's "mental chemistry" even makes sense in the
case of consciousness. But notice he also considers the option of radical emergence
wherein the new emergent feeling is directly caused to occur by the conglomeration
of the simple feelings. James does not say this is impossible but that it will not provide
any way to deduce the one from the other (this is one of the hallmarks of radical
emergence). That is, there is no way to see this process as a case of conservative
emergence.
This is the objection: the only way to put together a panpsychist theory on the
model of the SPW requires the admission of radical emergence, which is directly
contrary to that model. The way that panpsychism is supposed to mimic the opera-
tion of the SPW turns out to be illusory. Furthermore, if panpsychism requires the
admission of radical emergence then why not simplify the theory and let complex
consciousness radically emerge directly from the physical?
It may well be possible to counter this argument. James has a very narrow conception of conservative emergence which is very distant from what is recognized in modern science and especially quantum mechanics (as discussed above in Chap. 6, pp. 150 ff.). James's view of conservative emergence stems from a picture of the physical world more akin to C. D. Broad's conception of extremely austere mechanism, as discussed in Chap. 5, than that of the SPW. I have argued elsewhere that the role
of information in quantum entanglement might serve as a model for a kind of con-
servative emergence of complex mental states out of elemental forms (see Seager
1995).
But even supposing we accept that there is a viable form of conservative emergence
available to the panpsychist, this is just to jump from the frying pan into the fire. For
conservative emergence is all that is required to rob complex consciousness of its
efficacy and to generate the paradox of consciousness. The panpsychist hypothesizes
that there are elemental mental properties which belong to the fundamental physical
entities of the world. If these elemental features have their own causal powers (that
is, are not themselves epiphenomenal) then by the logic of conservative emergence
they will usurp efficacy from the complex conscious states which they subvene.
Furthermore, if complex consciousness conservatively emerges in anything like the
standard way endorsed by the SPW, albeit via "mental chemistry", then the paradox
looms, since the natural way to regard conservative emergents is as stance dependent
conceptual/epistemic resources.
There is an elegant way around this problem available to the panpsychist. Recall
in our discussion of conservative emergence we allowed for the possibility of the
emergence of new "large simples": entities which are the lawful consequence of the interaction of initial constituents but which absorb or supersede these to become self-standing entities in their own right (see Chap. 7, n. 23).12 This possibility does
not violate the strictures of conservative emergence nor those of the totality of the
underlying theory (completeness, closure and resolution). Such large simples would
show up in a simulation restricted to the state and laws of the fundamental account.
Perhaps within the realm of classical general relativity black holes are examples.
They arise via physical law from a set of elementary progenitors but take on their own
identity as fundamental entities characterized by only three basic physical properties:
mass, charge and angular momentum. Now, this is nothing more than an example
because general relativity is not the final physics and it seems very likely that whatever
the final physics is it will assign constituent structure to black holes (strings on the
black hole boundary, or something). Nonetheless the example is highly illuminating
insofar as it shows there is nothing incoherent about the conservative emergence of
large or macro simples.
The panpsychist can then argue, very speculatively indeed, that the elementary
mental features associated with the elementary physical features of the world can,
when combined in the appropriate way, as in for example human brains, generate
a new large simple mental entity. It is such entities which are the foundation of the
sort of complex consciousness with which we are familiar in introspection. As a new
simple, with no mental constituents (though with a history of formation from a set
of elemental mental features), this entity can have its own efficacy without violating
the strictures of conservative emergence.13
But note that Watchful Waiting cannot avail itself of this argument to save the
SPW's requirement that all emergence be conservative emergence. It is true that
large simples can conservatively emerge but without any elemental mental features
to draw on no mentalistic large simple can emerge save by radical emergence.
Of course, it remains true that the behaviour of the world envisaged by the panpsy-
chist will be in principle empirically distinguishable from that of a purely physical
world. In terms of our simulation thought experiment, the purely physical simulation
would fail to duplicate the actual world's evolution if this sort of panpsychist account
is correct. On the principle that one might as well be hung for a goat as a sheep, I
regard this as a virtue of the account and in fact it seems to be virtually a require-
ment to avoid the paradox of consciousness. Without empirical distinguishability the
arguments for generalized epiphenomenalism will reappear in the mentalistic realm
posited by the panpsychist. This is not to say that there is any practical prospect of
an experiment which would test panpsychism. We should not expect the elemental
mental features to necessarily have any measurable effects at the micro-level, any
more than we expect to be able to detect the gravitational field of a single electron.
At the macro-level, there is no way to definitively tell whether the human brain, for
example, ever violates any physical laws such that its state evolution diverges from
that which would result from the operation of a purely physical model of it. There is
all the abundant indirect evidence that the brain is a physical system like any other,
albeit of immense complexity, but it would be hard to imagine any experimental test
which could directly verify this short of demonstrated non-physical parapsycholog-
ical effects.14
Both the options of Favouring Fundamentality and Embracing Emergence have the consequence that, in addition to any perceived metaphysical defects of the SPW,
physical science provides an empirically false account of the world which, in principle
at least, could be revealed via standard experimentation. This is to their credit as
genuine hypotheses about the nature of the world. But they face up against the
long and surprisingly smooth history of explanatory success which the SPW has
enjoyed. After some five centuries of concentrated effort our science has come to
encompass virtually all of reality with which we can empirically grapple. There is
no sign whatsoever that this pattern of success will not continue and deepen as we
unravel how the cosmos began under the dictates of a fairly small set of basic physical
entities, laws and processes. Even more impressive is the hierarchy of conservatively
emergent higher level features: properties, entities and laws. Each element of this
hierarchy appears to connect to the basic level in intelligible ways with multiple lines
of explanatory interconnection. Although far from complete and facing impossible
problems of sheer complexity, the overall picture of the world which science has
developed and which was outlined in Part I gives every appearance of a seamless
whole. One would have to be very brave or perhaps foolhardy to bet against the
continued smooth growth of the SPW.
10.4 Modify Metaphysics

It is thus especially interesting that there is a way to accept the explanatory power of
the standard SPW without endorsing Watchful Waiting. This way avoids the paradox
of consciousness and sidesteps the problem of phenomenal consciousness. Label this
approach: Modifying Metaphysics.
The modification in question stems from the observation that there is an implicit
assumption lurking within all the other options and the SPW itself. The assumption is
scientific realism, which is, roughly, the thesis that science aims at and is providing the
truth about the deep structure of the world. It is obvious that the option of Watchful
Waiting endorses scientific realism. It goes so far as to identify the scientific project
of the SPW with the metaphysical quest to discover the ultimate nature of reality.
According to it, it is only a matter of time before all features of reality will take their
rightful place within the system of conservative emergence founded on the ultimate
reality which is the basic physical structure of the world.
But why do I say that Embracing Emergence and Favouring Fundamentality also
endorse scientific realism? Because both these views take it as a given that con-
sciousness has to be integrated with the SPW with as little disturbance as possible.
It is admitted that both of these options deny that science tells us the whole truth
about the world. According to them science has either missed the existence of radi-
cal emergence or the existence of an extra set of fundamental mental features of the
world. But they both nonetheless set themselves up as ways to integrate conscious-
ness into a minimally tweaked SPW. At bottom, both options agree with the core of
the philosophical response to the SPW, which essentially involves, in the words of Bas van Fraassen, a strong "deference to science in matters of opinion about what there is" (van Fraassen 2002, p. 48).
It is possible instead to interpret the difficulties the SPW has with consciousness
as indicating not the need for some extra parameter we can bolt onto it but rather
as hinting that the core assumption of scientific realism is dangerously overstressed
and cannot bear the theoretical weight the problem of consciousness loads upon it.
Obviously, I cannot lay out a fully developed anti-realist or arealist account of science
here. But drawing on the substantial work of Bas van Fraassen (1980; 2002), with
more indirect help from John Dupré (1993) and Nancy Cartwright (1999), we can
outline how the problem of consciousness is transformed when we reject scientific
realism.15
The most fully worked out anti-realist position is the constructive empiricism
of van Fraassen. Fundamentally, his account of science replaces the quest for truth
with the search for empirical adequacy which is the epistemic virtue of saving the
phenomena. Empirically adequate theories make correct claims about observable
reality, notably the readouts of our instruments, but also about the whole range of
phenomena that underpin modern technology. Typically, scientific theories also make
claims about unobservable entities but the anti-realist puts no credence in these, or at
least does not hold that the empirical success of the theory in the observable domain
should necessarily lead one to accept claims about the unobservable.
Such an account of science does not rule out belief in unobservables, nor does it
deny that the empirical success of science might provide evidence in favour of its
claims about the unobservable. In fact, it is obvious that science does provide such
evidence. Within philosophy there is a long standing debate about the upshot of this
evidence. To my mind there is a pre-existing intuition that unobservable entities exist
and this means that the empirical success of science which postulates these entities
deserves considerable, but limited, epistemic respect.
I don't think, however, that this forces us to embrace the SPW. Instead, I think one
should regard the scientific enterprise as one of model building with the object of the
model being to generate correct descriptions and predictions of observable and mea-
surable phenomena. These models are usually idealized and simplified theoretical
representations of some portion of reality. The scale of such models is breathtak-
ingly wide, ranging from ultra-microscopic vibrating string-like entities to the entire
universe.
The SPW lays claim to the idea that there is what might be called a "total model": one in which, as its defenders see things, every facet of reality is included. Of course,
this is not the claim that there is any practical prospect of our deploying such a model
to represent any significant region of space and time. We know that the overwhelming
complexity of even very tiny systems absolutely precludes any such thing. But that
is not the point of the exercise, which is philosophical and not scientific. The idea is
that the concept of a total theory is coherent and this underwrites the metaphysical
possibility of a complete model.16
The option of Modifying Metaphysics as I see it need not and does not deny the
coherence of the idea of a total model in this philosophical sense, that is, a model
which in principle is entirely empirically adequate. What it denies is the step from
the model to the claim that it accurately represents every feature of reality or captures
the metaphysics of genuine efficacy. The paradox of consciousness shows that it does
not and cannot.
The SPW seems to be very similar to what Nancy Cartwright calls "fundamentalism" about physics. I'm not completely sure about this because, as we have observed with other thinkers, it is actually hard to tell whether she is making an epistemological or ontological point. Only the latter could threaten the SPW. Cartwright characterizes fundamentalism as the view that all facts must belong to one grand scheme and moreover that "this is a scheme in which the facts in the first category have a special and privileged status. They are exemplary of the way nature is supposed to work. The others must be made to conform to them" (Cartwright 1999, p. 25). Although
Cartwright describes the first category of facts rather broadly, as those that are "legitimately regimented into theoretical schemes" (p. 24), and does not spell out exactly
what she requires for one fact to conform to the set of first category facts, there is a
clear affinity with the SPW. What is exemplary of how the world works is the way the
world works according to fundamental physics and all other facts must conform to
the fundamental physical facts in the sense that the former are completely determined
by the latter via the completeness, closure and resolution of the total theory of final
physics.
The option of Modifying Metaphysics emphatically agrees that this is the core
mistake which both engenders the problem and paradox of consciousness and makes
them completely intractable. But denying this mistaken understanding of the nature
of science is compatible with the possibility of an in principle model which is empir-
ically adequate.
It is thus somewhat curious that Cartwright sometimes appears to deny the in
principle existence of the total model which the SPW endorses. She uses an example
of a thousand dollar bill dropped out of a high window. Obviously, there is no prac-
tical, usable model in any science, or any combination of sciences for that matter,
which will predict where the bill will land with precision. She says that the funda-
mentalist will insist that there is in principle (in Gods completed theory?) a model
in mechanics for that action of the wind, albeit probably a very complicated one that
we may never succeed in constructing (p. 27). The defender of the SPW certainly
agrees with the fundamentalist here. We can use our usual metaphor here: is there
an in principle possible computational simulation (under relaxed computational con-
straints) of the physical system in question which will accurately track the trajectory
of that thousand dollar bill?
Does Cartwright actually deny the possibility of the total model? Her discussion
is maddeningly elusive. Of fluid mechanics (in relation to the thousand dollar bill
problem) she says it "does not have enough of the right concepts to model the full set of causes, or even all the dominant ones" (p. 27). But the SPW uses God's ultimate
model, that is, fundamental physics, from which fluid mechanics conservatively
emerges and is quite distant from the basic level.
Is it possible to read Cartwright as endorsing either the Favouring Fundamentality
option or that of Embracing Emergence? Both entail that fundamental physics is not
a total theory. It either fails to include primitive causally efficacious features that need
to be added to the fundamental level of reality or fails to include causally efficacious
radically emergent high level features that would not appear as consequences of the
pure model generated by basic physics. Cartwright explicitly accepts the existence
of unobservable entities postulated by scientific theory. She has written that her book
The Dappled World (Cartwright 1999) "defends scientific realism; well-supported claims of science have as good a claim to truth as any" (Cartwright 2001, p. 495). But she goes on to add that while there are ways to reconcile how successful science is practiced with the metaphysics of the single unifying theory, "[t]he one great theory is not incompatible with the evidence; it is just badly supported by it" (p. 495).
The practice she has in mind is the way scientists deploy a grab bag of disparate
theories and technologies in the design, setup and explanation of their experiments. I
think that the reconciliation strategies alluded to involve the commitment to conser-
vative emergence of the high level features to which these disparate theories appeal
coupled to the uncontroversial appeal to the staggeringly vast complexity of the high
level phenomena when considered from a low level viewpoint. If so, Cartwright is
implying that the evidence for conservative emergence is weak.
Much of this book disputes this. The evidence for conservative emergence is
indirect but I don't think it is either weak or insignificant. Save for the problem
and paradox of consciousness, where is there any evidence against the metaphysical
position which holds that the entities described by basic physics form a complete,
causally closed system into which all other features can be resolved? Once Cartwright
admits that we have good evidence for the existence (and nature) of the entities
postulated by fundamental physics it will be hard to resist the slide towards the
totality of physics. The only way to stop the slide is to deny that the structure of the
world posited by basic physics possesses completeness, closure and resolution. This,
in turn, seems to require adopting some form of our two radical options of Favouring
Fundamentality or Embracing Emergence.
It may be that in the work of John Dupré (e.g. Dupré 1993) we find something like this dialectic worked out to its natural conclusion. Like Cartwright, Dupré is keen to
deny that any level in the ontological hierarchy is to be privileged and he launches
a detailed and even passionate attack on reductionism. However, the reductionism
he attacks is one which focuses on the epistemic issue of the in principle possibility
of identifying the entities of one level with those of other levels, whereas the core
issue here is whether physics describes (or at least aims to describe) that level of
reality which determines every other level according to the constraints of complete-
ness, closure and resolution. Physics does not need to maintain that there is some
way to identify collections of sub-atomic constituents with, for example, particular
mountains; it need only make the claim that things like Mt. Everest depend entirely
upon low level physical entities for their existence and properties.
The upshot is that, as in Cartwright's case, I am not completely sure what Dupré's
attitude is towards the SPW in its pure form. But he does make some highly suggestive
claims. With respect to the condition of closure he says that "the central purpose of the ontological pluralism I have been defending is to imply that there are genuinely causal entities at many different levels of organization. And this is enough to show that causal completeness at any one level is wholly incredible" (Dupré 1993, p. 101). Of course, many initially incredible things turn out to be true, especially in the domain of modern science, so this is a somewhat odd, or at least very careful, choice
of phrase. Nonetheless, I think the natural reading of this is the denial that all the
causal powers of high level entities are completely determined by those of the lower
level entities and processes. Since Dupré, like Cartwright, professes to be a scientific
realist, it appears that his position will have to endorse one of our radical options
and, it would seem, the favoured option for both of them would be that of Embracing
Emergence.
We can see that it is the acceptance of scientific realism that leads Cartwright and Dupré down the garden path towards radical emergentism. The denial of realism
does not say anything about how close to empirical adequacy the model suggested by
basic physics can get. However, it does suggest that interest in this question should
largely evaporate. Obviously, there is absolutely zero prospect of ever developing
a working version of the model which could make any predictions. Thus, without
the sense (which many physicists almost instinctively possess) that one is laying
bare the ultimate nature of reality, the significance and appeal of the total model
quickly fades away. Without the underlying commitment to scientific realism there
is no way to leverage the indirect support for conservative emergence into support
for the existence of the total model. Instead, this evidence simply forms a web of
interconnected theories and models which serve to generate theoretical explanations,
empirical predictions and, of course, new sources of ideas for novel technologies.
Furthermore, the anti-realist explains the interconnectedness of all our models in
terms of the need for science to generate empirical content. We can borrow the
evocative metaphors of Cartwright: the anti-realist sees a "dappled world" of diverse sorts of entities which can be somewhat organized under a "patchwork" of laws of nature. And we can follow Dupré in denying that there is a complete hierarchical
structure to reality in which some levels are ontologically privileged.
Despite the coherence of a stance which rejects scientific realism, it is hard to
articulate in any way that seems plausible. This is in part due to the undeniably
immense power and elegant attractiveness of the metaphysical picture suggested by
the SPW. In addition, there is a kind of cultural inculcation of the idea that science is
in the business of searching for ultimate reality.17 After all, is it not simply obvious
that all material objects are constituted out of components which determine their
properties? And if that is false then must it not be the case that certain assemblages
of matter have radically emergent properties?
The option of Modifying Metaphysics requires that if these questions are under-
stood as aimed at metaphysical goals then they must be rejected. Ultimately, existence
is a mystery. Pursuing the dream of the SPW in the hope of solving this mystery is a
hopeless quest. "The most perfect philosophy of the natural kind only staves off our ignorance a little longer", wrote David Hume in 1748 (2000, p. 28), and in some areas
we must be content to remain very close to this irresolvable mystery. This does not
preclude researchers in neuroscience from investigating and perhaps discovering the
so-called "neural correlates of consciousness" nor the potential development of devices to read minds or gauge a subject's conscious state. But there will be no model of
the conservative emergence of consciousness. The success of science in staving off
mystery does not get us one step closer to integrating consciousness into the SPW,
for the very methodology which funds its success leaves it unable to grapple with
consciousness.
Perhaps the option of Modifying Metaphysics enjoins a kind of quietism about
consciousness: take it as given, a part of the natural world and, in a way, meta-
physically primitive. It is open to standard scientific investigation; we can find the
neurological conditions which underpin it. But we cannot fit it into the SPW as a con-
servative emergent and if we abandon scientific realism we are under no obligation
to do so.
I cannot bring myself to endorse any of the options we have studied. It seems
obvious that the mainstream response is and will be to go with Watchful Waiting and
insofar as this primarily enjoins extending science in the normal way it has some
obvious virtues. Perhaps with enough accumulated knowledge of the brain and its
environment and measurable links to a host of mental states, the scales will fall
from our eyes and consciousness will take its place alongside chemistry and life as a
standard conservative emergent. But I think the problem of consciousness, and most especially the paradox of consciousness, makes it very hard to see how this could
happen.
What of the two radical options of Embracing Emergence and Favouring Funda-
mentality? These seem to me to suffer from the urge to minimize the disturbance
consciousness introduces into the SPW, yet both end up adding elements to the sci-
entific picture for which there is no empirical evidence.
The final, most radical, option avoids saddling science with radical emergence or
new elementary features, but its cost is to demote science from a metaphysical guide
to the nature of reality to an epistemic project of better dealing with the observable
world. The scientific project has a distinctive methodology which is that of building
a hierarchy of conservative emergence. The Modifying Metaphysics option asserts
this methodology cannot successfully grapple with consciousness. So it enjoins us
not to attempt the impossible but to accept consciousness as a part of the natural world which is a metaphysically primitive mystery, leaving us free to link consciousness
with our rich system of scientific models, regarded as nothing more than models.
It seems that, broadly speaking, the four options outlined here exhaust the possible
responses to the problem of consciousness. I wish I could show that one of these
options was definitively correct. Failing that, I can only leave to the reader the job of
picking the winner.
Notes

Chapter 2
1. Because of the precession of the Earth's axis of rotation, Polaris has been located reasonably close to the celestial pole only for the last 2000 years or so. In ancient Egypt, Thuban was the north star but it now resides some 25° from the celestial north pole.
2. Our imaginary astronomers would have another method at their disposal: the transits of the sun by the inner planets, Mercury and Venus. By measuring the apparent position of, say, Venus against the backdrop of the Sun from different locations on Earth, it is possible to use parallax to determine the Earth–Venus distance. Kepler's laws then allow one to deduce the distances of all planets from the sun. But such measurements are difficult to make and require a very accurate determination of the orbital periods of the planets. How could this be determined in the absence of a fixed backdrop of stars? Incidentally, a search of the Internet will discover a fascinating movie of the 1882 transit of Venus which has been assembled from still photos taken at the Lick Observatory.
3. The hypothesis was anticipated by Newton in his first letter to the theologian Richard Bentley (see Janiak 2004, Chap. 4; Newton's correspondence with Bentley can also be found at The Newton Project (http://www.newtonproject.sussex.ac.uk); see Larson 2003 for an overview of current accounts of star formation).
4. But as of 2009 Pluto has been officially demoted to mere "dwarf planet", initiating a novel method of solar system exploration: complete the project of visiting all planets by pure semantics! So much cheaper than building spacecraft after all.
5. Protoplanetary disks are now routinely studied. See Lagage et al. (2006) or visit the Subaru telescope web page: http://www.naoj.org/Pressrelease/2004/04/18/index.htm
6. Over seven hundred as I write this, but more are discovered almost daily. See the Internet Extrasolar Planets Encyclopedia (http://exoplanet.eu).
7. Our knowledge of stellar ages results from a wonderfully clever combination of the astrophysics of star composition and the statistical analysis of the correlations between the luminosity and color of stars, as codified in Hertzsprung–Russell diagrams. Such diagrams of globular clusters, which can be assumed to
contain stars of roughly the same age (whatever it might be) are highly distinctive
and suggest all by themselves that the clusters are very old.
8. See http://www.aip.de/People/MSteinmetz/E/movies.html for some spectacular simulation runs.
9. Measurement of the distances to galaxies is another fascinating aspect of astronomy. Galaxies are much too far away for the parallax technique. But it is possible to measure the luminosity of some individual stars in nearby galaxies and compare them to similar stars in our own galaxy where we have a better handle on distance measurements. Astronomers have constructed an intricate ladder of overlapping distance indicators which let them assign distances to a host of galaxies, from which they can establish the Hubble law and then use it to assign distances to galaxies yet further away.
10. In a curious, if faint, echo of the Lonely Earth thought experiment, our ability to detect the background radiation may be a relatively parochial feature of our present temporal location in the universe. Given the accelerating expansion of the universe imposed by the so-called dark energy (see Glanz 1998; Carroll 2004) there will come a time when the CMB will be so stretched out that it will be undetectable and this time is, on a cosmic scale, not too far in the future. Perhaps in a mere 150 billion years or so the denizens of our galaxy will see only the galaxy drifting in an otherwise empty universe (see Krauss and Scherrer 2008). A truly optimistic spirit would point out that future cosmologists would have the history of cosmology available to them. The contention of Krauss and Scherrer also suggests a new use of Brandon Carter's infamous doomsday argument (Carter 1983). Suppose, first, that we represent a random selection from all the creatures capable of undertaking scientific cosmology. For reductio, suppose second that such intelligent creatures will exist throughout the life of the universe (when, of course, conditions allow for the existence of such observers at all). If, following Krauss and Scherrer, during almost all of the life of the universe there will be no evidence available to cosmologists about the overall state of the universe, in particular that it is expanding, then we would expect that we, as random observers within that history, would not be in a state where universal expansion is detectable. Since we can detect this expansion it follows that cosmology-capable observers will not survive for very long (in cosmic terms). For a thorough philosophical discussion of Carter's argument see John Leslie (1998).
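The statistical heart of this reductio can be put in a single line; the numbers below are invented for illustration and appear in neither Carter nor Krauss and Scherrer. If cosmology-capable observers were spread uniformly over a total epoch T_obs while expansion is detectable only during an initial window T_det, then for a randomly selected observer

\[ P(\text{expansion detectable}) = \frac{T_{\mathrm{det}}}{T_{\mathrm{obs}}}, \]

so if, say, T_det ≈ 10¹¹ years while T_obs ≈ 10¹⁴ years, the chance of finding ourselves where we in fact are would be about one in a thousand. Treating our situation as unsurprising thus tells against the supposition that T_obs vastly exceeds T_det.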
11. See http://lambda.gsfc.nasa.gov/product/cobe/firas_image.cf. The authors state: "The FIRAS data match the curve so exactly, with error uncertainties less than the width of the blackbody curve, that it is impossible to distinguish the data from the theoretical curve."
12. Arp is no less infamous for the controversial and disturbing account of his professional ostracism occasioned by apostasy from Hubble orthodoxy. See Arp (1987) or the later (but equally inflammatory) Arp (1998).
13. See http://www.lightandmatter.com/html_books/7cp/ch04/ch04.html for a nice example.
14. Incidentally, the value of this ratio puts constraints on the overall structure of the universe. The relative amount of deuterium is a function of the density of baryonic or ordinary matter in the very early universe, and the observed ratio implies that the density of the universe is only 10% of the critical value, the value that would geometrically close the universe. Recent observations suggest that the universe is not closed, but that the density is something like one third the critical value. Hence there must be a lot of missing mass, an inference that fits in with much other data suggesting that the universe is mostly unseen dark matter and the so-called dark energy responsible for the apparent acceleration in the rate of cosmic expansion (see Glanz 1998; Seife 2005).
15. It is important to bear in mind, however, that this apparent simplicity could be an artifact of our inability to closely observe the details of the early universe. The further we move away from rich sources of data the more dependent we become on the models we devise of the target phenomenon. It may well be that it is our models of the early universe which are (relatively) simple rather than the universe itself.
16. If the half-life of a proton is, say, 10³¹ years then a tank containing that many protons should reveal about one decay every year. That's not a hugely big tank of water (approximately 100 cubic metres). The actual observations are conducted in vast chambers far underground and in them one would expect to see proton decays every few days. No unequivocal events of proton decay have ever been observed.
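The arithmetic is easy to check. Here is a minimal back-of-envelope sketch in Python, using deliberately round illustrative figures (water density of 1000 kg/m³, ten protons per water molecule, Avogadro's number); it is a sanity check of the note's orders of magnitude, not a design for an experiment:

```python
import math

AVOGADRO = 6.022e23            # molecules per mole
MOLAR_MASS_WATER = 0.018       # kg per mole
PROTONS_PER_MOLECULE = 10      # 8 in the oxygen nucleus plus 2 hydrogen nuclei

volume_m3 = 100.0              # the tank size mentioned in the note
mass_kg = volume_m3 * 1000.0   # water density of about 1000 kg per cubic metre

molecules = mass_kg / MOLAR_MASS_WATER * AVOGADRO
protons = molecules * PROTONS_PER_MOLECULE

half_life_years = 1e31
# For N nuclei with half-life T, the expected decay rate is N * ln(2) / T.
decays_per_year = protons * math.log(2) / half_life_years

print(f"protons in the tank:  {protons:.1e}")         # roughly 3e31
print(f"expected decays/year: {decays_per_year:.1f}")  # roughly 2, i.e. order one
```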
17. Essentially, inflation steals this energy from the gravitational field energy of the universe, which is negative. Hence the overall energy balance of the universe remains constant. Curiously, it is quite possible that the total energy of the universe is exactly zero! See Guth (1997), Appendix A, for a nice qualitative argument that the energy of a gravitational field is negative.
18. The experiments at Brookhaven involve smashing gold nuclei together at very high energies and have the curious distinction of threatening the existence of the entire universe. At least, a few people suggested before the heavy ion collider began its runs that there was a chance that the experiment could produce strangelets (a form of matter composed of quarks with the quantum mechanical property of "strangeness") which could set off a chain reaction converting all matter in the universe to strange matter, destroying all life etc. in the process. This hypothesis, worthy of the Hitchhiker's Guide to the Galaxy, was stated at the time to have an extremely low probability, a reply which, if the probability is non-zero, leads to interesting speculations about the expected utility of the experiments. The worry was apparently set to rest in 2000 by the calculation that Brookhaven could, at worst, produce positively charged strangelets while only negatively charged ones could pose a danger (see Madsen 2000).
19. Peter Higgs (among others) worked out the mathematics to show how particles could acquire mass via interaction with a special kind of field in the 1960s. No one yet knows whether nature really uses this mechanism, but the particle quantum mechanically associated with the field, called the Higgs boson, ought to be detectable by next generation accelerators such as the large hadron collider (LHC) at CERN, which is now in operation. There were some hints in 2000 that CERN's large electron positron collider (LEP) had detected the Higgs boson, but the claim, based on the statistics of certain quite limited experiment outcomes, could not be verified before the LEP was shut down to make way for construction of the LHC (see Abbott 2000). In December 2011, CERN announced preliminary data that, if confirmed, would represent the discovery of the Higgs boson (see http://press.web.cern.ch/press/PressReleases/Releases2011/PR25.11E.html). In July 2012, further data have made the discovery of the Higgs boson virtually certain.
20. There have been conjectures that certain physical constants are not really constant. If these turn out to be correct then the basic physical properties of protons and other quite fundamental particles may well have changed over time. Perhaps, that is, the mass, or the charge, of the proton is not quite the same as it was shortly after the big bang. But obviously such conjectures do not support in any way the idea that individual protons have acquired biological properties sometime in the last 12 billion years.

Chapter 3
1. Now available in a CD-ROM edition, Hooke (1665/1998).
2. One can't help but note the analogy between this puzzle and the more fundamental asymmetry of matter over anti-matter in the universe even though physical processes almost universally do not favor production of the one over the other.
3. For a review of the relevant brain mechanisms and an interesting philosophical application of them in the understanding of pleasure and desire, see Schroeder (2004).
4. Some day the machines may not have to be so ungainly. Recent work on imaging with very weak magnetic fields (on the order of 30 milli-Teslas for the main magnet), where the measurement is effected by ultra-sensitive superconducting quantum interference devices (SQUIDs), has already produced rudimentary medical images (see Zotev et al. 2007).
5. There are two distinct relaxation times in fact, labeled T1 and T2, which reflect different physical processes (basically interaction of proton spins with each other and with the atoms in the subject). Image extraction exploits the differences between T1 and T2 and how they vary with tissue type and imposed magnetic fields. This is extremely complex applied physics, but it is nothing more than physics.
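For the record, the two processes have simple textbook forms (the standard idealization from the Bloch equations, not a description of any particular scanner):

\[ M_z(t) = M_0\left(1 - e^{-t/T_1}\right), \qquad M_{xy}(t) = M_{xy}(0)\,e^{-t/T_2}, \]

where M_z is the longitudinal magnetization recovering toward its equilibrium value M_0 and M_{xy} is the decaying transverse magnetization that the scanner detects. Because tissues differ in T_1 and T_2, varying the timing of pulses and readouts yields images weighted toward one relaxation process or the other, which is the source of the contrast mentioned above.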
6. A much older imaging method, positron emission tomography (PET), also tracks metabolic processes (see Brownell 1999). It is another anchor point, if you will. PET works by injecting a radioactive tracer into a subject which insinuates itself into the chemical processes that constitute cell metabolism. The tracer element decays by emitting positrons (the anti-matter version of the electron) which are almost instantly annihilated when they encounter nearby electrons. The destruction of the positron/electron pair yields a pair of gamma ray photons that travel in exactly opposite directions (so the net momentum is zero as it must be). The detector need only watch for coincidental pairs of gamma rays. An image can then be constructed from information about where the coincident pairs of gamma rays are detected (essentially, the locus of activity will be where the trajectories of the gamma rays intersect). Various tracers are best suited for studying different processes. Brain PET scans use a radioactive isotope of oxygen and indirectly measure brain activity by measuring oxygenation. In this, they are similar to fMRI. PET scans are of relatively low resolution, somewhat invasive and very expensive (a cyclotron is required at the facility to produce the short lived radioactive tracers). MRI is now the much more common technology.
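The parenthetical remark about intersecting trajectories can be made vivid with a toy sketch. The following Python fragment is purely illustrative (two dimensions, invented event data, a bare least-squares solve; real PET reconstruction uses far more sophisticated tomographic algorithms): each coincidence defines a line of response, and the point closest to all such lines estimates the locus of activity.

```python
import numpy as np

def closest_point_to_lines(points, directions):
    """Least-squares point minimizing summed squared distance to lines p_i + t * d_i."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)  # projector orthogonal to the line's direction
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Three hypothetical lines of response passing (nearly) through a source at
# (1.0, 2.0); the small offsets mimic measurement noise.  All values are made up.
source = np.array([1.0, 2.0])
directions = [np.array([1.0, 0.3]), np.array([-0.2, 1.0]), np.array([1.0, -1.0])]
points = [source + np.array([0.0, 0.01]), source, source - np.array([0.01, 0.0])]

print(closest_point_to_lines(points, directions))  # close to [1.0, 2.0]
```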
7. This is severely oversimplified. There are a number of interacting parameters at work here that have to be disentangled during image acquisition. For a comprehensive overview see Noll (2001).
8. There is a veritable flood of new findings in this area, all of which point surprisingly directly to the conclusion that mental processes can actually be discerned in the activity of the brain. Some notable recent results include the partial reconstruction of visual perceptual imagery from fMRI data (Nishimoto et al. 2011) and the reconstruction of heard words from fMRI imaging of human auditory cortex (Pasley et al. 2012). If, as seems likely, internal cognition and imagination use some of the same brain mechanisms that underlie visual and auditory perception, such studies suggest the eventual possibility of "listening in" on thought and imagining via prospective highly advanced brain imaging technology.
9. See also Buckner (2003).

Chapter 4
1. In Figures 4.1, 4.2 and 4.3 brain imagery is courtesy of Mark Dow and can be found at http://lcni.uoregon.edu/~dow.
2. Extremely high level and abstract cognitive modules have been postulated, the most notable of which is perhaps the so-called "theory of mind" module, the absence of which, or defects within, is conjectured to underlie autism (see Leslie 1992; Baron-Cohen 1995). A fairly recent critical discussion can be found in Nichols and Stich (2003, pp. 117 ff.). Another high-level cognitive module involving social cognition is the "cheater detection" module which is supposed to aid us specifically in situations of more or less logically complex social exchange (see Cosmides 1989; a general philosophical attack on a number of core theses of evolutionary psychology, and the cheater detection module hypothesis in particular, can be found in Buller 2006).
3. More precisely, sensory input on the left (right) generally is processed in the right (left) hemisphere and motor signals generated in the right (left) hemisphere actuate in the left (right) side of the body; the anatomical property of nerve crossover is called decussation. Why this is so remains mysterious, but for some reason lost in the depths of our evolutionary history, perhaps a mere accident or spandrel forced by other constraints, perhaps a crucial if obscure evolutionary advance, the vertebrate nervous system is generally organized in this crosswired form. The pioneering neuroscientist Ramón y Cajal hypothesized that decussation was dictated by the need to rectify the inverted retinal image. Nowadays it is suggested that the crosswiring in primitive vertebrates allowed for a quicker response to predators via a direct activation of the muscles on the side opposite the threat (see Sarnat and Netsky 1981).
4. It may be worth pointing out that something a little bit similar happens in ordinary subjects who are subjected to two sound streams, one to each ear (an experimental paradigm called dichotic listening). Though subjects are conscious of only one stream, the one they are attending to, the interpretation of ambiguous phrases or words in the attended stream is influenced by disambiguating information from the non-attended stream (see Hugdahl 1988 for an overview of dichotic listening experimentation and theory). Note however that the dichotic listening case is a phenomenon of divided attention, a feature totally lacking in blindsight where subjects are in fact directed to attend to potential visual stimuli.
5. I cannot resist noting that Alan Cowey and Petra Stoerig (Cowey and Stoerig 1995) report on a fascinating experiment that opens a small window into the visual consciousness of monkeys. It gives a hint that the kind of "nothing" blindsighted monkeys experience is quite similar to that of human beings. Monkeys which have had the visual cortex of just one hemisphere of the brain removed exhibit blindsight for the opposite side visual field. But Cowey and Stoerig ingeniously trained the blindsighted monkeys to respond to the absence of any visual stimuli in their good visual field: the monkeys learned to push a special button when they saw a blank presentation screen, and other buttons for more normal targets (such as light spots on the screen). What happens when this experiment is performed on the monkeys' blind visual field? Despite the fact that in other experiments the monkeys can and do respond to light spots in their blind visual fields, under the conditions of this experiment the monkeys press the key corresponding to "no light spot present" (normally sighted control monkeys have no trouble indicating when they see a light and when they do not see a light). It is hard to avoid the conclusion that the monkeys are, just as humans are, not visually conscious of any light spots.
6. Damasio uses Gage's accident and its aftermath to illustrate and bolster his own theory about the significance of emotion in cognition and may sometimes go further than the evidence strictly warrants. For an extremely detailed and more nuanced account of the Gage incident see Macmillan (2000).
7. A more recent study of brain response to the ultimatum game involved accomplished Buddhist mindfulness meditators. Its result is rather peculiar: while confirming the general outlines of earlier research, it found that meditators will accept unfair offers at a much higher rate than non-meditators (they are, in a certain sense, more rational); see Kirk et al. (2011).
8. A remarkable study (Sahin et al. 2009) recently carried out on epilepsy patients with a number of electrodes implanted within Broca's area (among other regions of the brain) reveals several basic processing stages of speech production, localized in both space and time. The presence of the electrodes allows for spatial and temporal resolution far superior to that of MRI techniques and supports the idea that lexical, grammatical and phonological features are processed in sequence in a process typically taking about one second.
9. At the moment I will only mention in a footnote that the philosophical big game in this area would be a link from neural organization to the subjective character of conscious experience. Of course there is a huge difference between the phenomenology of vision and hearing, and anything that could shed light on how differential neural structure bears on sensory phenomenology is of the greatest interest. Certain phenomenological traits of some aspects of vision have been investigated by the philosopher Paul Churchland (see Churchland 1986), but his approach is limited to the relational features of color experience (e.g. such features as that orange is closer to red than blue) rather than the origin of subjectivity from neural activity (a difficulty I call "the generation problem"). For a discussion of Churchland's approach in relation to the generation problem, see Seager (1999), Chap. 2.
10. This experiment may provide some small measure of empirical support for David Chalmers's principle of organizational invariance, by which the subjective qualities of conscious experience supervene upon the functional architecture rather than more basically physical properties of whatever system happens to be implementing or generating the relevant states of consciousness (see Chalmers 1996, Chap. 7).
11. The area is called the right fusiform gyrus (a gyrus is one of the characteristic mounds created by the folding of the cortex; the valley between two gyri is called a sulcus); the region is sometimes labeled the FFA (fusiform face area). Its function is somewhat controversial, at least if one was inclined towards the perhaps naive view that this region of the brain is specifically and uniquely geared towards recognizing human facial features. In fact, the FFA is recruited by, no doubt among many, many others, expert bird-watchers and car lovers to help discriminate birds and automobiles (see Gauthier et al. 2000). Its function might better be described in terms of subtlety of categorization based upon very fine differences. Thus, conditions analogous to prosopagnosia have been observed impairing recognition of birds and makes of cars as well as faces.
12
The rapid pace of innovation in this area puts such remarks in jeopardy. Real-time
fMRI is under current development and promises to open brand new windows
on the brain and, most definitely, the mind (see Bagarinao et al. 2006). For still
more recent work see Monti et al. (2010).
13
See Menon et al. (1998) for discussion of a case of observed activation, via
positron emission tomography (PET), of the face area in a patient in the so-called
vegetative state. The face area responded appropriately when the patient was
exposed to pictures of people familiar to her. It would be interesting to discover if
the rivalry effects observed by Tong et al. (1998) also occurred in such patients
(but I know of no such work). With regard to the warning given in the text above,
we should also bear in mind that we don't know with much certainty that such
patients are actually unconscious; perhaps some are and some are not depending
upon the nature of their illness (to see how disturbingly inaccurate diagnoses of
persistent vegetative state may be, see Andrews et al. 1996). In fact, it now seems
quite clear that a significant number of vegetative state patients have a high level
of awareness. Recent work using fMRI has revealed that such patients can respond
to instructions to imagine certain activities (for example playing tennis and
walking about one's house) which can be clearly distinguished in the resulting
brain scans (see Owen 2008). It has now been shown that this methodology can be
adapted to open up a slow and difficult line of communication with people
diagnosed as being in a vegetative state (see Monti et al. 2010).
14
The presence of Marc Hauser in the list of authors of this article casts an
unfortunate but unavoidable shadow on its credibility, but I can find no evidence
that the work here was in any way academically suspect.
15
For a video demonstration of the effect of TMS on speech, see http://www.
newscientist.com/blogs/nstv/2011/04/how-a-magnet-can-turn-off-speech.
16
The possibility of manipulating the brain also generates a number of ethical
issues. Many of these are obvious, having to do with state intrusion into the brains of
their citizens. Perhaps a set of more interesting ethical questions arises from the
voluntary use of TMS and other neuroscientific interventions for medical, cognitive
enhancement or entertainment purposes. As such technology advances, its price
declines, and before too long some of it will be readily available outside of the
laboratory (in fact there is already a small community of DIY neural enhancers
exploring some of these techniques). Issues of safety and unfair advantage are the
most obvious here (for some discussion see Kadosh et al. 2012; Hamilton et al. 2011).
17
Perhaps this is not universally true. Some would argue that severely negative
experiences can be, maybe even regularly are, repressed as a kind of defense
mechanism. It is not clear that this really occurs, although long-term severe stress
can lead, because of an oversupply of stress hormones, to damage of the
hippocampus (see Bremner 1999).
18
Our knowledge in this area depends largely on the surgical tragedy of Henry
Molaison, who had sections of his brain, including both hippocampi, removed to
cure severe epilepsy in 1953. Known in the medical literature only by the initials
H. M., he died in 2008, leaving his brain to science. Project H. M. aims to preserve
his brain by sectioning it into thin slices which are digitally photographed at very
high resolution. The project intends to open the resulting database to the world via
the internet (see http://thebrainobservatory.ucsd.edu/hmblog/).
19
There is a surprisingly large number of syndromes that involve more or less
wild and free confabulation which resists rational correction, even in patients who
are apparently fairly rational in general. For an extensive investigation of the range
and causes of such afflictions, and a rich philosophical discussion of them, see
Hirstein (2005).
20
Very curiously, however, their performance is actually mathematically more
correct than that of normal subjects, better following the dictates of Bayes' theorem,
which is a fairly recondite mathematical statement of how the chance of a
hypothesis being true changes as evidence accumulates (for discussion see Stone
and Young 1997, p. 342).
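For reference, the theorem itself is compact; for a hypothesis H and evidence E it says that
$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$
that is, the updated probability of the hypothesis is its prior probability rescaled by how strongly the hypothesis predicts the evidence.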
21
There is some data that suggest that a suspicious cast of mind is associated
with Capgras syndrome, whereas a closely related delusional syndrome, Cotard's
delusion, in which a subject believes that he or she is dead, is associated with
depression (see Stone and Young 1997, pp. 343ff.).
22
For some research on culture and schizophrenic manifestations see Tateyama
et al. (1998) or, for a review, Stompe et al. (2003).
23
Moniz, already in his sixties with a very distinguished career, began operating
in 1935 but did not perform many lobotomies. His major study encompassed
twenty patients with poor follow up. The real driving force behind this kind of
psychosurgery was the American neurologist Walter Freeman (see El-Hai 2005),
who invented the ice-pick method. This procedure, the trans-orbital lobotomy,
involved literally hammering an ice-pick-like device into a patient's skull, through
an eye socket, and then rapidly swiping the instrument from side to side to grossly
lesion the brain. Freeman personally performed some 3500 operations and with his
encouragement many tens of thousands of people were lobotomized with at best
very marginal and equivocal outcomes and at worst the horrible destruction of
many human minds.
24
The precise nature of the executive function of the anterior cingulate is not
very clear. For a brief sampling of some hypotheses in the area see Awh and
Gehring (1999).
25
As usual, it is not so clear whether the insula specifically underlies feelings of
disgust themselves (though it certainly seems involved in them); for diverging
experimental results see Schienle et al. (2002) and Wright et al. (2004).
26
In fact, in rats and presumably in humans as well, certain sorts of stimulation
of the insular cortex can produce lethal cardiac arrhythmia, and this may help
explain why some stroke victims suffer heart attacks (see King et al. 1999).
27
There is indeed a representational theory of consciousness which is the
subject of some current excitement among philosophers. For an introduction see
Seager and Bourget (2007); for influential theory development see Dretske (1995)
and Tye (1995).
Chapter 5
1
For other discussion of the philosophical significance of cellular automata in the
issue of emergence see Bedau (1997), Dennett (1991), Rosenberg (2004).
2
There are any number of Life implementations available as downloadable
software (such as gtklife for Linux or Life32 for Windows) or as web pages. A
good one can be found at http://www.ibiblio.org/lifepatterns.
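For readers who would rather experiment than download, the rules are simple enough that a bare-bones implementation fits in a few lines. The following Python sketch (the names and the glider test pattern are merely illustrative) represents the board as a set of live cells:

```python
# Minimal sketch of Conway's Game of Life on an unbounded board,
# represented as a set of (x, y) coordinates of live cells.
from collections import Counter

def step(live):
    """Return the next generation of the set of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next tick if it has 3 live neighbours,
    # or if it is live now and has exactly 2.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider, shifted one cell diagonally
```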
3
Zuse is a very interesting figure. He was a pioneer in the construction of digital
computers, starting in Germany in the early 1930s and continuing through the war.
He developed one of the first programming languages and built a calculating machine
that could be programmed from memory and was Turing complete (his adventures
racing around Germany at the very end of the war to preserve his incomplete Z4
computer are like something out of Gravity's Rainbow; see Zuse 1993).
4
Additional recent work which attempts to reconcile some aspects of quantum
physics with cellular automata models can be found in 't Hooft (2009, 2003).
5
A very interesting approach to the nature of computation taken by Rolf
Landauer leads to a view ultimately rather similar to Fredkin's. Landauer
postulates that in some way physical law itself will have to respect the limits of
physical computation: "I am proposing that the ultimate form of the implementable
laws of physics requires only operations available (in principle) in our actual
universe" (Landauer 1991, p. 28). Landauer sees the laws of nature and
computational limitations as somehow (one has to add, mysteriously) co-
dependent. Thus, given the apparently finite actual computational possibilities,
the laws themselves will describe a discrete universe that gives rise to those very
computational possibilities with those very limitations.
6
I do not mean to imply here that only the CA approach can intelligibly make
space and time emergent features of a radically non-spatial and non-temporal
substrate. This is one of the hopes of many of those working on the presumed
successor theory which will unify quantum and relativistic physics.
7
It is also important to bear in mind that the physical theory which will
incorporate all phenomena in one unified description of nature, gravitational and
quantum alike, will very likely dispense with continuous space and time. This is
because the successor theory is almost bound to be some kind of quantum field
theory which will incorporate space and time, or spacetime, in some quantized
form, perhaps as emergent features of some still more fundamental aspect or
structure of nature. The scale at which quantized spacetime would become
apparent is extremely small; the so-called Planck length is about $10^{-35}$ meters. It
is thus somewhat surprising that current technology may be able to provide
evidence of the graininess of spacetime but that is the goal of the holometer being
constructed at Fermilab (see http://holometer.fnal.gov/index.html) which may be
able to verify the existence of Planckian 'holographic noise' (Hogan 2010), which
is a kind of jitter induced in position measurements by the presumed quantum
nature of spacetime and which might just be detectable by highly sensitive linked
pairs of interferometers. Of course, discrete spacetime does not imply that nature is
describable as a CA but it is consonant with that hypothesis.
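For reference, the Planck length is the unique length that can be formed from the fundamental constants: $\ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}$ m.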
8
The literature on what Einstein called "spooky action at a distance" is immense
(see Aczel 2002; Hughes 1992, especially Chap. 6). Entanglement will be dis-
cussed as a possible form of radical emergence in Chap. 6 below.
9
That cellular automata can be Turing universal had been shown long before by
von Neumann (his universal machine dates back to 1952 but the proof that it is
universal was not presented until after his death, and was completed by Arthur
Burks; see von Neumann 1966).
10
One caveat. If the generation rules of a real world finite CA should somehow
essentially depend upon uncomputable numbers then they could not be simulated
by a Turing machine (see below for how this might work).
11
The halting function, H(x, y), takes two numbers as input: first, the
identifying number of some Turing machine as indexed according to some
cataloging system (there are lots of ways to do this, the point being that the Turing
machines match up one to one with the integers) and, second, an integer. The
function returns a 0 or 1 depending on whether the indexed Turing machine halts
with that input or not. That is, H(x, y) = 0 if Turing machine number x halts when
given the number y as input and H(x, y) = 1 if TMx does not halt when given y as
its input. Alan Turing famously showed that no Turing machine could compute the
halting function (strictly speaking, Turing provided the resources for this proof but
did not present the halting problem as such; see Copeland 2004, which includes a
reprint of Turing's article and illuminating introductory remarks). The halting
function is perfectly well defined but uncomputable by any Turing machine, or
computationally equivalent system, such as cellular automata.
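The proof is a short diagonal argument. Here is a sketch of its skeleton in Python; `halts` is of course the hypothetical decider whose existence the argument refutes, so nothing here actually runs to completion:

```python
# Sketch of Turing's diagonal argument. Suppose, for reductio, that a
# correct halting decider existed:
def halts(program_source, input_value):
    """Hypothetical: True iff the program halts on the given input."""
    raise NotImplementedError("no such function can exist")

def diagonal(program_source):
    # Do the opposite of whatever `halts` predicts the program does
    # when run on its own source code.
    if halts(program_source, program_source):
        while True:   # loop forever
            pass
    # otherwise: halt immediately

# Feeding `diagonal` its own source yields a contradiction: if halts()
# says it halts, it loops; if halts() says it loops, it halts. Either
# answer is wrong, so no correct halts() is computable.
```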
12
Is there any hope of actually building some kind of accelerating Turing
machine? Work by Itamar Pitowsky (see Pitowsky 1990) and Mark Hogarth
(Hogarth 1992) suggests that there are mathematical models consistent with
general relativity that might permit a Turing machine to perform an infinite
number of operations in a finite time. The world line of the machine would
encompass infinite proper time but would be entirely in the past lightcone of an
appropriately situated outside observer. It seems very unlikely that these models
could be physically realized in our world (see Earman and Norton 1993).
13
There is a long history of logical discomfort arising from assumptions of
continuity, going back at least to Zeno's paradoxes of motion. Even apparently
commonsense notions can lead to severe trouble. Consider the natural idea that
matter is continuous, solid and impenetrable plus the idea that matter interacts by
collision. Are such collisions possible? Consider two spheres approaching each
other. Arguably, they cannot contact each other, for if they are in contact they share
a point (which is impossible since they are each impenetrable). But if they do not
share a point then they are not in contact (an infinity of points exist between their
two surfaces). For a nice discussion of such oddities see Lange (2002), Chap. 1.
14
Such a system would be something like Turing's oracle machines, in which a
standard Turing machine is augmented with a device which can, on demand, supply
answers to what may be called uncomputable questions, such as the answer to the
halting problem for any given Turing machine plus input pair. The mode of
operation of the oracle is unspecified, although Turing somewhat cryptically says
that it "cannot be a machine" (Turing 1939/2004, p. 156). According to Max
Newman (see Newman 1955, p. 259) Turing conceived of the oracle's operation as a
kind of representation of the mathematical intuition required for creative theorem
formulation and proof. This is especially interesting since Turing endorses quite a
different view in his famous paper on machine intelligence (Turing 1950). There he
argues that machines computationally equivalent to Turing machines will be capable
of creativity and at least give all the appearances of intuitive thought. In a curious
echo of the oracle concept, Turing explains a fallacious argument against machine
creativity thus: "One could say that a man can inject an idea into the machine, and
that it will respond to a certain extent and then drop into quiescence" (p. 454).
15
This means that there is a standard Life configuration whose evolution can be
interpreted as simulating the CA world corresponding to my new rules since, as
mentioned above, Life is itself Turing universal (which raises an important issue
about emergence). Of course the Life configuration which emulates my new rules
will not look like a glider plowing through any pattern of cells it comes across.
There are some niceties to be taken into account as well. What happens if two
invulnerable gliders meet? Whatever we like of course, but I suggest that both be
annihilated (or they could just pass through each other).
16
The idea of temporal supervenience is explored in greater depth in Chap. 7 below.
17
A sympathetic examination of Morgan's views can be found in Blitz 1992; for
an excellent general discussion of classical British emergentism see McLaughlin
1992; for an interesting exploration of the relation between supervenience and
emergentism see Kim 1993, Chap. 8.
18
That is, there is no Turing computable way to efficiently predict the future
state of a CA. If we could harness some of the speculative hypercomputational
powers discussed above, perhaps we could transcend this barrier. In this respect,
predictability remains a relative notion (closely linked here to the notion of
relative computability; see Copeland and Sylvan 1999). In the absence of any
real hypercomputational machine, we are limited to human computing power, as
emulated and amplified but not essentially transformed by the digital computer.
Incidentally, the much vaunted quantum computer promises a radical speed up of
our computations but, it seems, no hypercomputational powers (see the stark
pronouncement in the bible of quantum computation, Nielsen and Chuang 2000,
p. 126: "quantum computers also obey the Church-Turing thesis").
19
We can only say "probably" here because, first, there could be
hypercomputation methods to which we could conceivably attain access and,
second, there is no hard proof yet that there is not some clever but otherwise normal
algorithm that will make these hard problems efficiently solvable. This is the P = NP
question. Roughly speaking, P problems are solvable in a time that is merely a
polynomial function of the input size, so if n measures the size of the input, an
algorithm that solves a problem in time proportional to, for example, $n^2$ is in P. NP
problems are ones that have solutions that can be verified efficiently (i.e. in
polynomial time). Thus factoring an integer into its prime factors is relatively
difficult compared to verifying that a given set of possible factors is indeed
correct, and it may be that factoring has no efficient algorithm (the evident
difficulty in factoring large integers is the current basis of Internet security). No
proof of this exists (in the computational complexity business in general, proofs
are distressingly scarce). Furthermore, it is known that an efficient quantum
computational algorithm (Shor's algorithm) exists for prime factorization. This
does not quite imply that factoring is in P since the quantum algorithm provides
an answer only with some probability of error (which can be made as small as we like).
In any event, it strongly appears that NP includes problems that are fundamentally
harder than those in P, problems whose algorithms can do no better than a time
proportional to an exponential function of n, say $2^n$, which quickly dwarfs $n^2$ as
n increases (see Harel 2000 for a gentle introduction to computational complexity).
Although this is an open mathematical problem and indeed perhaps the most
significant issue in theoretical computer science, almost all mathematicians and
computer scientists think that P is not the same as NP. For what it's worth, a
particularly bizarre (and still unpublished in any journal so far as I know) proof
that P = NP is based upon Fredkin's digital physics; see Bringsjord and Taylor
(2004). The proof moves from the idea that since some natural processes compute
NP problems efficiently (as the soap film that forms over a set of pins solves the
so-called Steiner Tree Problem of finding the shortest set of links between a set of
points) and the digital physics idea that the universe is running on a Turing
machine equivalent cellular automaton, then there must be an efficient algorithm
for solving NP problems, hence P = NP. Offhand, it seems the proof fails since the
physical process at issue in the Steiner Tree Problem case only produces an
approximation to the optimal solution, which gets worse as the number of points
increases. Also, the proof that nature solves the Steiner Tree Problem assumes that
nature is continuous and thus, from the digital physics point of view, has a false
premise and hence is unsound.
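The asymmetry the factoring example trades on, cheap verification versus (apparently) expensive search, is easy to exhibit. In this Python sketch trial division stands in for the best known factoring algorithms (which are faster, but still super-polynomial in the number of digits):

```python
import math

def is_factorization(n, factors):
    """Verification is cheap: multiply the candidates and compare."""
    return all(f > 1 for f in factors) and math.prod(factors) == n

def factor(n):
    """Search by trial division: roughly sqrt(n) steps in the worst case,
    i.e. exponential in the number of digits of n."""
    d, found = 2, []
    while d * d <= n:
        while n % d == 0:
            found.append(d)
            n //= d
        d += 1
    if n > 1:
        found.append(n)
    return found

print(is_factorization(2021, [43, 47]))  # instant, even for huge numbers
print(factor(2021))                      # fine here, hopeless for 300-digit n
```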
20
There are much smarter ways to tackle the problem than an exhaustive look
through every possible route, but they gain efficiency by trading generality: either
they are not guaranteed to discover the optimal solution or have more or less
limited applicability. One approach of the latter sort, called the cutting plane
method, has found an exact solution for a particular set of almost 25,000 cities (all
24,978 cities, towns, and villages in Sweden to be precise), using somewhat fewer
resources than the total universe (only the equivalent of about 92 CPU years on a
single Intel Xeon 2.8 GHz processor). For more information visit the TSP web
page at http://www.tsp.gatech.edu/.
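The contrast with exhaustive search is stark. A brute-force exact solver is a few lines of Python (the coordinates below are made up), but since it inspects (n-1)! tours it is already hopeless at a few dozen cities, which is why methods like cutting planes matter:

```python
import itertools, math

def tour_length(cities, order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(cities):
    """Exact but exhaustive: tries all (n-1)! closed tours from city 0."""
    rest = range(1, len(cities))
    return min(((0,) + p for p in itertools.permutations(rest)),
               key=lambda t: tour_length(cities, t))

cities = [(0, 0), (1, 5), (4, 3), (6, 1), (3, 7), (8, 4)]
# Only 5! = 120 tours here; 24,978 cities would mean roughly 10**99000 tours.
print(brute_force_tsp(cities))
```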
21
There is a very lively current debate about the exact nature of this sort of
explanation, focused on the issue of a priori entailment. The question is whether or
not it is possible (as always, in principle) to deduce the emergent given only the
ideal description of the submergent base and mere possession of the concepts
descriptive of the emergent domain (this latter restriction explains the choice of the
a priori label). The debate is especially significant for that most difficult of
emergents, consciousness, and leads into pretty dense thickets of technical
philosophy in the area of two-dimensional modal logic. See Jackson (1998),
Chalmers and Jackson (2001) for a defense of the a priori entailment thesis; see
Block and Stalnaker (1999) for an attack upon it. We can safely ignore the
intricacies of this debate in our present discussion of emergence.
22
The significance and nature of conservative emergence can itself be debated.
While Batterman takes the very strong line that physical understanding positively
requires reference to explanatorily emergent features, Gordon Belot (Belot 2005)
regards such reference as basically heuristic in nature: it does not by itself render
the world inexplicable in the terms of fundamental theory. Belot's argument, in
part, seems to be that there must be in principle deducibility of the less
fundamental theory from the more fundamental theory and that this will provide,
again in principle, understanding of the former from the presumed
understanding of the latter. To the extent that we can understand how emergents
appear, Belot must be right; but it is not given that such understanding is
attainable. It is not given that we could come to understand such phenomena as
turbulence in the absence of the concepts of the explanatorily emergent features of
the world. However, it is very interesting that in his reply to Belot, Batterman
(Batterman 2005) states that in many cases the explanatorily emergent features (to
use my term) simply do not exist. They are nothing but explanatory aids, but ones
which are indispensable for our understanding (perhaps something in the way that
one could not understand Christmas in North America without reference to Santa
Claus). So this points to a real difference between what I mean by conservative
(i.e. explanatory or epistemological) emergence and Batterman's ideas. I am
thinking of tornadoes (they are real); Batterman is thinking of certain mathematical
structures in continuous fluid dynamics (there aren't any instantiated in nature,
since the atmosphere is not a continuous fluid). The two ideas come back together
when we recall the fact that the mathematical object serves as an excellent model
for the real-world object. It seems Belot and Batterman are debating about Broad's
mathematical archangel. Batterman thinks the angel would need some concepts
from the emergent domain in order to understand what was going on, even in a
world entirely governed and determined by its underlying fundamental features.
23
It is easier to engender confusion about this than you would think. Silvan
Schweber writes that "the reductionist approach that has been the hallmark of
theoretical physics in the 20th century is being superseded by the investigation of
emergent phenomena" (Schweber 1993, p. 34) and that fields such as condensed
matter physics investigate "genuine novelties in the universe" (p. 36) which depend
upon emergent laws. One could be forgiven for entertaining the exciting hope
that here we have a forthright endorsement of radical or ontological emergence.
But no. Schweber means only that the theories which describe the emergent
phenomena are not simple consequences of underlying theory, that "condensed
matter physics is not just applied elementary-particle physics, nor is chemistry
applied many-body physics" (p. 36). And as for the genuine novelties, Schweber
is being completely literal: modern science produces things that have not existed in
the universe before, just as Paris Hilton produces phenomena, e.g. television
shows and the like, that are genuine novelties in the world. The metaphysical
import of both observations is about the same. Schweber actually goes somewhat
out of his way to make it clear that ontological emergence is not the issue,
emphasizing that one may array the sciences in a roughly linear hierarchy
according to the notion that the elementary entities of science X obey the laws of
science Y one step lower (p. 36). Schweber goes on to write that recently
developed methods have changed Anderson's remark, "the more the elementary
particle physicists tell us about the nature of the fundamental laws, the less
relevance they seem to have to the very real problems of the rest of science", from
"a folk theorem into an almost rigorously proved assertion" (p. 36). Curiously,
Noam Chomsky reports this passage as stating the refutation of reductionism,
eliding the difference of interest between radical and conservative emergence
entirely (Chomsky 2000, p. 145).
Chapter 6
1
The concepts of realization and multiple realization have been the subject of much
philosophical discussion since at least the birth of the functionalist theory of mind
with Hilary Putnam's early papers (see for example Putnam 1960) and have
generated a huge literature. For a good overview see Bickle 2008. Some recent
important works on the topic include Melnyk 2003, Kim 2005 and Shoemaker 2007.
2
Written in dimensional form, this expression appears as T2LM/T2LM thus
canceling all the dimensions as promised.
3
It is not clear from Poe's words whether or not the flow is irrotational. Poe
writes: "She [the trapped vessel] was quite upon an even keel, that is to say, her
deck lay in a plane parallel with that of the water, but this latter sloped at an angle
of more than forty-five degrees, so that we seemed to be lying upon our beam-
ends. I could not help observing, nevertheless, that I had scarcely more difficulty in
maintaining my hold and footing in this situation, than if we had been upon a dead
level; and this, I suppose, was owing to the speed at which we revolved." Does this
perhaps suggest that the observer is always facing the centre of the vortex? If so, it
is a rotational flow (one revolution of the observer per one rotation around the
centre). Some small support for this interpretation can be found in a brief remark in
Milne-Thomson (1957), who likens the maelstrom to the Rankine combined
vortex, which is rotational in its core, irrotational outside the core. It turns out
that if the velocity of the water around the centre of the vortex varies as the inverse
of the distance from the centre, there will be irrotational flow. This condition is
closely approximated in many real world vortices.
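The closing claim is a standard result of fluid mechanics. For a purely azimuthal flow $v_\theta(r)$ the axial vorticity is
$$\omega_z = \frac{1}{r}\frac{d}{dr}\left(r\,v_\theta\right),$$
so if $v_\theta = k/r$ then $r\,v_\theta = k$ is constant and $\omega_z = 0$ everywhere outside the core: the flow is irrotational even though the fluid circulates.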
4
Vortices figure in a fascinating footnote to the history of science. In the late
nineteenth century William Thomson, aka Lord Kelvin, championed the theory
that the ultimate nature of matter was knotted vortices in some fundamental
continuous, homogeneous and inviscid fluid (see Thomson 1867). Thomson was
impressed with Helmholtz's demonstration of the stability and vibratory properties
of vortices. For example, he thought the then recently discovered atomic spectra
could be explained in terms of characteristic oscillations of the knotted vortex
which constituted each species of matter. Although the theory has not survived, it
inspired Peter Guthrie Tait (who became somewhat famous for his parlour room
demonstrations of smoke rings), in collaboration with Thomson, to investigate the
fundamental properties of knots, which led to a whole new branch of topology, knot
theory.
5
Eugene Wigner provided a now classic discussion of this mystery: why
mathematics is so successfully applicable to the natural world (Wigner 1960). An
extended philosophical clarification and discussion of Wigner's puzzle can be
found in Mark Steiner (1998). There are at least two sides to this question. Why is
mathematics applicable to the world? Here one might follow Leibniz, who noted
that any data will be describable by some mathematical function. In the Discourse
on Metaphysics (Leibniz 1686/1989) Leibniz noted that in whatever manner God
might have created the world, it would have been regular and in accordance with a
certain order, because all structures or sequences of data are subject to
mathematical description. As he puts it, using an example: "let us assume
that someone jots down a number of points at random on a piece of paper ... I
maintain that it is possible to find a geometric line whose motion is constant and
uniform, following a certain rule, such that this line passes through all the points in
the same order in which the hand jotted them down" (p. 39; note I am following
Gregory Chaitin's tiny emendation of the translation, with "motion" replacing
"notion"; see Chaitin 2004). Of course, Leibniz makes it easy on himself, taking a
finite set of points, but I think his point is well taken. But the other aspect of the
question is why humanly contrivable mathematics is so well able to describe the
world (on this point Steiner tends to think that the universe is somehow "friendly"
to us). It is tempting to try out an anthropic explanation. Perhaps any world too
complex or too weird for us to grapple with mathematically (and bear in mind our
grapplings are only partially successful) would not be a world in which we could
evolve and persist. I have no idea how one would go about proving such a thesis.
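Leibniz's observation has a simple modern counterpart: through any finite set of points with distinct x-coordinates there passes a polynomial of degree at most n-1 (Lagrange interpolation). A Python sketch, with randomly jotted points standing in for Leibniz's example:

```python
import random

def lagrange(points):
    """Return the unique polynomial (as a function) through the given
    (x, y) points, assuming distinct x values: Lagrange interpolation."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

points = [(x, random.uniform(-1, 1)) for x in range(7)]  # random "jottings"
p = lagrange(points)
print(all(abs(p(x) - y) < 1e-9 for x, y in points))      # True: a rule through every point
```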
6
It is worth stressing that this issue looks completely different from the point of
view of any remotely feasible scheme of real world simulation of dynamical
systems. There we find that the intrinsic limitations on our knowledge of the initial
states of real world systems as well as their necessarily incomplete models
severely limit predictability. One very interesting issue that arises here is the
natural consideration that, in the face of our epistemic limitations, we should and
would be content with gaining a merely qualitative understanding of how a certain
system will tend to evolve. The simple approximation algorithms we have
examined have a bad fault in this respect; they fall into the class of
non-symplectic integrators, which means that they do not respect the conservation
laws which will inevitably stem from various symmetries embodied in the systems
under study. This inevitability is purely mathematical, as demonstrated in 1918 in
the amazing theorem of Emmy Noether (for a brief history and discussion of her
results see Byers 1999) which links continuous symmetries with conservation
laws. Thus the symmetry of the laws of physics with respect to time implies and is
implied by the conservation of energy. The failure to respect conservation
principles in a method of numerical simulation means that as the evolution of a
model drifts further from the evolution of the system it is intended to represent it
will tend to enter regions of phase space which are simply inaccessible to the target
system. Symplectic integrators can be constructed that will respect conservation
laws despite accumulating error (one can envisage this as a restriction of the
evolution of the model to an abstract surface in phase space). This can be much
more revealing about the qualitative behaviour of the system under study. Of
course, such considerations are of less significance in the airy realm of relaxed
computational constraints.
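The difference is easy to see numerically. For a unit harmonic oscillator, ordinary (explicit) Euler integration pumps energy into the system without bound, while symplectic Euler, which differs only in using the freshly updated momentum to advance the position, keeps the energy error bounded. A Python sketch with arbitrary step size and duration:

```python
# Explicit Euler vs symplectic Euler on a unit harmonic oscillator
# (energy E = (p**2 + q**2) / 2; the exact dynamics conserve E = 0.5).
def final_energy(symplectic, dt=0.05, steps=2000):
    q, p = 1.0, 0.0
    for _ in range(steps):
        if symplectic:
            p -= dt * q          # kick first ...
            q += dt * p          # ... then drift using the *new* momentum
        else:
            q_new = q + dt * p   # drift and kick both use the old values
            p -= dt * q
            q = q_new
    return (q * q + p * p) / 2

print(final_energy(symplectic=False))  # drifts far above 0.5: energy not conserved
print(final_energy(symplectic=True))   # stays close to 0.5
```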
7
There are deep philosophical (and scientific) issues in this area. For an
excellent philosophical discussion see Sklar (1993).
8
For a taste of the intricacies of climate modeling see Peixoto and Oort (1992).
9
Here is the calculation. Multiplying our length scale, 3000 metres, by $e^4$ we get
a value which measures acceptable predictability. It happens to be about 163,795.
We need then to solve this equation: $3 \times 10^{-10}\,e^x = 163{,}795$. Thus,
$x = \ln\left(\frac{163{,}795}{3 \times 10^{-10}}\right)$, which is about 34 days.
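The arithmetic can be checked in two lines of Python:

```python
import math
print(3000 * math.exp(4))         # ~163,795 metres: the acceptable error scale
print(math.log(163_795 / 3e-10))  # ~33.9, i.e. about 34 days
```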
10
Recall the story of how Columbus, trapped on Jamaica in 1504 and threatened
by the natives, used his knowledge of a lunar eclipse predicted to the minute (using
a geocentric model) long before by Johannes Müller, who also went by the Latin
nickname of Regiomontanus, in his Ephemerides of 1475. Columbus's display of
power over the heavens prompted a change of heart in the locals, and he and his
crew were rescued some months later. You can never tell when some piece of
seemingly esoteric knowledge might come in handy.
11
In Newton's own words: "... blind Fate could never make all the Planets move
one and the same way in Orbs concentrick, some inconsiderable Irregularities
excepted, which may have risen from the mutual Actions of Comets and Planets
upon one another, and which will be apt to increase, till this System wants a
Reformation" (Newton 1730/1979, p. 402). Famously, Leibniz pounced on this
seeming admission that God almighty Himself was incapable of building a watch
that kept good time.
12
However, for a less sanguine view of the long term stability of the inner Solar
System see Laskar and Gastineau (2009).
13
An extremely interesting possibility is that quantum mechanics could impose
an emergence wall which would be hit much sooner than we might expect.
Wojciech Zurek (Zurek 1998) calculates that the application of the uncertainty
principle to the motions of the planets ought to lead to predictive breakdown at a
timescale of just a few million years. This outrageous conclusion suggests to Zurek
the need for some extraneous factor which classicizes the solar system:
decoherence via interaction with the dust making up the background galactic
environment. In any event, the lesson is clear. The underlying structure of the
system imposes an emergence wall. This example will be discussed further below.
14
Proponents of the dynamical systems approach to cognition like to see
themselves as radical revolutionaries battling the stultifying hegemony of the
computationalist paradigm. For an interesting assessment of how radical (or not)
the approach is see Grush (1997).
15
The calculation is based on this formula: $t = \frac{c}{a}\sinh\left(\frac{v}{c}\right)$, where c is the speed of
light, v is the final velocity and a is the acceleration (both latter given in the
problem); sinh is the hyperbolic sine function.
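Evaluating the formula as reconstructed here is mechanical; the numbers below (one g of acceleration to 0.99c) are purely illustrative and are not the values of the note's actual problem:

```python
import math
c = 3.0e8            # speed of light, m/s
a = 9.8              # sample acceleration (about 1 g); illustrative only
v = 0.99 * c         # sample final velocity; illustrative only
t = (c / a) * math.sinh(v / c)
print(t / (3600 * 24 * 365.25))   # ~1.1 years for these sample values
```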
16
This remains somewhat controversial. David Bohm's version of quantum
mechanics remains a contender in the race for an acceptable interpretation of
quantum mechanics and it restores full determinacy of position and momentum
and provides an account of why our knowledge is restricted in accord with the
Heisenberg relations (see Bohm and Hiley 1993). This possibility makes no
difference to the discussion here.
17
It is interesting to compare Sperry's thoughts on emergence with those of his
student, Michael Gazzaniga, whom some have also seen as endorsing radical
emergence with downward causation. Gazzaniga (2011) recognizes a difference
between what he calls weak and strong emergence, but his discussion leaves it
unclear whether his strong emergence is really radical, as defined here. The
problem is that Gazzaniga pays insufficient attention to the crucial distinction
between accessible and inaccessible explanatory structures and thus tends to see
inexplicability or unpredictability as directly entailing something like radical
emergence whereas, as we have seen, inaccessible explanations and
unpredictability are compatible with conservative emergence.
18
Thus we treat the quantum uncertainty engendered by Heisenberg's principle
as fixing a minimum range of initial conditions from which error will inevitably
grow, and grow very quickly in chaotic systems. An amusing example of this is the
problem of balancing a pencil on its point. The uncertainty principle interpreted as
placing a limit on the initial condition of the pencil forbids getting the pencil into a
state with zero motion and perfectly balanced through its centre of gravity.
Assuming nothing else prevents balancing the pencil (e.g. an errant breeze,
imperfections of the pencil's point, inhomogeneities in the Earth's gravitational
field, etc.), how long could the pencil remain standing under the most ideal
conditions? Seemingly, only about four seconds! (For the problem and
calculations, see Morin 2004).
19
The associated video can be viewed at http://www.hitachi.com/rd/research/
em/doubleslit.htm.
20
The term was first used to describe this property of quantum systems in 1935
by Erwin Schrödinger, who opines that "I would not call that one but rather the
characteristic trait of quantum mechanics, the one that enforces its entire departure
from classical lines of thought" (Schrödinger 1935, p. 555).
21
It seems the mere registration of information about the particles in the detector,
whether or not it affects the particles themselves, suffices to destroy the interference.
This suggests the following bizarre question: what if we erase the information from
the detectors? Quantum mechanics says that the interference then returns! The way
this works is somewhat complex and so counterintuitive as to drive one physicist to
complain that quantum mechanics "has more the character of medieval necromancy
than of science" (Jaynes 1980, p. 42). For an analysis of the quantum eraser see
Scully and Drühl (1982); for a brief philosophical commentary see Seager (1996).
22
There have been some efforts to understand the role of decoherence in brain
processes. Max Tegmark has argued in an influential article that neural processes
are subject to severe rapid environmental decoherence (see Tegmark 2000); for a
rejoinder from the camp of quantum consciousness theories see Hagan et al.
(2002). Some recent work suggests a surprising degree of coherence even at room
temperature in the central biological process of photosynthesis (see Collini et al.
2010). However, the time scale at issue is not very different from Tegmark's
theoretical value and seemingly much too short to be of significance in the neural
processes underlying mentality. But you never know.
23
The decoherent histories approach thus provides a promising answer to one
long standing problem with the many worlds interpretation of quantum mechanics,
the so-called preferred basis problem. In quantum mechanics there are many
equally valid choices of attributes which can be used to describe a systems state
only some of which will meet the condition of generating a more or less classical
looking world history. Environmental decoherence may be able to explain why
one choice is preferred. There are many other problems with the many worlds
interpretation, most especially the difficulty of recovering a robust sense in which
events have definite probabilities when, in a sense, the interpretation holds that
every event that can happen does happen. Suffice it to say that the many worlds
theory remains controversial and unorthodox (for an overview see Vaidman 2008).
24
It may be that Teller is only endorsing the weaker claim that quantum systems
exhibit a kind of irreducible ontological holism. I think that discussions of
emergence in quantum mechanics tend to miss the distinction between holism and
radical emergence. This is often coupled with the assimilation of conservative or
epistemological emergence with part-whole reductionism. But while
conservative emergence is compatible with mereological reductionism it is not
equivalent to it.
25
The minus sign in singlet is irrelevant to our concerns. There is another joint
state with a plus sign, called the $m = 0$ triplet, that has subtly different properties
than the singlet which emphasize the holistic character of these sorts of states (see
Maudlin 2007, pp. 53 ff.).
Chapter 7
1
Non-trivial is added here and above to prevent properties like having charge
p
?1 or not rendering anything and everything a physical entity 2 has this
property).
2
As discussed above in Chap. 5, a very clear and austere characterization of
mechanism is given in Broad (1925, Chap. 2). Modern views which espouse the
idea that physical realitys fundamental description is some kind of cellular
automaton provide a different characterization of mechanism (see Wolfram 2002;
Fredkin 1990).
3
This is an extremely lax notion of efficacy. For example, it completely ignores
obvious problems that arise from overdetermination, finkish dispositions (see
Martin 1994) or other philosophical chicanery. But it will serve my purposes here.
4
For my purposes we can generally understand counterfactual dependency in
commonsense terms. A counterfactual conditional is true if its consequent is true
in the possible world most like the actual world except that in that world the
antecedent is true (this minimally different world of evaluation is frequently, if
loosely, called the possible world "nearest" to the actual world). For example, we
evaluate the counterfactual "if the Supreme Court of the United States had not, in
effect, ruled that Bush won the 2000 election, then Gore would have become
President" by considering the situation which includes the Supreme Court ruling
against Bush but which is otherwise as similar as possible to the actual world. We
judge this counterfactual to be true if Gore turns out to be President in that world.
If, say, we think that in that case the Florida recounts would have left the Bush
victory intact, then we think the counterfactual is false. Philosophers have
formalized this understanding of counterfactuals with the machinery of modal
logic but we need not delve into such technicalities here (the pioneering
philosophical work can be found in Stalnaker 1968 and Lewis 1973).
5
An obvious imperfection glossed over is the existence of indexical terms. With
a little suppression of one's critical, or is it pedantical, faculties the point of the
example should be clear.
6
A nice way to define physicalism stems from considering physically
indistinguishable possible worlds. Call a minimal physical duplicate of a world,
w, to be a world physically indistinguishable from w but which contains nothing
else in addition to the physical (to use a theological metaphor, the minimal
physical duplicate of w is created if God copies all the physical features of w into a
new world and then stops). Physicalism can then be defined simply as the claim
that any minimal physical duplicate of the actual world is a total or complete
duplicate of the actual world (for details see Lewis 1983 and Jackson 1998).
7
A weak version of local supervenience can be expressed in terms of worlds
as: $\forall w\,\forall r\,\forall p\,\forall F \in U\,[\forall G \in T\,(Grw \equiv Gpw) \wedge Frw \rightarrow Fpw]$. It is a
trivial consequence of local supervenience. Also, one can regard the entire world
as one system, thus encompassing global supervenience within this definition, if it
should turn out that unrestricted global supervenience is the appropriate relation
needed to tackle certain properties.
8
A direct translation of strong supervenience in possible worlds form is easy to
produce, but is of very little interest: $\forall w\,\forall r\,\forall F \in U\,[Frw \rightarrow \exists G \in T\,(Grw \wedge \forall w'\,\forall p\,(Gpw' \rightarrow Fpw'))]$.
9
The definition of efficacy given above wont necessarily capture such details.
That depends on how certain counterfactuals turn out. Suppose, for example, that
someone bends over to pick up a piece of paper on the road because it's a twenty-
dollar bill. Would they have done so if that paper had not been real money? If we
suppose that in the nearest world where this piece of paper is not money it is a
good counterfeit (physically very similar to its actual counterpart) then we get the
result that the subject still picks it up. So the property of being money is not
efficacious according to our definition, as seems intuitively correct. But it is
possible to disagree about how to evaluate such counterfactuals.
10
Obviously, the restriction to a single system could be relaxed but I want to
focus on the case of the evolution of one system for the moment (in any case, there
is no reason we could not treat the case of two systems as that of a single
composite system).
11
A possible illustrative example of de-randomization in the micro to macro
relationship is given by Ehrenfest's equations (as briefly discussed above in
Chap. 6), which assert that the expectation value of an observable such as position
or momentum will evolve in accordance with classical laws of mechanics. In a
macroscopic system made of huge numbers of microsystems we might expect (or
hope) that such statistical features will exhibit a stability sufficient to allow us to
identify the expectation value with the values obtained by particular observations,
thus resulting in de-randomization and providing a reason to expect top-down
discipline. But note that in general, the issue of the retrieval of classical physics
from quantum physics is extremely complex, incompletely researched and still
poorly understood (see Ford 1989, Belot and Earman 1997).
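For reference, in their standard textbook form the equations say that, for a particle of mass m in a potential V,
$$\frac{d\langle x \rangle}{dt} = \frac{\langle p \rangle}{m}, \qquad \frac{d\langle p \rangle}{dt} = -\left\langle \frac{\partial V}{\partial x} \right\rangle,$$
which coincide exactly with Newtonian dynamics for the expectation values only when $\langle \partial V/\partial x \rangle$ equals $\partial V(\langle x \rangle)/\partial x$, as it does for potentials that are at most quadratic.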
12
There are other grounds for suspicion that such disjunctions of subvening
states can support any robust sense of reduction, for which see Owens (1989),
Seager (1991), Kim (1993, Chap. 16).
13
This thought experiment goes back to the inception of statistical mechanics in
the controversy between Ludwig Boltzmann and Josef Loschmidt about the
possibility of deriving the second law of thermodynamics from the mathematics of
statistical mechanics (see Sklar 1993, Chap. 2).
14
See Sellars's discussion of the postulation of two kinds of gold (Sellars 1963a,
p. 122) and van Fraassen's commentary (van Fraassen 1980, pp. 32 ff.).
15
One good reason for this lack of concern is the recognition of distinctively
lower-level intrusions into the high-level dynamics which are simply not within the
purview of the high-level theory. See Dennett (1971) for a classic discussion of this.
16
An interesting discussion of this constraint on theorizing, which Hans Radder
calls the "generalized correspondence principle", can be found in Radder (1991).
17
Technically, the supervenience condition is included in the definition of top-
down discipline, but it is clearer to emphasize the role supervenience plays as a
separate part of the proof. Also I did not specify the grade of supervenience in the
definition of TDD but left it loose to implicitly form a family of relations of top-
down discipline.
18
Note also that this argument does not lead to "efficacy inflation" in the sense
that the U-state in question helps to bring about every T-state in any system. My
dream last night is not efficacious in producing an earthquake in Turkey even
assuming that earthquake has a unique physical predecessor. On the assumption of
full temporal supervenience, the nearest possible world in which I don't have my
dream is different from the actual world right back to the beginning of time, but
even so there is no reason to think it is different with regard to the earthquake's
occurrence. In testing for efficacy, we can pick any outcome state we wish so we
can find one for which my dream is efficacious. This does not lead to my dream's
having efficacy everywhere because the counterfactual change in my dream does
not necessarily lead to the non-existence of every possible realizer of the
earthquake (of course we can't absolutely rule out this possibility either).
19
Note we must assume strong T-temporal supervenience to get this result,
since in considering strong supervenience we have to consider other physically
possible worlds.
20
For an independent argument in favour of this assumption see Seager (1988).
21
For ease of exposition I am being somewhat sloppy in my notation here and in
what immediately follows. It would better follow our practice to write Fr to
designate the U-state in question (where F is a property of the system r).
22
Or, more strictly speaking, a set of possible T-realizers $\{s_1, s_2, \ldots, s_n\}$. The
argument is not affected by this detail, which is thus omitted for simplicity of
presentation.
23
Here I assume that if there is a T-description of a system then there is a
description in T-elementary terms. This is an innocuous assumption since, by
itself, it does not imply that every T-state has a constituent structure formed out of
T-elementary features, for maybe some large T-states are themselves elementary
(call such things large simples). It is hard to think of genuine examples, but here
is a possibility. Classical black holes can have but three physical properties that
fully characterize them: mass, charge and angular momentum. These properties are
a function of the properties of the elementary constituents that have formed the
black hole. But, once formed, there is no sense in which the black hole is
composed of little bits and pieces that individually have various masses, charges or
angular momenta (string theory may alter our perspective on this, but, of course
and interestingly, in a way that makes black holes resolvable into a new, but still
physical of course, kind of elementary constituent structure). Thus the black hole
cannot be resolved into sub-components. This is no violation of the totality of
physics however, since charge, mass and angular momentum are themselves
allowable elementary features. A black hole is, so to speak, a kind of elementary
particle (and one that can, of course, take a constituent place within larger
physical assemblies such as multi-star systems, galaxies, etc.).
24
Notice we do not need to assume that U possesses top-down discipline for this
argument to work. The single case of r's divergence violates T-temporal
supervenience.
25
However, this at least suggests that there may be novel emergentist doctrines
that derive from global or local supervenience relations. Perhaps we can imagine
emergent properties that depend upon total world states for their existence. These
are emergent properties dependent upon the total state of the whole universe
even though they might be properties of individual things. I can't think of any
examples of such properties however, although there are clear cases of non-local
emergents. Being money is such a non-local (but hardly fully global) emergent,
but because of its lack of efficacy and our possession of some idea of how we
might explicate the existence of money in terms of non-monetary properties, we
tend and ought to regard this as a form of conservative emergence. Another
example of a very non-local but far from fully global emergent property might be
the value of the gravitational field at any point; it may well be that the state of the
entire universe figures in determining this value (though perhaps not, depending on
whether there are regions of the universe that are not in causal contact with each
other, which currently seems very likely). The important point made by these
examples is that even in non-local emergence, the emergent property depends
upon quite definite, if spread out, features of the submergent domain.
26
This is why, I think, Morrison inserts into my ellipses in the above quote the
otherwise puzzling claim that the emergents cannot be explained in terms of
microphysics. There are lurking here large issues about the nature of explanation. The
discussion above in Chap. 5 of the views of Philip Anderson is relevant here as well.
27
A shadow of a doubt about this might arise from noting that such predictions
are in principle possible only if it is in principle possible to mathematically deduce
such predictions from a given state. As discussed in Chap. 5 some properties of a
system cannot be mathematically deduced, as for example whether a cellular
automaton will ever get into a certain configuration, but it remains true that the
evolution of such systems is mathematically describable via simulation. Chap. 5
also delved into the question of whether it is perhaps conceivable that there are
fundamental mathematical impediments to such predictions (e.g. non-computability).
Of course, in this eventuality it would still be true that the emergents were
completely determined by the subvening domain's structures and laws.
The epistemological framework which emphasizes explanation in principle of the
determination relation is thus ultimately unnecessary but adds explanatory vivacity
to the account of conservative emergence.
28
As discussed in Chap. 5, simulatability is the feature of a theory that it is
possible to calculate, in principle, the state transitions of any system in terms of the
fundamental description of an initial state. Simulatability does not require that this
calculation be mathematically exact; approximations are allowable so long as we
can mathematically guarantee that the error of the approximation can be made as
small as we like. For example, while the equations governing an isolated pendulum
can be simulated by a mathematically exact representation of the system, the
problem of simulating even a three-body gravitationally bound system is mathe-
matically unsolvable. But the many-body problem can be approximated to what-
ever degree of accuracy we like (given arbitrarily large computing resources).
There may be systems which cannot even be approximated in this sense however.
29
This failure of the purity of empirical testing of complex physical theory is
emphasized by Nancy Cartwright, who argues that we can neither understand nor
build experimental apparatus without appeal to theories which are incompatible
with each other. For example, she maintains that any aspect of nature, most
especially scientific experiments, requires application of diverse laws from
separate and perhaps incompatible theories: "neither quantum nor classical theories
are sufficient on their own for providing accurate descriptions of the phenomena in
their domain. Some situations require quantum descriptions, some classical and
some a mix of both" (Cartwright 2005, p. 194). In order to argue that this is merely
a practical necessity, one would need a proof that the realm of classical physics
really does conservatively emerge from more fundamental quantum theory.
Cartwright's views will be discussed further in Chap. 10 below.
Chapter 8
1
Epiphenomenalism, the doctrine that mental states are causally impotent products
of physical processes, was first articulated and defended by Thomas Huxley
(Huxley 1874). The worry that epiphenomenalism is a consequence of modern
physicalist accounts of the mind has been reinvigorated by Jaegwon Kim (see for
example Kim 1998). The idea that Kims argument generalizes beyond the case of
the mind has also been explored (see Block 2003; Bontly 2002). Kims argument
depends on his "exclusion principle" which, roughly speaking, states that no event
has more than one total cause. My argument in this chapter makes no use of the
exclusion principle, though it could be seen as something of a defense of some
form of such a principle.
2
It may be worth reemphasizing here that this is not an endorsement of so-called
part-whole reductionism, though it is consistent with it. For example, we know
from quantum mechanics that the states of wholes are not simple functions of the
states of their parts, but this does not tell against the characterization given in the
text. As discussed in Chap. 6, quantum mechanics is a celebration of how the
interactions of things can be understood, rigorously understood, to yield new
features. It is, if you like, the mathematical theory of emergence, but one that obeys
the strictures of resolution and abides by the strictures of conservative emergence.
3
This is expressed from a particle perspective. Perhaps it would be more
accurate to say that all these particles are quanta of underlying quantum fields
which have a better claim to be the truly fundamental physical structure of the
world.
4
For example, something as simple as the spin of a proton turns out to be the
product of an almost inconceivably complex interaction between the three
constituting (or valence) quarks, a sea of virtual particles within the proton as
well as additional components of orbital spin. The so-called proton (or nucleon)
spin puzzle, the problem of explaining where the proton spin comes from given
that measurement reveals that only about 30% of the spin can be attributed to the
constituent quarks, has bedeviled physics for over 20 years. New lattice QCD
calculations, recent observations and a deeper theoretical understanding of the role
of the quantum mechanical vacuum may point to its resolution (see Bass 2007;
Thomas 2008). Our ignorance about this is sobering and somewhat disturbing as is
the awesome idea that nature embodies such staggering complexity within every
minute portion of the world.
5
What follows is, so to speak, a purely metaphysical exercise, not intended to
provide any insight into the use or significance of computer simulation in science.
The topic of computer simulation and modeling has raised important questions in
the philosophy of science concerning the epistemological status of simulations and
the dangers of their interpretation, especially in light of the extra-theoretical
adjustments required to get them running and producing sensible output (some of
which were touched on in Chaps. 5 and 6 above). An excellent introduction to the
philosophy of scientific simulation can be found in Winsberg (2010).
6
Would it ever make sense to start such a project? Not if computer technology
progresses sufficiently quickly. Suppose the original length of the computation is n years and technology advances so quickly that after d years have passed the computation would take less than n − d years. If this level of technological progress persisted, it would never make sense to start the computation! Of course there are non-computer technical constraints on the time required for such computations, and presumably the pace of progress in computer technology must eventually slow down rather than continue its heretofore exponential acceleration. For some n, the computations make sense, as evidenced by the real-world examples given above, but the problem of this note is equally well illustrated (see Weingarten 1996): computers of the 1980s would have taken about one hundred years to perform the reported computations. It was just not worth starting.
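A minimal sketch in Python of the arithmetic behind this note; the Moore's-law-style doubling period of two years and the century-long job are invented purely for illustration:

  # Sketch of the "when to start" problem. Assume (for illustration only)
  # that computing speed doubles every T years, so a job needing n years
  # on today's hardware takes n / 2**(d / T) years if we first wait d years;
  # it then delivers results at time d + n / 2**(d / T).

  def finish_time(n, d, T=2.0):
      """Total years until results arrive if the start is delayed d years."""
      return d + n / 2 ** (d / T)

  n = 100.0  # a computation needing a century on current machines
  best_d = min(range(200), key=lambda d: finish_time(n, d))
  print(f"start now: results in {finish_time(n, 0):.1f} years")
  print(f"wait {best_d} years: results in {finish_time(n, best_d):.1f} years")

On these invented numbers it pays to wait about a decade, but no longer; the note's hypothesis amounts to finish_time decreasing for every d, so that there would always be something gained by waiting a little longer.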
7
I am of course being somewhat playful here. There are very large issues lurking, most especially the so-called measurement problem. If one believes that there are non-deterministic processes that drastically and uncontrollably alter the wave function, then we can resort to the multiple simultaneous simulation model outlined immediately below.
8
Reflection upon the superduper computer simulation thought experiment
suggests a purely philosophical question. Could we define physicalism in terms of
this imaginary computer implementation of final physics? We might try something
like this: physicalism is the doctrine that everything that occurs/exists in the actual
world would have its exact counterpart in a final physics computer simulation of the
world, or that the simulation would be, in some appropriate sense, indistinguishable
from the actual world. Such a formulation has the advantage of automatically including what Hellman and Thompson (1975) call the principle of 'physical exhaustion'. But it obviously requires a clearer specification. However, I think a
more direct characterization of physicalism in terms of minimal physical duplicates
of the actual world as pioneered by David Lewis (Lewis 1983; see also Jackson
1998) is preferable. The simulation model would fit seamlessly into such an
approach without pretending to usurp its basic role in the definition of physicalism.
9
Some interesting preliminary work on this has been done by Warren Smith for
the case of ideal Newtonian mechanics and basic quantum mechanics; see Smith
1999. The idea that quantum computers (of various types) can be used to simulate a range of, possibly all, quantum mechanical systems goes back to the work of Richard Feynman (see e.g. Feynman 1982). Further work on the use of quantum computers
in physics simulation has been done by David Deutsch and Seth Lloyd (see
Deutsch 1985; Lloyd 1996).
10
There is an obvious epistemological problem here. How would one
distinguish a case of radical emergence from a theory of the basic constituents
which was merely false? One can imagine various ways such difficulties could be
addressed. For example, suppose (what we already know to be false) that our best theory of the elementary features could not explicate even the simplest chemical properties of atoms. After enough failure, we might have sufficient reason to come to believe that chemical properties were brute and radically emergent features of certain complex structures. We have seen enough to appreciate how radical a step that would be, however, putting great pressure on our theoretically informed views of nature.
11
It is also worth noting that supervenience and lack of genuine efficacy are
compatible. It is not hard to think of candidate examples of such epiphenomenal
supervenients. In addition to more or less bizarre properties, like the property of
existing in a universe with a prime number of goats, there are many ordinary
properties that appear to supervene on the physical but which lack genuine causal
efficacy: moral properties, aesthetic properties, monetary properties, etc. It strikes me as obvious that monetary properties, such as the property of being a one-dollar coin, have no causal powers (it might be different for the non-monetary mental property of believing that X is a one-dollar coin, of course). What could a one-dollar coin do that a perfect duplicate counterfeit could not? Perhaps, actually purchase something legally? But a legal purchase is not a purely causal transaction.
12
I do not want to be committed to any particular view of the nature of time here
and certainly not to the claim that the correct description of the world is one in
which states unfold in time. It is hard, and I won't try, to avoid talk of nature as temporally dynamic, but this way of talking can always, I think, be recast in terms of the constraints imposed by nature on the set of states which form allowable sequences. Such sequences can be viewed as 4-dimensional entities stretched across the temporal as well as the spatial dimensions.
13
This example is of course from Hilary Putnam (Putnam 1975, pp. 295 ff.). It
is important to remember that Putnam is explicitly concerned with the issue of
explanation and never questions that the fundamental physical features serve to
fully determine the possible motions of the peg and hole.
14
It is also possible that this confusion is part of the reason why philosophers
have failed after over 250 years of serious effort to come up with an acceptable
philosophical account of causation, and why the idea that science makes no reference to, nor needs any appeal to, causation is so persistent (this claim was famously made by Bertrand Russell (Russell 1917/1981, Chap. 9); for an updated defense see Norton 2003).
15
For an amusing but not altogether uninstructive example of this kind of
explanatory promiscuity, as well as others, I commend to the reader the country music song 'Third Rock from the Sun' by Joe Diffie, whose music video can be readily found on YouTube.
16
While Mill's methods are rather crude and modern statistical analyses provide a much more powerful and extensive tool kit, these newer methods stand as refinements rather than replacements of Mill's methods. A sophisticated philosophical extension of Mill's methods has been advanced as the 'manipulability' or 'interventionist' theory of causation (for an overview see Woodward 2008). This theory has been used in an attempt to show that there is no exclusion problem about mixing macro and micro causation, with particular reference to the problem of mental causation and the threat of epiphenomenalism (Shapiro and Sober 2007). I think the interventionist account is clearly set at the epistemic level and nicely shows, and exploits, the natural inter-level promiscuity we should expect to find there. For an interesting review of current psychological theories of inferring causes that provides insight into the folk theory of causation, as well as a defense of her own account, see Cheng (1997).
17
Taken together, these factors allow the (controversial) calculation of the effects on human health of ozone depletion, which the American Environmental Protection Agency estimates at about 70,000 deaths by 2075 (Reilly 1992).
18
For an engaging defense of a very strong form of this claim of scientific completeness, see the physicist Sean Carroll's post at the blog Cosmic Variance (Carroll 2010).
19
Note that Dennett says: "These patterns are objective – they are there to be detected – but from our point of view they are not out there independent of us, since they are patterns composed partly of our subjective reactions to what is out there; they are the patterns made to order for our narcissistic concerns" (Dennett 1987, p. 39).
20
A cursory search of the journal Lung Cancer turns up recent studies that go both ways on the coffee–lung cancer link. All the researchers are well aware of smoking as a possible confounder, of course.
21
In light of this example, one might wonder why we have an inequality in (C2) rather than requiring that P(A | C ∧ B) > P(A | B). We seek only to determine causal relevance amongst competing factors. Relative to A, C's relevance absorbs B's putative relevance. But although (C2) guarantees that C makes a difference, it is possible that B could modify C's causal influence, either strengthening or weakening it. Supposing that coffee either adds or subtracts power to tobacco smoke's carcinogenic influence, we can get (C2) to go either way.
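A toy calculation (in Python, with numbers invented purely for illustration) of how the comparison between P(A|C ∧ B) and P(A|B) mentioned in this note can flip, depending on whether B (coffee) amplifies or blunts the influence of C (smoking) on A (cancer):

  # Invented toy numbers: A = cancer, C = smoking, B = coffee drinking.
  # P(A|B) = P(A|C,B) P(C|B) + P(A|not-C,B) P(not-C|B)

  def p_a_given_b(p_c_given_b, p_a_cb, p_a_ncb):
      return p_a_cb * p_c_given_b + p_a_ncb * (1 - p_c_given_b)

  p_c_given_b = 0.5  # suppose half the coffee drinkers smoke

  # Case 1: coffee potentiates smoke, so P(A|C and B) exceeds P(A|B).
  p_a_cb, p_a_ncb = 0.30, 0.01
  print(p_a_cb > p_a_given_b(p_c_given_b, p_a_cb, p_a_ncb))  # True

  # Case 2 (contrived): coffee blunts smoke so thoroughly that coffee-drinking
  # smokers fare better than coffee-drinking non-smokers; the inequality flips.
  p_a_cb, p_a_ncb = 0.02, 0.06
  print(p_a_cb > p_a_given_b(p_c_given_b, p_a_cb, p_a_ncb))  # False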
22
My results are contrary to an earlier attempt by Robert Brandon to use the screening off test to show that high-level features can take efficacy away from low-level features in the context of evolutionary biology (see Brandon 1982; Brandon's approach was criticized in Sober 1992).
23
For a critique of Yablo's approach, see Cox (2008).
24
One possible snare: the conscious apprehension of average family size appears able to cause things, but examples like these are (if they are examples of efficacy of any kind) examples of the efficacy of representational states of mind, not of the efficacy of what is represented. Thoughts about unicorns have their effects, but admitting this does not concede any causal powers to unicorns.
25
I must note these remarks of Richard Lewontin. Though he made them in a debate about the nature of IQ and IQ testing, I think the point ought to be generalized: "It is important to point out that the distinction between mental constructs and natural attributes is more than a philosophical quibble, even when those constructs are based on physical measurements. Averages are not inherited; they are not subject to natural selection; they are not physical causes of any events" (Lewontin 1982).

Chapter 9
1
What I am calling the generation problem has very close connections both to the problem of the 'explanatory gap' (see Levine 1983) and to what is called the 'hard problem' of consciousness (see Chalmers 1996).
2
The core idea of the view I will develop here can be traced back at least to Leibniz who, speaking of what he called 'aggregates' (which in my treatment are the conservative emergents), holds that "it is appropriate to conceive of them as a single thing . . . but then all these unities are made complete only by thoughts and appearances" (Leibniz 1967, p. 126). Leibniz himself cites Democritus as holding a similar view. More recent philosophical work which explores somewhat similar, if perhaps rather more extreme, ideas includes Merricks (2001) and van Inwagen (1990). The philosophical debate on the nature and status of ordinary objects remains in flux.
3
Other conscious creatures may live in entirely different worlds insofar as they conceptualize things differently. Such creatures need not be particularly sophisticated, but they do need to be conscious or else there is no sense (beyond our own projection) in which distinctive high level patterns play any role for them. For a fascinating attempt to get inside the mind of a simple creature whose way of looking at the world is quite different from ours, see the discussion of the life of a tropical jumping spider in Harland and Jackson (2004).
4
A very rich conception of how this might work can be found in Richard Boyd's discussion of the general structure of natural kinds (Boyd 1999). Although Boyd focuses on biological kinds, his treatment is intended to extend to high level structure in general. One might hope that Boyd's approach could be integrated with our discussion of the emergence of the classical world in Chap. 6, perhaps to serve as some kind of overall general superstructure of conservative emergence. Boyd himself sees his approach as part of a much broader scientific realism which would resist the austerity of the SPW as I am presenting it.
5
Once again, it is important to bear in mind the distinction between the
explanatory and metaphysical domains of causation, or, as I labeled it, the
difference between causation and kausation. There is an interesting confluence of
the two in this case. There is no doubt that the 2nd law provides a great deal of
explanatory power and intelligibility across a wide range of applications. But it is
also true that the appreciation of how the lower level features conspire to verify
the 2nd law deepens our understanding of the nature of thermodynamics. It
nonetheless remains true that the drivers of all thermodynamical systems are the
fundamental low level kausal processes.
6
There is a host of benighted designs that aim to exploit capillary action in
various ways to achieve perpetual motion. One of the earliest was a system of
sponges and weights invented by William Congreve around 1827 (see Ord-Hume
1977, Chap. 6 for a survey of capillary action and sponge wheel based attempts).
7
It's worth noting that entropy increase and information degradation are themselves both reflections of non-fundamentality. At least, insofar as basic theory is what we called 'total' in Chap. 7 above, there is perfect information preservation as an isolated system evolves. Since it does seem that our basic physics aims at totality, how can this be? Because systems of interest to us are not isolated (only the whole universe, or maximal causally interacting parts of it, are isolated) and the information spreads out from the system at issue into the larger system plus environment (see p. 71 above).
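In compact form: for an isolated system evolving unitarily, ρ → UρU†, the von Neumann entropy

  S(ρ) = −Tr(ρ ln ρ)

is exactly conserved, so any apparent loss of information must be charged to our restriction of attention to a subsystem that is not isolated.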
8
The paradox of consciousness is vaguely analogous to an aspect of the measurement problem, which arose early in the struggle to interpret quantum mechanics. The theory implies that systems will form superpositions of all their possible outcome states. For example, a detector monitoring the spin of a particle whose spin is a superposition of spin states should, if regarded as a quantum system, itself enter a superposition of detector states. But if conscious beings are also simply quantum systems, then they too should go into superpositions of distinct observations when they consult the detector. But we know that consciousness never appears to itself in such an indeterminate state. Thus consciousness appears to have a distinctive and irreducible place in the quantum world (see Wigner 1962 for the classic presentation). Of course, there are many ways to respond to or evade this problem. I only wish to point out the structural similarity to the paradox developed here.
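Schematically, the linearity of the quantum dynamics takes a superposed spin plus a ready detector into an entangled superposition of joint outcomes:

  (α|↑⟩ + β|↓⟩) ⊗ |D_ready⟩ → α|↑⟩|D_up⟩ + β|↓⟩|D_down⟩,

and nothing in the unitary dynamics itself halts the spread of this superposition into the observer who consults the detector.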
9
I discuss this case in further detail, as well as what I regard as similar failures of naturalization in both Ruth Millikan's and Jerry Fodor's accounts of meaning, in Seager 2000b.
10
Aristotle's argument was endorsed and extended by Franz Brentano (1874/1973, pp. 130 ff.) and remains a prominent doctrine of the phenomenological school of philosophy. Currently, there is something of a renaissance of interest in reflexive accounts of consciousness in analytic philosophy, with several books out or on the way (e.g. Janzen 2008; Kriegel 2009; Brook and Raymont, forthcoming). At least one modern theory of consciousness denies the essential reflexivity of consciousness but follows Aristotle in requiring that conscious states be such that their subjects must be aware of them. This is the so-called 'higher order thought' theory of consciousness espoused by, for example, David Rosenthal and (in a somewhat different form) Peter Carruthers (see Rosenthal 2005; Carruthers 2000). HOT theory halts the regress by accepting that there are some non-conscious mental states. In particular, the thoughts which confer consciousness on certain mental states in virtue of being about them are not themselves conscious, so there need be no conscious awareness of every conscious mental state, contrary to Aristotle and Brentano.
11
Recognition of the importance of the fact that consciousness is an intrinsic
property goes back at least to Leibniz and still funds some radical ideas about the
nature of consciousness and its place in nature (see Strawson 2006; see also Seager
2006).

Chapter 10
1
Which is not to say that it was not noticed in one way or another long ago; for an
historical survey of the problem see Seager (2007).
2
See Timothy O'Connor's 'Emergent Properties' (1994) for an endorsement and philosophical development of a radical emergentism.
3
We have to add the caveat about fundamental laws since laws of nature can be conservatively emergent in the usual sense that they are determined by the fundamental laws. For example, the Wiedemann-Franz law, which states a positive correlative relationship between the thermal and electrical conductivity of a metal, is an emergent or derived law; it depends on the fact that both heat conduction and electrical conduction depend on the presence of free electrons in metals. The radical emergentist of course posits the existence of primitive, irreducible laws of emergence.
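For reference, the quantitative form of the Wiedemann-Franz law relates a metal's thermal conductivity κ and electrical conductivity σ at absolute temperature T by

  κ/σ = LT,

where the Lorenz number L ≈ 2.44 × 10⁻⁸ W Ω K⁻² is approximately the same for all metals, just as one would expect if a single population of free electrons carries both currents.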
4
The principle of the conservation of energy has been doubted by highly reputable scientists. In the 1930s Niels Bohr briefly advocated rejecting the principle in the face of anomalous evidence from radioactive decay, but the postulation of the almost undetectable neutrino by Wolfgang Pauli offered an escape hatch (the name 'neutrino' was Fermi's, however). The neutrino was finally detected in 1956 by Frederick Reines and Clyde Cowan.
5
There is a way for the radical emergentist to avoid many of the difficulties advanced above, which is to take the unorthodox (for traditional emergentists) route of epiphenomenalism. Standard epiphenomenalism is a kind of radical emergence which says that the brain causes conscious states as non-physical attendants which have no causal effect back on the physical realm. This avoids the danger of violating conservation laws, but the price is high: the supposed benefit of emergentism is that it does not displace consciousness from the physical world, and epiphenomenalism gives up exactly that. Plus, of course, epiphenomenalism has difficulties of its own which are quite formidable, such as the charge that epiphenomenalism entails that our conscious states are not the source of our (supposed) knowledge of them, or even our knowledge that we are conscious at all (for a detailed discussion from a pro-epiphenomenalism stance see Robinson 2004). It is also worth pointing out here that Cartesian dualism can also be regarded as a form of radical emergentism, but one in which the emergent is not a property of a physical entity but a novel substance brought into existence under certain conditions. Of course, for Descartes it is no law of nature that governs the emergence of mental substance but divine decree. Obviously, substance dualism has all the difficulties of radical emergence (conflicts with conservation laws, no sign of mental action, etc.) plus difficulties of its own (for example the problem of how two utterly disparate substances can be causally linked, a problem that was raised by Princess Elizabeth in correspondence with Descartes as far back as 1643 (their correspondence can be found in Shapiro 2007) and very forcefully raised again recently by Jaegwon Kim with what he calls the 'pairing problem'; see Kim 2005).
6
Panpsychism is an ancient doctrine with a lengthy philosophical pedigree ranging from thinkers of the ancient world to contemporary philosophers. See Skrbina (2005) for a detailed historical study and Seager and Allen-Hermanson (2008) for a theoretical overview. There has been a recent revival of interest in panpsychism amongst analytic philosophers, occasioned by the persistently intractable difficulty of integrating consciousness into the SPW. The first spark of panpsychism's renewal goes back to Thomas Nagel's argument for panpsychism (Nagel 1979). See also Van Cleve (1990); Chalmers (1996), Chap. 8; Seager (1995); Strawson (2006) and the recent collections Freeman (2006); Skrbina (2009); Blamauer (2011).
7
Russell is part of a tradition in which perception is founded on basic elements often referred to as 'sense-data'. These were not generally regarded as mental in nature. After all, sense-data have spatial extent and qualities like redness which seem foreign to mental states (how could a mental state be red?). But the mental states in question are states such as 'appearing red', and these are certainly mental and do not require that anything actually be red in order to occur in some mind.
8
In the words of Spinoza himself: "The modes of each attribute have God for their cause only insofar as he is considered under the attribute of which they are modes, and not insofar as he is considered under any other attribute" (Spinoza 1677/1985, Bk. 2, Prop. 6, p. 450).
9
This inference seems decidedly too swift. It is not easy to see how to build an organism out of nothing but bosons, or nothing but electrons. So it seems there is logical room for some of the physically fundamental entities to lack mentality even if we grant the overall cogency of Nagel's argument. The view that at least some but perhaps not all of the physically fundamental constituents of the world possess some sort of mental properties has been called 'micropsychism' by Galen Strawson (see Strawson 2006). On the other hand, if we are willing to grant that some physically basic entities possess mental properties, there does not seem to be much reason to balk at their universal distribution. At the very least this avoids the question of exactly why certain fundamental physical entities possess while others lack mentality.
10
It is worth forestalling a possible objection. Quantum superpositions are not
vague. Williams and Barnes (2009), following on the famous argument of Gareth
Evans (Evans 1978), provide an interesting argument that an underlying
determinate ontology is incompatible with the existence of vague objects.
11
The essential indeterminacy of mountainhood (and all other supposedly vague properties) is something which so-called 'epistemicists' about vagueness deny. Philosophers willing to bite this bullet must insist that there is a critical fact which determines whether or not X is a mountain, but that this fact is for various reasons more or less completely inaccessible to us (see Williamson 1994 for an extended defense of epistemicism).
12
The notion of large simples is not essentially connected to emergence. There could be large simples that are part of the fundamental furniture of the world from its inception or a part of its eternal structure. Newton's conception of absolute space might count as an example. At least, certain philosophers have held the view that space is an extended simple. Absolute space is not the causal result of any interaction of elementary physical features but instead stands by itself. Yet it arguably does not have parts, save in some purely notional sense of having regions within it; it is not composed of these parts and it cannot be divided into them. This is a complex issue and I only put it forward as an illustrative example; for discussion see Holden (2004).
13
I attempt to flesh out this suggestion in more detail in Seager (2010). The viability of this general strategy is examined and defended in Jonathan Powell's PhD dissertation (Reading University); some of his work was presented at the Tucson 2010 'Towards a Science of Consciousness' conference. See also Hameroff and Powell (2009). The suggestion is also highly reminiscent of the fusion operation of Paul Humphreys, as discussed in Chap. 6 above (see Humphreys 1997a, b). The crucial difference is that the current suggestion does not assume that consciousness arises from the fusion of purely physical precursors.
14
The prospect of refuting physicalism has always been one of the driving forces behind ESP research. Despite more than a century of at least quasi-scientific efforts to isolate a demonstrable paranormal effect, there has been no decisive, clear and uncontroversial experimental evidence for its existence, still less any successful technology dependent upon it. Although it is hard to prove a negative, this does not seem to be a thriving, progressive research program. It is worth remembering that a thinker of no less stature than Alan Turing regarded ESP as a strong, though not irrefutable, argument against the idea that a computer could exhibit intelligent thought, and also made the remarkable claim that "the statistical evidence, at least for telepathy, is overwhelming" (Turing 1950, p. 453). Sadly, Turing, along with C. D. Broad, was largely persuaded by the then highly influential card guessing experiments of Graham Soal, whose results were later shown to be fraudulent (see Markwick 1978). In Broad's case there was an underlying and amusing reason for his interest in the paranormal. He professed not to really care very much about the question of post-mortem survival but eagerly if forlornly hoped that ESP research would show that "the scientific vision of the world may prove to be as inadequate as it certainly is arrogant and ill-informed" (Broad 1962, p. x).
15
Though these three thinkers provide important tools for developing an anti-realist view of science which can be applied to the problem of consciousness, I do not mean to imply that they are all anti-realists themselves. In fact, both Dupré and Cartwright are happy to believe in the existence of unobservable entities discovered by science. Nonetheless, in their opposition to the unity of science (Dupré) and 'physics fundamentalism' (Cartwright) they provide grist for the anti-realist mill, especially with regard to questions about the nature of what the SPW regards as high level, conservatively emergent features of the world.
16
For an interesting and highly negative assessment of what he calls the 'perfect model model' of science see Teller (2001), which emphasizes the basic disconnect between the epistemic and explanatory role of models and the unimaginable complexity of any hypothetical model which completely captures some portion of reality.
17
This attitude is nicely expressed in the report of the United States LHC Communication Task Force. The aim of the Task Force is to sustain and build support for particle physics in the USA by advertising the involvement of American scientists and science funding agencies in the LHC. The first strategy towards this goal listed in their report is to "promote recognition by key audiences of the value to the nation of particle physics, because of its unique role in discovery of the fundamental nature of the universe" (Banegas et al. 2007, p. 5).
References

Abbott, A.: CERN considers chasing up hints of Higgs boson. Nature 407(6800), 8 (2000)
Aczel, A.: Entanglement: The Greatest Mystery in Physics. Raincoast Books, Vancouver (2002)
Alexander, S.: Space, Time and Deity. Macmillan & Co, London (1920)
Anderson, P.W.: More is different. Science 177(4047), 393–396 (1972)
Andrews, K., Murphy, L., et al.: Misdiagnosis of the vegetative state: retrospective study in a rehabilitation unit. Br. Med. J. 313(7048), 13–16 (1996)
Ariew, A., Cummins, R., et al. (eds.): Functions: New Essays in the Philosophy of Psychology
and Biology. Oxford University Press, Oxford (2002)
Arp, H.: Quasars, Redshifts and Controversies. Interstellar Media, Berkeley (1987)
Arp, H.: Seeing Red: Redshifts, Cosmology and Academic Science. Apeiron, Montreal (1998)
Arp, H.: Catalogue of Discordant Redshift Associations. Apeiron, Montreal (2003)
Ashby, N.: Relativity in the global positioning system. Living Rev. Relativ. 6(1). (2003). URL
http://www.livingreviews.org/lrr-2003-1
Avrutin, S.: Linguistics and agrammatism. GLOT Int. 5(3), 87–97 (2001)
Awh, E., Gehring, W.: The anterior cingulate cortex lends a hand in response selection. Nat. Neurosci. 2(10), 853–854 (1999)
Bacciagaluppi, G.: The role of decoherence in quantum mechanics. In: Zalta, E. N. (ed.) The
Stanford Encyclopedia of Philosophy. The Metaphysics Research Lab (2007). URL
http://plato.stanford.edu/archives/fall2008/entries/qm-decoherence
Bagarinao, E., Nakai, T., et al.: Real-time functional MRI: development and emerging applications. Magn. Reson. Med. Sci. 5(3), 157–165 (2006)
Baldwin, J. (ed.): Dictionary of Philosophy and Psychology. The Macmillan Company, New
York (1901)
Banegas, D. et al.: At discoverys horizon: report of the task force on US LHC communication.
Technical report, National Science Foundation. (2007). URL http://wwwlhcus.org/
communication/documents/US_ILC_Report_101807_v3.pdf
Barbour, J.: The End of Time: The Next Revolution in Physics. Oxford University Press, Oxford
(2000)
Barnes, J.: The Presocratic Philosophers. Arguments of the Philosophers, Routledge, London
(1983)
Baron-Cohen, S.: Mindblindness: An Essay on Autism and Theory of Mind. MIT Press,
Cambridge (1995)
Barrow, J., Tipler, F.: The Anthropic Cosmological Principle. Oxford University Press, Oxford
(1988)
Bartels, A., Zeki, S.: The neural basis of romantic love. NeuroReport 11(17), 3829–3834 (2000)
Bartels, A., Zeki, S.: The neural correlates of maternal and romantic love. Neuroimage 21(3), 1155–1166 (2004)

Bass, S.: How does the proton spin? Science 315, 1672–1673 (2007)
Batterman, R.: The Devil in the Details: Asymptotic Reasoning in Explanation, Reduction and Emergence. Oxford University Press, Oxford (2002)
Batterman, R.: Response to Belot. Philos. Sci. 72, 154–163 (2005)
Bauer, R.: Autonomic recognition of names and faces in prosopagnosia: a neuropsychological application of the guilty knowledge test. Neuropsychologia 22, 457–469 (1984)
Bedau, M.: Weak emergence. In: Tomberlin, J. (ed.) Philosophical Perspectives 11: Mind, Causation, and World, pp. 375–399. Blackwell, Oxford (1997)
Bedau, M., Humphreys, P.: Emergence: Contemporary Readings in Philosophy and Science. MIT
Press, Cambridge (2008)
Begelman, M., Rees, M.: Gravity's Fatal Attraction: Black Holes in the Universe. W. H. Freeman, New York (1996)
Belot, G.: Chaos and fundamentalism. In: Howard, D. (ed.) PSA 98, pp. S454–S465. Philosophy of Science Association, Newark, DE. (16th Biennial Meetings of the Philosophy of Science Association, Part II: Symposium Papers) (2000)
Belot, G.: Whose devil? Which details? Philos. Sci. 72, 128–153 (2005)
Belot, G., Earman, J.: Chaos out of order: quantum mechanics, the correspondence principle and chaos. Stud. Hist. Philos. Mod. Phys. 28, 147–182 (1997)
Berlekamp, E., Conway, J., et al.: What is life? In: Winning Ways for Your Mathematical Plays, vol. 2, Chap. 25. Academic Press, New York (1982)
Bickle, J.: Multiple realizability. In: Zalta, E. N. (ed.) The Stanford Encyclopedia of Philosophy.
The Metaphysics Research Lab, fall 2008 ed. (2008)
Blackburn, S.: Supervenience Revisited. In: Hacking, Ian (ed.) Exercises in Analysis: Essays by
Students of Casimir Lewy. Cambridge University Press, Cambridge (1985)
Blackmore, S.: Alien abduction. New Sci. 144(1952), 29–31 (1994)
Blackmore, S.: The Meme Machine. Oxford University Press, Oxford (2000)
Blamauer, M. (ed.): The Mental as Fundamental: New Perspectives on Panpsychism. Ontos
Verlag, Frankfurt (2011)
Blitz, D.: Emergent Evolution: Qualitative Novelty and the Levels of Reality. Kluwer, Dordrecht
(1992)
Bloch, F.: Nuclear induction. Phys. Rev. 70, 460–474 (1946)
Block, N.: Do causal powers drain away? Philos. Phenomenol. Res. 67, 133–150 (2003)
Block, N., Stalnaker, R.: Conceptual analysis, dualism, and the explanatory gap. Philos. Rev. 108, 1–46 (1999)
Boden, M. (ed.): The Philosophy of Artificial Life. Oxford University Press, Oxford (1996)
Bogen, J.: Physiological consequences of complete or partial commissural section. In: Apuzzo, M. (ed.) Surgery of the Third Ventricle, 2nd edn., pp. 167–86. Williams and Wilkins, Philadelphia (1998)
Bohm, D., Hiley, B.: The Undivided Universe: An Ontological Interpretation of Quantum
Mechanics. Routledge, London (1993)
Bontly, T.: The supervenience argument generalizes. Philos. Stud. 109, 75–96 (2002)
Boyd, R.: Homeostasis, species, and higher taxa. In: Wilson, R. (ed.) Species: New Interdisciplinary Studies, pp. 141–185. MIT Press, Cambridge (1999)
Bozarth, M.: Pleasure systems in the brain. In: Warburton, D.M. (ed.) Pleasure: The Politics and the Reality, pp. 5–14. Wiley, New York (1994)
Brandon, R.: The levels of selection. In: Asquith, P., Nickles, T. (eds.) PSA 1980, vol. 1, pp. 315–323. Philosophy of Science Association, East Lansing (1982)
Braun, A., Balkin, T., et al.: Regional cerebral blood flow throughout the sleep–wake cycle: An H2(15)O PET study. Brain 120, 1173–1197 (1997)
Bremner, J.D.: Does stress damage the brain? Biol. Psychiatry 45(7), 797–805 (1999)
Brentano, F.: Psychology from an Empirical Standpoint. International Library of Philosophy and Scientific Method. Routledge and Kegan Paul, London. (Original edition edited by O. Kraus; English edition edited by L. McAlister. Translated by A. Rancurello, D. Terrell and L. McAlister) (1874/1973)
Bringsjord, S., Taylor, J.: P=NP. (2004). URL http://arxiv.org/abs/cs/0406056
Broad, C.D.: Mind and its Place in Nature. Routledge and Kegan Paul, London (1925)
Broad, C.D.: Lectures on Psychical Research: Incorporating the Perrott Lectures Given in
Cambridge University in 1959 and 1960. Routledge and Kegan Paul, London (1962)
Broadfield, D., Holloway, R., et al.: Endocast of Sambungmacan 3 (Sm 3): A new Homo erectus from Indonesia. Anat. Rec. 262, 369–379 (2001)
Brock, W.: The Norton History of Chemistry. W. H. Norton & Co., New York (1993)
Brook, A., Raymont, P.: The unity of consciousness. In: Zalta, E. N. (eds.) The Stanford
Encyclopedia of Philosophy. Stanford University: The Metaphysics Research Lab, winter
2009 ed. (2010). URL http://plato.stanford.edu/archives/win2009/entries/consciousness-unity
Brook, A., Raymont, P.: A Unified Theory of Consciousness. MIT Press, Cambridge,
(forthcoming)
Brownell, G.: A history of positron imaging. Presentation given in celebration of the 50th year of services by the author to the Massachusetts General Hospital on 15 Oct 1999. URL http://www.petdiagnostik.ch/PET_History/alb.html
Brush, S.: History of the Lenz-Ising model. Rev. Mod. Phys. 39(4), 883–893 (1967)
Buckingham, E.: On physically similar systems: illustrations of the use of dimensional analysis. Phys. Rev. 4(4), 345–376 (1914)
Buckner, R.: The hemodynamic inverse problem: making inferences about neural activity from measured MRI signals. Proc. Nat. Acad. Sci. 100(5), 2177–2179 (2003)
Buller, D.: Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human
Nature. MIT Press (Bradford Books), Cambridge, (2006)
Buss, D. (ed.): The Handbook of Evolutionary Psychology. Wiley, Hoboken (2005)
Byers, N.: E. Noether's discovery of the deep connection between symmetries and conservation laws. The Heritage of Emmy Noether in Algebra, Geometry and Physics, vol. 12 of Israel Mathematical Conference (1999). URL http://www.physics.ucla.edu/~cwp/articles/noether.asg/noether.html
Capgras, J., Reboul-Lachaux, J.: L'illusion des sosies dans un délire systématisé chronique. Bulletin de la Société de Médecine Mentale 11, 6–16. (English translation available in History of Psychiatry (1994) 5, 119–130) (1923)
Carroll, S.: Why is the universe accelerating? In: Freedman, W. (ed.) Measuring and Modeling the Universe, pp. 235–55. Cambridge University Press, Cambridge (2004). URL http://xxx.lanl.gov/abs/astro-ph/0310342
Carroll, S.: The laws underlying the physics of everyday life are completely understood. Cosmic
Variance, 23 Sept (2010). URL http://blogs.discovermagazine.com/cosmicvariance/2010/09/
23/the-laws-underlyingthe-physics-of-everyday-life-are-completely-understood/
Carruthers, P.: Phenomenal Consciousness: A Naturalistic Theory. Cambridge University Press,
Cambridge (2000)
Carter, B.: The anthropic principle and its implications for biological evolution. Philos. Trans. R. Soc. Lond. A310(1512), 347–363 (1983)
Cartwright, N.: The Dappled World. Cambridge University Press, Cambridge (1999)
Cartwright, N.: Reply to Anderson. Stud. Hist. Philos. Mod. Phys. 32(3), 495–497 (2001)
Cartwright, N.: Another philosopher looks at quantum mechanics, or what quantum theory is not. In: Ben-Menahem, Y. (ed.) Hilary Putnam, Contemporary Philosophy in Focus, pp. 188–202. Cambridge University Press, Cambridge (2005)
Caston, V.: Aristotle on consciousness. Mind 111(444), 751–815 (2002)
Chaitin, G.: Leibniz, information, math and physics. In: Löffler, W., Weingartner, P. (eds.) Wissen und Glauben/Knowledge and Belief, Internationalen Wittgenstein-Symposiums, pp. 277–286. BV & HPT, Vienna (2004). URL http://www.umcs.maine.edu/~chaitin
Chalmers, D.: The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, Oxford (1996)
Chalmers, D.: Facing up to the problem of consciousness. In: Shear, J. (ed.) Explaining Consciousness, pp. 9–32. MIT Press, Cambridge (1997)
Chalmers, D., Jackson, F.: Conceptual analysis and reductive explanation. Philos. Rev. 110, 315–361 (2001)
Chandrasekhar, S.: Shakespeare, Newton, and Beethoven. In: Truth and Beauty: Aesthetics and Motivations in Science, pp. 29–58. University of Chicago Press, Chicago (1987)
Chang, K.: Here, There and Everywhere: A Quantum State of Mind. New York Times (July 11)
(2000)
Cheng, P.: From covariation to causation: a causal power theory. Psychol. Rev. 104(2), 367–405 (1997)
Choisser, B.: Face blind. (2007). URL http://choisser.com/faceblind
Chomsky, N.: New Horizons in the Study of Language and Mind. Cambridge University Press,
Cambridge (2000)
Churchland, P., Sejnowski, T.: The Computational Brain. MIT Press, Cambridge, MA (1992)
Churchland, P.: Some reductive strategies in cognitive neurobiology. Mind 95, 279–309 (1986). (Reprinted in Churchland's A Neurocomputational Perspective: The Nature of Mind and the Structure of Science, MIT Press, Cambridge, 1991)
Clayton, P., Davies, P. (eds.): The Re-Emergence of Emergence: The Emergentist Hypothesis
From Science to Religion. Oxford University Press, Oxford (2006)
Van Cleve, J.: Mind-dust or magic? Panpsychism versus emergence. Philos. Perspect. 4, 215–226 (1990)
Collini, E., Wong, C.Y., et al.: Coherently wired light-harvesting in photosynthetic marine algae at ambient temperature. Nature 463(7281), 644–647 (2010)
Connolly, J., Anderson, R., et al.: FMRI evidence for a 'parietal reach region' in the human brain. Exp. Brain Res. 153(2), 140–145 (2003)
Cooper, W.: William James's theory of mind. J. Hist. Philos. 28(4), 571–593 (1990)
Copeland, J.: Hypercomputation. Mind. Mach. 12, 461–502 (2002)
Copeland, J. (ed.): The Essential Turing. Oxford University Press, Oxford (2004)
Copeland, J., Sylvan, R.: Beyond the universal Turing machine. Aust. J. Philos. 77(1), 46–66 (1999)
Corradini, A., O'Connor, T. (eds.): Emergence in Science and Philosophy. Routledge, London (2010)
Cosmides, L.: The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition 31, 187–276 (1989)
Cover, J.A.: Non-basic time and reductive strategies: Leibniz's theory of time. Stud. Hist. Philos. Sci. 28(2), 289–318 (1997)
Cowey, A., Stoerig, P.: Blindsight in monkeys. Nature 373, 247–249 (1995)
Cox, E.: Crimson brain, red mind: Yablo on mental causation. Dialectica 62(1), 77–99 (2008)
Cronin, J., Pizzarello, S.: Enantiomeric excesses in meteoritic amino acids. Science 275(5302), 951–955 (1997)
Damasio, A.: Descartes' Error: Emotion, Reason and the Human Brain. G. P. Putnam's Sons, New York (1994)
Damasio, A.: The Feeling of What Happens: Body and Emotion in the Making of Consciousness.
Harcourt Brace and Co., New York (1999)
Darwin, C.: On the Origin of Species by Means of Natural Selection. Murray, London (1859)
Davis, M.: Computability, computation, and the real world. In: Termini, S. (ed.) Imagination and Rigor: Essays on Eduardo R. Caianiello's Scientific Heritage. Springer, Milan (2006)
Dawkins, R.: The Selfish Gene. Oxford University Press, Oxford (1989)
Dennett, D.: Intentional systems. J. Philos. 68(4), 87–106 (1971). (Reprinted in Dennett's Brainstorms, Bradford Books, Montgomery (1978))
Dennett, D.: A cure for the common code. In: Brainstorms: Philosophical Essays on Mind and Psychology, pp. 90–108. Bradford Books, Montgomery. This first appeared as Dennett's review of Jerry Fodor's The Language of Thought in the journal Mind, April 1977 (1978a)
Dennett, D.: Why the law of effect will not go away. In: Brainstorms: Philosophical Essays on Mind and Psychology. Bradford Books, Montgomery. Originally published in The Journal of Theory of Social Behavior 2, 169–187 (1978b)
Dennett, D.: Elbow Room: The Varieties of Free Will Worth Wanting. MIT Press (Bradford Books), Cambridge (1984)
Dennett, D.: The Intentional Stance. MIT Press, Cambridge (1987)
Dennett, D.: Real patterns. J. Philos. 88, 27–51 (1991). (Reprinted in Dennett's Brainchildren: Essays on Designing Minds, MIT Press, Cambridge, 1998)
Dennett, D.: Darwin's Dangerous Idea: Evolution and the Meanings of Life. Simon and Schuster, New York (1995)
Deutsch, D.: Quantum theory, the Church-Turing principle and the universal quantum computer. Proc. Royal Soc. A 400(1818), 97–117 (1985)
DiSalvo, F.J.: Thermoelectric cooling and power generation. Science 285(5428), 703–706 (1999)
Dostoevsky, F.: The Idiot. Harmondsworth: Penguin. (Translated by David Magarshack). (1869/
1955)
Dretske, F.: Naturalizing the Mind. MIT Press, Cambridge (1995)
Dupré, J.: The Disorder of Things. Harvard University Press, Cambridge (1993)
Dürr, S., et al.: Ab initio determination of light hadron masses. Science 322(5905), 1224–1227 (2008)
Earman, J.: A Primer on Determinism. Reidel, Dordrecht (1986)
Earman, J., Norton, J.: Forever is a day: supertasks in Pitowsky and Malament-Hogarth spacetimes. Philos. Sci. 60(1), 22–42 (1993)
Edelman, G.: Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, New
York (1987)
Eigler, D.M., Schweizer, E.K.: Positioning single atoms with a scanning tunnelling microscope. Nature 344, 524–526 (1990)
El-Hai, J.: The Lobotomist: A Maverick Medical Genius and His Tragic Quest to Rid the World
of Mental Illness. Wiley, Hoboken (2005)
Ellis, H., Young, A.: Accounting for delusional misidentifications. Br. J. Psychiatry 157, 239–248 (1990)
Evans, G.: Can there be vague objects? Analysis 38, 208 (1978)
Everett, H.: Relative state formulation of quantum mechanics. Rev. Mod. Phys. 29, 454–462 (1957)
Feynman, R.: Simulating physics with computers. Int. J. Theor. Phys. 21, 467–488 (1982)
Feynman, R., Leighton, R., et al.: The Feynman Lectures on Physics. Addison-Wesley, Reading
(1963)
Fodor, J.: The Modularity of Mind. MIT Press, Cambridge (1983)
Ford, J.: What is chaos, that we should be mindful of it? In: Davies, P. (ed.) The New Physics, pp. 348–372. Cambridge University Press, Cambridge (1989)
Fredkin, E.: Digital mechanics: an informational process based on reversible universal CA. Physica D 45, 254–270 (1990)
Fredkin, E.: An introduction to digital philosophy. Int. J. Theor. Phys. 42(2), 189–247 (2003)
Fredkin, E.: Five big questions with pretty simple answers. IBM J. Res. Dev. 48(1), 31–45 (2004)
Freeman, A. (ed.): Consciousness and Its Place in Nature. Imprint Academic, Exeter (2006)
Friedman, J., Patel, V., et al.: Quantum superposition of distinct macroscopic states. Nature 406(6791), 43–46 (2000)
Frisch, U., Hasslacher, B., et al.: Lattice gas automata for the Navier-Stokes equation. Phys. Rev. Lett. 56(14), 1505–1508 (1986)
Galilei, G.: Sidereus Nuncius (The Starry Messenger). The University of Chicago Press, Chicago
(1610/1989)
Gardner, M.: The fantastic combinations of John Conway's new solitaire game 'life'. Sci. Am. 223, 120–123 (1970)
Gardner, M.: Wheels, Life, and Other Mathematical Amusements. W. H. Freeman, New York
(1983)
Gauthier, I., Skudlarski, P., et al.: Expertise for cars and birds recruits brain areas involved in face recognition. Nat. Neurosci. 3(2), 191–197 (2000)
Gazzaniga, M.: The Bisected Brain. Appleton-Century-Crofts, New York (1970)
Gazzaniga, M.: Who's in Charge? Free Will and the Science of the Brain. Harper Collins, New York (2011)
Ge, Z., Marsden, J.: Lie-Poisson Hamilton-Jacobi theory and Lie-Poisson integrators. Phys. Lett. A 133(3), 134–139 (1988)
Gençay, R., Dechert, W.: The identification of spurious Lyapunov exponents in Jacobian algorithms. Stud. Nonlinear Dyn. Econ. 1(3), 145–154 (1996)
Ghirardi, G.: Collapse theories. In: Zalta E. N. (ed.), The Stanford Encyclopedia of Philosophy.
The Metaphysics Research Lab, Fall 2008 ed. (2008). URL http://plato.stanford.edu/
archives/fall2008/entries/qm-collapse
Glanz, J.: Cosmic motion revealed. Science 282(5397), 2156–2157 (1998)
Gleick, J.: Chaos: Making a New Science. Viking, New York (1987)
Glimcher, P.: Decisions, Uncertainty, and the Brain: The Science of Neuroeconomics. MIT Press,
Cambridge (2004)
Goodale, M., Milner, D.: Sight Unseen. Oxford University Press, Oxford (2004)
Granqvist, P., Fredrikson, M., et al.: Sensed presence and mystical experiences are predicted by suggestibility, not by the application of transcranial weak complex magnetic fields. Neurosci. Lett. 379(1), 1–6 (2005)
Greene, B.: The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the
Ultimate Theory. W. W. Norton and Co., New York (1999)
Gross, C.: Brain, Vision, Memory: Tales in the History of Neuroscience. MIT Press, Cambridge
(1999)
Grush, R.: Review of Port and van Gelder's Mind as Motion. Philos. Psychol. 10(2), 233–242 (1997)
Guth, A.: The Inflationary Universe: The Quest for a New Theory of Cosmic Origins. Addison-
Wesley (Helix Books), New York (1997)
Guth, A.: Inflation and the New Era of High-Precision Cosmology. MIT Physics Annual, pp. 28–39 (2002)
Hacking, I.: Representing and Intervening: Introductory Topics in the Philosophy of Natural
Science. Cambridge University Press, Cambridge (1983)
Hacking, I.: A tradition of natural kinds. Philos. Stud. 61(1/2), 109–126 (1991)
Hagan, S., Hameroff, S.R., et al.: Quantum computation in brain microtubules: decoherence and
biological feasibility. Phys. Rev. E 65(6), 061901 (2002)
Hameroff, S., Penrose, R.: Conscious events as orchestrated space-time selections. J. Conscious. Stud. 3, 36–53 (1996)
Hameroff, S., Powell, J.: The conscious connection: a psychophysical bridge between brain and pan-experiential quantum geometry. In: Skrbina, D. (ed.) Mind that Abides: Panpsychism in the New Millennium, pp. 109–127. Benjamins, Amsterdam (2009)
Hamilton, R., Messing, S., et al.: Rethinking the thinking cap: Ethics of neural enhancement using noninvasive brain stimulation. Neurology 76(2), 187–193 (2011)
Harel, D.: Computers, Ltd.: What They Really Can't Do. Oxford University Press, Oxford (2000)
Harland, D., Jackson, R.: Portia perceptions: the umwelt of an araneophagic spider. In: Prete, F.
(ed.) Complex Worlds From Simpler Nervous Systems. MIT Press, Cambridge (2004)
Haugeland, J.: Weak supervenience. Am. Philos. Q. 19, 93–103 (1982)
Hawking, S.: A Brief History of Time. Bantam, New York (1988)
Heath, R.: Electrical self-stimulation of the brain in man. Am. J. Psychiatry 120, 571–577 (1963)
Heath, R.: Pleasure response of human subjects to direct stimulation of the brain: Physiologic and psychodynamic considerations. In: Heath, R. (ed.) The Role of Pleasure in Human Behavior, pp. 219–243. Hoeber, New York (1964)
Hellman, G., Thompson, F.: Physicalism: ontology, determination and reduction. J. Philos. 72, 551–564 (1975)
Hellman, G., Thompson, F.: Physicalist materialism. Nous 11, 302–345 (1977)
Henson, S., Constantino, R., et al.: Lattice effects observed in chaotic dynamics of experimental populations. Science 294, 602–605 (2001)
Hercz, R.: The God helmet. Saturday Night 117(5) (2002)
Hirstein, W.: Brain Fiction: Self Deception and the Riddle of Confabulation. MIT Press,
Cambridge (2005)
Hofstadter, D.: Waking up from the Boolean dream, or, subcognition as computation. In: Metamagical Themas, pp. 631–65. Bantam Books, New York. Reprint of Hofstadter's article in the July 1982 issue of Scientific American (1986)
Hogan, C.: Interferometers as probes of Planckian quantum geometry. (2010). URL
http://arxiv.org/abs/1002.4880v26
Hogarth, M.: Does general relativity allow an observer to view an eternity in a finite time? Found. Phys. Lett. 5, 173–181 (1992)
Holden, T.: The Architecture of Matter: Galileo to Kant. Oxford University Press, Oxford (2004)
Holland, J.: Emergence: From Chaos to Order. Addison-Wesley (Helix Books), Reading (1998)
Hooke, R.: Micrographia. Octavo (CD-ROM), Oakland. This is an electronic photo-reproduction of the original 1665 edition (1665/1998)
Horgan, T., Tienson, J.: Connectionism and the Philosophy of Psychology. MIT Press (Bradford
Books), Cambridge (1996).
Hugdahl, K. (ed.): Handbook of Dichotic Listening: Theory, Methods, and Research. Wiley, New
York (1988)
Hughes, R.I.G.: The Structure and Interpretation of Quantum Mechanics. Harvard University
Press, Cambridge (1992)
Hughes, R.I.G.: The Ising model, computer simulation and universal physics. In: Morgan, M., Morrison, M. (eds.) Models as Mediators: Perspectives on Natural and Social Science, pp. 97–145. Cambridge University Press, Cambridge (2000)
Hume, D.: An Enquiry Concerning Human Understanding. Oxford University Press (Clarendon),
Oxford (2000)
Humphrey, N.: Consciousness Regained: Chapters in the Development of Mind. Oxford
University Press, Oxford (1984)
Humphreys, P.: Emergence, not supervenience. Philos. Sci. 64, S337–S345 (1997a)
Humphreys, P.: How properties emerge. Philos. Sci. 64, 1–17 (1997b)
Hurley, S.: Action, the unity of consciousness and vehicle externalism. In: Cleeremans, A. (ed.) The Unity of Consciousness: Binding, Integration and Dissociation, pp. 78–91. Oxford University Press, Oxford (2003)
Huxley, T.: On the hypothesis that animals are automata, and its history. Fortnightly Rev. 95, 555–580 (1874). URL http://aleph0.clarku.edu/huxley/CE1/AnAuto.html
Jackson, F.: Epiphenomenal qualia. Philos. Q. 32, 127–136 (1982)
Jackson, F.: From Metaphysics to Ethics. Oxford University Press (Clarendon), Oxford (1998)
James, W.: The Principles of Psychology, vol. 1. Henry Holt and Co., New York (1890)
Janiak, A. (ed.): Newton: Philosophical Writings. Cambridge University Press, Cambridge (2004)
Janzen, G.: The Reflexive Nature of Consciousness. Advances in Consciousness Research. John Benjamins, Amsterdam (2008)
Jaynes, E.T.: Quantum beats. In: Barut, A. (ed.) Foundations of Radiation Theory and Quantum Electrodynamics, pp. 37–43. Plenum Press, New York (1980)
Jensen, K., Call, J., et al.: Chimpanzees are rational maximizers in an ultimatum game. Science 318(5847), 107–109 (2007)
Joos, E., Zeh, D. H., et al.: Decoherence and the Appearance of a Classical World in Quantum
Theory. Springer, Berlin (2003)
Kadosh, R.C., et al.: The neuroethics of non-invasive brain stimulation. Curr. Biol. 22(4), R1–R4 (2012). URL http://download.cell.com/images/edimages/CurrentBiology/homepage/curbio9329.pdf
Kanwisher, N.: Neural events and perceptual awareness. Cognition 79, 89–113 (2001)
Kay, K.N., Naselaris, T., et al.: Identifying natural images from human brain activity. Nature 452, 352–355 (2008)
Kelley, D., Milone, E.: Exploring Ancient Skies: An Encyclopedic Survey of Archaeoastronomy.
Springer, New York (2005)
Kim, J.: 'Downward causation' in emergentism and non-reductive materialism. In: Beckermann, A., Flohr, H., Kim, J. (eds.) Emergence or Reduction? De Gruyter, Berlin (1992)
Kim, J.: Supervenience and Mind. Cambridge University Press, Cambridge (1993)
Kim, J.: Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation.
MIT Press, Cambridge (1998)
Kim, J.: Physicalism, or Something Near Enough. Princeton University Press, Princeton (2005)
King, A., Menon, R., et al.: Human forebrain activation by visceral stimuli. J. Comp. Neurol. 413, 572–582 (1999)
Kirk, U., Downar, J., et al.: Interoception drives increased rational decision-making in meditators
playing the Ultimatum Game. Frontiers Neurosci 5(0) (2011). URL http://www.frontiersin.
org/decision_neuroscience/10.3389/fnins.2011.00049/abstract
Koch, C.: The Quest for Consciousness: A Neurobiological Approach. Roberts and Company,
Englewood (2004)
Krauss, L.M., Scherrer, R. J.: The return of a static universe and the end of cosmology. Int.
J. Mod. Phys. D 17, 685 (2008). URL http://dx.doi.org/10.1007/s10714-007-0472-9
Kriegel, U.: Subjective Consciousness: A Self-Representational Theory. Oxford University Press,
Oxford (2009)
Lagage, P., Doucet, C., et al.: Anatomy of a flaring proto-planetary disk around a young intermediate-mass star. Science 314(5799), 621–623 (2006)
Landauer, R.: Information is physical. Phys. Today 44(5), 23–29 (1991)
Lange, M.: An Introduction to the Philosophy of Physics: Locality, Fields, Energy and Mass. Blackwell, Oxford (2002)
Larson, R.: The physics of star formation. Rep. Prog. Phys. 66(10), 1651–1697 (2003)
Laskar, J., Gastineau, M.: Existence of collisional trajectories of Mercury, Mars and Venus with the Earth. Nature 459, 817–819 (2009)
Lassonde, M., et al.: Effects of early and late transection of the corpus callosum in children. Brain 109, 953–967 (1986)
Laughlin, R.: A Different Universe. Basic Books, New York (2005)
Lecar, M., Franklin, F., et al.: Chaos in the Solar System. Ann. Rev. Astron. Astrophys. 39, 581–631 (2001)
Leggett, A.: Reflections on the quantum measurement paradox. In: Hiley, B.J., David Peat, F. (eds.) Quantum Implications: Essays in Honor of David Bohm, pp. 85–104. Routledge and Kegan Paul, London (1987)
Leibniz, G.W.: Discourse on Metaphysics. In: Ariew, R., Garber, D. (eds.) G. W. Leibniz: Philosophical Essays, pp. 35–68. Hackett, Indianapolis (1686/1989)
Leibniz, G.W.: The Leibniz-Arnauld Correspondence. Edited and translated by H. Mason. Manchester University Press, Manchester (1967)
Leibundgut, B.: Time dilation in the light curve of the distant type Ia supernova SN1995k. Astrophys. J. Lett. 466, L21–L24 (1996)
Leslie, A.: Theory of mind impairment in autism. In: Whiten, A. (ed.) Natural Theories of Mind:
Evolution, Development, and Simulation of Everyday Mindreading. Blackwell, Oxford
(1992)
Leslie, J.: The End of the World: The Science and Ethics of Human Extinction. Routledge,
London (1998)
Levine, J.: Materialism and qualia: the explanatory gap. Pac. Philos. Q. 64, 354–361 (1983)
Levine, J.: Purple Haze: The Puzzle of Consciousness. Oxford University Press, Oxford (2001)
Lewes, G.: Problems of Life and Mind, vol. 2. Kegan Paul, Trench, Turbner & Co., London
(1875)
Lewis, D.: Counterfactuals. Blackwell, Oxford (1973)
Lewis, D.: New work for a theory of universals. Aust. J. Philos. 61, 343–377 (1983)
Lewis, D.: On the Plurality of Worlds. Blackwell, Oxford (1986)
Lewontin, R.: Reply to Martin, Orzack and Tomkow. New York Review of Books 29(1). This is Lewontin's reply in the Letters section to criticisms of his review of Stephen Jay Gould's The Mismeasure of Man (1982)
Liddle, A.: An introduction to cosmological inflation. In: Masiero, A., Senjanovic, G., Smirnov, A. (eds.) 1998 Summer School in High Energy Physics and Cosmology: ICTP, Trieste, Italy, 29 June–17 July 1998. World Scientific Publishing Company, London (1999)
Lloyd, S.: Universal quantum simulators. Science 273, 1073–1078 (1996)
Lloyd, S.: Computational capacity of the universe. Phys. Rev. Lett. 88(23) (2002)
Lockwood, M.: Mind, Brain and the Quantum. Blackwell, Oxford (1989)
Logothetis, N., Pauls, J., et al.: Neurophysiological investigation of the basis of the fMRI signal. Nature 412(6843), 150–157 (2001)
Lorenz, E.: Deterministic nonperiodic flow. J. Atmos. Sci. 20(2), 130–141 (1963)
Lovelock, J.: A physical basis for life detection experiments. Nature 207(4997), 568–570 (1965)
Macdonald, G., Macdonald, C. (eds.): Emergence in Mind. Oxford University Press, Oxford
(2010)
Macmillan, M.: An Odd Kind of Fame: Stories of Phineas Gage. MIT Press (Bradford Books),
Cambridge, (2000)
Madsen, J.: Intermediate mass strangelets are positively charged. Phys. Rev. Lett. 85, 4687–4690 (2000)
Manafu, A.: The prospects for fusion emergence in chemistry. Paper delivered at 2011 Canadian
Philosophical Association meeting (2011)
Marcel, A.: Blindsight and shape perception: deficit of visual consciousness or of visual function? Brain 121, 1565–1588 (1998)
Marks, C.: Commissurotomy, Consciousness, and Unity of Mind. MIT Press (Bradford Books),
Cambridge (1980)
Markwick, B.: The Soal-Goldney experiments with Basil Shackleton: new evidence of data manipulation. Proc. Soc. Psych. Res. 56, 250–281 (1978)
Martin, C.B.: Dispositions and conditionals. Philos. Q. 44, 1–8 (1994)
Matsumoto, M., Saito, S., et al.: Molecular dynamics simulation of the ice nucleation and growth process leading to water freezing. Nature 416(6879), 409–413 (2002)
Maudlin, T.: The Metaphysics Within Physics. Oxford University Press, Oxford (2007)
McGinn, C.: Can we solve the mind-body problem? Mind 98(391), 349–366 (1989)
McGinn, C.: The Mysterious Flame: Conscious Minds in a Material World. Basic Books, New
York (1999)
McGivern, P., Rueger, A.: Emergence in physics. In: Corradini, A., O'Connor, T. (eds.) Emergence in Science and Philosophy, pp. 213–232. Routledge, London (2010)
McLaughlin, B.: The rise and fall of British emergentism. In: Beckermann, A., Flohr, H., Kim, J. (eds.) Emergence or Reduction, pp. 49–93. De Gruyter, Berlin (1992)
McLaughlin, B., Bennett, K.: Supervenience. In: Zalta, E. N. (ed.) The Stanford Encyclopedia of
Philosophy. Stanford University: The Metaphysics Research Lab, fall 2008 ed. (2008). URL
http://plato.stanford.edu/archives/fall2008/entries/supervenience
Melnyk, A.: A Physicalist Manifesto: Thoroughly Modern Materialism. Cambridge University Press, Cambridge (2003)
Menon, D., Owen, A., et al.: Cortical processing in persistent vegetative state. The Lancet
352(9123), 200 (1998)
Merricks, T.: Objects and Persons. Oxford University Press, Oxford (2001)
Mill, J. S.: A System of Logic, vols. 7–8 of The Collected Works of John Stuart Mill. University of Toronto Press, Toronto (1843/1963)
Milne-Thomson, L.: Some hydrodynamical methods. Bull. Am. Math. Soc. 63(3), 167–186 (1957)
Milner, D., Goodale, M.: The Visual Brain in Action. Oxford Psychology Series. Oxford
University Press, New York (2006)
Monti, M., Vanhaudenhuyse, A., et al.: Willful modulation of brain activity in disorders of
consciousness. New Engl. J. Med. (2010) URL http://content.nejm.org/cgi/content/abstract/
NEJMoa0905370v1
Morgan, C.L.: Emergent Evolution. Williams and Norgate, London (1923)
Morin, D.: Balancing a pencil. (2004) URL http://www.physics.harvard.edu/academics/
undergrad/problems.html
Morowitz, H.: The Emergence of Everything: How the World Became Complex. Oxford
University Press, Oxford (2002)
Morrison, M.: Emergence, reduction, and theoretical principles: rethinking fundamentalism. Philos. Sci. 73, 876–887 (2006)
Motl, L.: Hawking and unitarity (2005). URL http://motls.blogspot.com/2005/07/hawking-and-unitarity.html
Mourelatos, A.: Quality, structure, and emergence in later pre-Socratic philosophy. In: Proceedings of the Boston Colloquium in Ancient Philosophy, vol. 2, pp. 127–94 (1986)
Müller, J.: Müller's Elements of Physiology. Thoemmes, Bristol. This edition, based upon the 1838–1842 English translation of William Baly, is edited by Nicholas Wade, who provides an extensive introduction (2003)
Murray, N., Holman, M.: The role of chaotic resonances in the Solar System. Nature 410(6830), 773–779 (2001)
Musallam, S., Cornell, B., et al.: Cognitive control signals for neural prosthetics. Science 305(5681), 258–262 (2004)
Nagel, T.: Brain bisection and the unity of consciousness. Synthese 22, 396–413. (Reprinted in Nagel's Mortal Questions, Cambridge University Press, Cambridge 1979) (1971)
Nagel, T.: What is it like to be a bat? Philos. Rev. 83(4), 435–50. (This article is reprinted in many places, notably in Nagel's Mortal Questions, Cambridge: Cambridge University Press, 1979.) (1974)
Nagel, T.: Panpsychism. In: Mortal Questions, pp. 181–95. Cambridge University Press, Cambridge. (Reprinted in D. Clarke, Panpsychism: Past and Recent Selected Readings, Albany: SUNY Press, 2004) (1979)
Nagel, T.: The View from Nowhere. Oxford University Press, Oxford (1986)
Narlikar, J., Vishwakarma, R., et al.: Inhomogeneities in the microwave background radiation interpreted within the framework of the quasi-steady state cosmology. Astrophys. J. 585, 593–598 (2003)
Newman, M.: Alan Mathison Turing: 1912–1954. Biogr. Mem. Fellows R. Soc. 1, 253–263 (1955)
Newton, I.: Opticks, or, A Treatise of the Reflections, Refractions, Inflections and Colours of
Light. Dover, New York (1730/1979)
Nicholls, J., Martin, A., et al.: From Neuron to Brain: A Cellular and Molecular Approach to the Function of the Nervous System. Sinauer Associates, Sunderland (2001)
Nichols, S., Stich, S.: Mindreading. Oxford University Press, Oxford (2003)
Nielsen, M., Chuang, I.: Quantum Computation and Quantum Information. Cambridge University
Press, Cambridge (2000)
Nisbett, R., Peng, K., et al.: Culture and systems of thought: holistic vs. analytic cognition. Psychol. Rev. 108, 291–310 (2001)
Nishimoto, S., Vu, A.T., et al.: Reconstructing visual experiences from brain activity evoked by natural movies. Curr. Biol. 21(19), 1641–1646 (2011)
Noll, D.: A primer on MRI and functional MRI (2001). URL http://www.eecs.umich.edu/~dnoll/primer2.pdf (Univ. of Michigan Depts. of Biomedical Engineering and Radiology)
Norton, J.: Causation as folk science. Philosophers' Imprint 3(4) (2003). URL http://hdl.handle.net/2027/spo.3521354.0003.004
O'Connor, T.: Emergent properties. Am. Philos. Q. 31, 91–104 (1994)
O'Connor, T., Wong, H. Y.: Emergent properties. In: Zalta, E. N. (ed.) The Stanford Encyclopedia of Philosophy. Stanford University: The Metaphysics Research Lab, spring 2009 ed. (2009). URL http://plato.stanford.edu/archives/spr2009/entries/properties-emergent
Ord, T.: The many forms of hypercomputation. Appl. Math. Comput. 178(1), 143–153 (2006)
Ord-Hume, A.: Perpetual Motion: The History of an Obsession. Adventures Unlimited Press, Kempton (1977)
Owen, A.M.: Functional neuroimaging of the vegetative state. Nat. Rev. Neurosci. 9(3), 235–243 (2008)
Owens, D.: Disjunctive laws. Analysis 49, 197–202 (1989)
Pais, A.: Inward Bound. Oxford University Press, New York (1986)
Pasley, B., David, S., et al.: Reconstructing speech from human auditory cortex. PLoS Biol.
10(1), e1001251 (2012). URL http://dx.doi.org/10.1371/journal.pbio.1001251
Paull, R., Sider, T.: In defense of global supervenience. Philos. Phenomenol. Res. 52(4), 833–854 (1992)
Peixoto, J., Oort, A.: Physics of Climate. American Institute of Physics, New York (1992)
Penfield, W.: The Excitable Cortex in Conscious Man. Liverpool University Press, Liverpool (1958)
Penrose, R.: The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics. Oxford University Press, Oxford (1989)
Persinger, M.: Religious and mystical experiences as artifacts of temporal lobe function: a general hypothesis. Percept. Mot. Skills 57, 1255–1262 (1983)
Persinger, M.: Electromagnetic remote control of every human brain. Percept. Mot. Skills 80, 791–799 (1995)
Petrie, B.: Global supervenience and reduction. Philos. Phenomenol. Res. 48, 119–130 (1987)
Phillips, M., Young, A., et al.: A specific neural substrate for perceiving facial expressions of disgust. Nature 389(6650), 495–498 (1997)
Pinker, S.: How the Mind Works. W. W. Norton and Co., New York (1999)
Pitowsky, I.: The physical Church thesis and physical computational complexity. Iyyun 39, 81–99 (1990)
Poincaré, H.: Science and Method. Dover, New York. (Reprint of F. Maitland's English translation of Science et méthode, Paris: Flammarion, 1908) (1908/1960)
Popper, K., Eccles, J.: The Self and Its Brain. Springer International, New York (1977)
Port, R., van Gelder, T. (eds.): Mind as Motion. MIT Press, Cambridge (1995)
Pour-El, M., Richards, I.: The wave equation with computable initial data such that its unique solution is not computable. Adv. Math. 39(3), 215–239 (1981)
Puccetti, R.: Brain bisection and personal identity. Br. J. Philos. Sci. 24(4), 339–355 (1973)
Pullman, B.: The Atom in the History of Human Thought. Oxford University Press, New York
(1998)
Putnam, H.: Minds and machines. In: Hook, S. (ed.) Dimensions of Mind, pp. 148–80. New York University Press, New York. (Reprinted in Putnam's Mind, Language and Reality, Cambridge: Cambridge University Press, 1975.) (1960)
Putnam, H.: Philosophy and our mental life. In: Mind, Language and Reality: Philosophical Papers, vol. 2, pp. 291–303. Cambridge University Press, Cambridge (1975)
Radder, H.: Heuristics and the generalized correspondence principle. Br. J. Philos. Sci. 42(2), 195–226 (1991)
Radner, D., Radner, M.: Animal Consciousness. Prometheus Books, Buffalo (1989)
Rainville, P., Duncan, G., et al.: Pain affect encoded in human anterior cingulate but not somatosensory cortex. Science 277(5328), 968–971 (1997)
Reilly, W. K.: Remarks at the fourth meeting of the parties to the Montreal protocol (1992). URL
http://www.epa.gov/history/topics/montreal/05.htm
Robinson, W.: Understanding Phenomenal Consciousness. Cambridge University Press, Cambridge (2004)
Rosenberg, G.: A Place for Consciousness: Probing the Deep Structure of the Natural World. Oxford University Press, Oxford (2004)
Rosenthal, D.: Consciousness and Mind. Oxford University Press, Oxford (2005)
Roy, C. S., Sherrington, C. S.: On the regulation of the blood-supply of the brain. J. Physiol. 11(1–2), 85–108; 158ff (1890)
Rumelhart, D., McClelland, J., et al.: Parallel Distributed Processing: Explorations in the
Microstructure of Cognition, vol. 1. MIT Press, Cambridge (1986)
Russell, B.: Mysticism and Logic, and Other Essays. Barnes & Noble Books, Totowa (1917/1981)
Russell, B.: The Analysis of Mind. George Allen & Unwin, London (1921)
Sahin, N.T., Pinker, S., et al.: Sequential processing of lexical, grammatical, and phonological information within Broca's area. Science 326(5951), 445–449 (2009). URL http://www.sciencemag.org/cgi/content/abstract/326/5951/445
Salam, A.: The role of chirality in the origin of life. J. Mol. Evol. 33, 105–113 (1991)
Salmon, W.: Scientific Explanation and the Causal Structure of the World. Princeton University
Press, Princeton (1984)
Sandage, A.: Edwin Hubble 1889–1953. J. Royal Astron. Soc. Can. 83(6) (1989)
Sanfey, A., Rilling, J., et al.: The neural basis of economic decision-making in the ultimatum game. Science 300(5626), 1755–1758 (2003)
Sarnat, H., Netsky, M.: Evolution of the Nervous System. Oxford University Press, New York
(1981)
Schienle, A., Stark, R., et al.: The insula is not specifically involved in disgust processing: an fMRI study. NeuroReport 13(16), 2023–2026 (2002)
Schrödinger, E.: Discussion of probability relations between separated systems. Proc. Camb. Philos. Soc. 31, 555–63 (1935)
Schroeder, T.: Three Faces of Desire. Oxford University Press, Oxford (2004)
Schweber, S.: Physics, community and the crisis in physical theory. Phys. Today 46, 34–40 (1993)
Scully, M., Drühl, K.: Quantum eraser: a proposed photon correlation experiment concerning observation and delayed choice in quantum mechanics. Phys. Rev. A 25, 2208–2213 (1982)
Seager, W.: Weak supervenience and materialism. Philos. Phenomenol. Res. 48, 697–709 (1988)
Seager, W.: Instrumentalism in psychology. Int. Stud. Philos. Sci. 4(2), 191–203 (1990)
Seager, W.: Disjunctive laws and supervenience. Analysis 51, 93–98 (1991)
Seager, W.: Consciousness, information and panpsychism. J. Conscious. Stud. 2(3), 272–88. (Reprinted in J. Shear (ed.) Explaining Consciousness, Cambridge, MA: MIT Press, 1997) (1995)
Seager, W.: A note on the quantum eraser. Philos. Sci. 63, 81–90 (1996)
Seager, W.: Theories of Consciousness. Routledge, London (1999)
Seager, W.: Introspection and the elementary acts of mind. Dialogue 39(1), 53–76 (2000a)
Seager, W.: Real patterns and surface metaphysics. In: Ross, D., Brook, A., Thompson, D. (eds.) Dennett's Philosophy: A Comprehensive Assessment, pp. 95–130. MIT Press, Cambridge (2000b)
Seager, W.: Emotional introspection. Conscious. Cogn. 11(4), 666–687 (2002)
Seager, W.: The intrinsic nature argument for panpsychism. J. Conscious. Stud. 13(10–11), 129–145. (Reprinted in A. Freeman (ed.) Consciousness and Its Place in Nature, Exeter: Imprint Academic, 2006.) (2006)
Seager, W.: A brief history of the philosophical problem of consciousness. In: Zelazo, P., Moscovitch, M., Thompson, E. (eds.) The Cambridge Handbook of Consciousness, pp. 9–34. Cambridge University Press, Cambridge (2007)
Seager, W.: Panpsychism, aggregation and combinatorial infusion. Mind and Matter 8(2), 167–184 (2010)
Seager, W., Allen-Hermanson, S.: Panpsychism. In: Zalta, E. N. (ed.) The Stanford Encyclopedia
of Philosophy. The Metaphysics Research Lab, fall 2008 ed. (2008). URL http://plato.
stanford.edu/archives/fall2008/entries/panpsychism
Seager, W., Bourget, D.: Representationalism about consciousness. In: Velmans, M., Schneider, S. (eds.) The Blackwell Companion to Consciousness, pp. 261–276. Blackwell, Oxford (2007)
Searle, J.: The Rediscovery of the Mind. MIT Press, Cambridge (1992)
Searle, J.: Consciousness and the philosophers. NY Rev. Books 44(4), 43–50 (1997)
Seife, C.: What is the universe made of? Science 309(5731), 78 (2005)
Sellars, W.: The language of theories. In: Science, Perception and Reality, pp. 106–126. Routledge and Kegan Paul, London (1963a)
Sellars, W.: Philosophy and the scientific image of man. In: Science, Perception and Reality, pp. 1–40. Routledge and Kegan Paul, London (1963b)
Shapiro, L., Sober, E.: Epiphenomenalism: the do's and the don'ts. In: Wolters, G., Machamer, P. (eds.) Thinking About Causes: From Greek Philosophy to Modern Physics. University of Pittsburgh Press, Pittsburgh (2007)
Shapiro, L. (ed.): The Correspondence Between Princess Elisabeth of Bohemia and René Descartes. University of Chicago Press, Chicago (2007)
Shepherd, G.: Foundations of the Neuron Doctrine. Oxford University Press, Oxford (1991)
Shoemaker, S.: Physical Realization. Oxford University Press, Oxford (2007)
Siegelmann, H., Sontag, E.: Analog computation via neural nets. Theor. Comp. Sci. 131 (1994)
Silberstein, M., McGeever, J.: The search for ontological emergence. Philos. Q. 49, 182–200 (1999)
Sklar, L.: Physics and Chance: Philosophical Issues in the Foundations of Statistical Mechanics.
Cambridge University Press, Cambridge (1993)
Skrbina, D.: Panpsychism in the West. MIT Press, Cambridge (2005)
Skrbina, D. (ed.): Mind that Abides: Panpsychism in the New Millennium. John Benjamins,
Amsterdam (2009)
Smith, P.: Explaining Chaos. Cambridge University Press, Cambridge (1998)
Smith, W.: Church's thesis meets quantum mechanics. In: NEC Research Technical Report (1999). URL http://www.math.temple.edu/~wds/homepage/churchq.ps
Smith, W.: Church's thesis meets the N-body problem. Appl. Math. Comput. 178(1), 154–183 (2006)
Smolin, L.: The Life of the Cosmos. Oxford University Press, Oxford (1999)
Smolin, L.: Three Roads to Quantum Gravity. Basic Books, New York (2001)
Sober, E.: Screening-off and the units of selection. Philos. Sci. 59, 142–152 (1992)
Sousa, C., Matsuzawa, T.: The use of tokens as rewards and tools by chimpanzees. Anim. Cogn. 4(3–4), 213–21 (2001)
Sperry, R.: In defense of mentalism and emergent interaction. J. Mind Behav. 12(2), 221–246 (1991)
Spinoza, B.: Ethics. In: Curley, E. (ed.) The Collected Works of Spinoza, pp. 401–617. Princeton University Press, Princeton (1677/1985)
Stalnaker, R.: A theory of conditionals. In: Rescher, N. (ed.) Studies in Logical Theory, pp. 98–112. Blackwell, Oxford (1968)
Stannett, M.: The case for hypercomputation. Appl. Math. Comput. 178(1), 8–24 (2006)
Steiner, M.: The Applicability of Mathematics as a Philosophical Problem. Harvard University
Press, Cambridge (1998)
Stewart, L., Walsh, V., et al.: TMS produces two dissociable types of speech disruption.
Neuroimage 13(6), S45 (2001)
Stoerig, P., Cowey, A.: Blindsight in man and monkey. Brain 120, 535–559 (1997)
Stoljar, D.: Ignorance and Imagination. Oxford University Press, Oxford (2006)
Stompe, T., Ortwein-Swoboda, G., et al.: Old wine in new bottles? Stability and plasticity of the contents of schizophrenic delusions. Psychopathology 36(1), 6–12 (2003)
Stone, T., Young, A.: Delusions and brain injury: the philosophy and psychology of belief. Mind Lang. 12(3/4), 327–364 (1997)
Strawson, G.: Realistic monism: why physicalism entails panpsychism. J. Conscious. Stud. 13(10–11), 3–31. (Reprinted in A. Freeman (ed.) Consciousness and Its Place in Nature, Exeter: Imprint Academic, 2006) (2006)
Strutt, J.: The principle of similitude. Nature 95(2368), 66–68 (1915)
Stubenberg, L.: Neutral Monism. In: Zalta, E. N. (ed.) The Stanford Encyclopedia of Philosophy.
The Metaphysics Research Lab, fall 2008 ed. (2008) URL http://plato.stanford.edu/
archives/fall2008/entries/neutral-monism
Sur, M., Garraghty, P., et al.: Experimentally induced visual projections into auditory thalamus and cortex. Science 242(4884), 1437–1441 (1988)
't Hooft, G.: Can quantum mechanics be reconciled with cellular automata? Int. J. Theor. Phys. 42(2), 349–354 (2003)
't Hooft, G.: Entangled quantum states in a local deterministic theory (2009). URL http://arxiv.org/abs/0908.3408
Tateyama, M., Asai, M., et al.: Transcultural study of schizophrenic delusions: Tokyo versus Vienna versus Tuebingen (Germany). Psychopathology 31(2), 59–68 (1998)
Tegmark, M.: Importance of quantum decoherence in brain processes. Phys. Rev. E 61(4), 4194–4206 (2000)
Tegmark, M.: Many lives in many worlds. Nature 448(7149), 23–24 (2007a)
Tegmark, M.: The mathematical universe. Found. Phys. 38(2), 101–150 (2007b). URL http://arxiv.org/abs/0704.0646
Teller, P.: Relational holism and quantum mechanics. Br. J. Philos. Sci. 37, 71–81 (1986)
Teller, P.: Twilight of the perfect model model. Erkenntnis 55, 393–415 (2001)
Thijssen, J.: Computational Physics. Cambridge University Press, Cambridge (1999)
Thomas, A.: Interplay of spin and orbital angular momentum in the proton. Phys. Rev. Lett. 101(10) (2008)
Thomson, W.: On vortex atoms. Proc. Royal Soc. Edinb. 6, 94–105 (1867)
Tong, F., Nakayama, K., et al.: Binocular rivalry and visual awareness in human extrastriate cortex. Neuron 21, 753–759 (1998)
Tonomura, A.: Demonstration of single-electron buildup of an interference pattern. Am. J. Phys. 57(2), 117–120 (1989)
Turing, A.: On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 2(42), 230–267. (Reprinted in Copeland (2004)) (1936)
Turing, A.: Systems of logic based on ordinals. In: Copeland, J. (ed.) The Essential Turing, pp. 146–204. Oxford University Press (Clarendon), Oxford (1939/2004)
Turing, A.: Computing machinery and intelligence. Mind 59(236), 433–460. (Reprinted in Copeland (2004)) (1950)
Tye, M.: Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind.
MIT Press, Cambridge (1995)
Tye, M.: Consciousness and Persons. MIT Press, Cambridge (2003)
Umeno, K.: Integrability and computability in simulating quantum systems. In: Hirota, O.,
Holevo, A.S., Caves, C.M. (eds.) Quantum Communication, Computing and Measurement,
the Language of Science. Plenum Press, New York (1997)
Vaidman, L.: Many-worlds interpretation of quantum mechanics. In: Zalta, E. N. (ed.) The
Stanford Encyclopedia of Philosophy. The Metaphysics Research Lab (2008). URL
http://plato.stanford.edu/archives/fall2008/entries/qm-manyworlds
van Fraassen, B.: The Scientific Image. Oxford University Press (Clarendon), Oxford (1980)
van Fraassen, B.: The Empirical Stance. Yale University Press, New Haven (2002)
van Inwagen, P.: Material Beings. Cornell University Press, Ithaca (1990)
von Neumann, J.: Theory of Self-Reproducing Automata. University of Illinois Press, Urbana
(1966)
Watkins, P.: Story of the W and Z. Cambridge University Press, Cambridge (1986)
Weinberg, S.: Dreams of a Final Theory: The Search for the Fundamental Laws of Nature.
Pantheon, New York (1992)
Weingarten, D.: Quarks by computer. Sci. Am. 274(2), 104–108 (1996)
Weiskrantz, L.: Consciousness Lost and Found. Oxford University Press, Oxford (1997)
Wigner, E.: The unreasonable effectiveness of mathematics in the natural sciences. Commun. Pure Appl. Math. 13, 1–14. (Reprinted in Wigner's Symmetries and Reflections, Bloomington: Indiana University Press, 1967) (1960)
Wigner, E.: Remarks on the mind-body problem. In: Good, I. (ed.) The Scientist Speculates, pp. 284–302. Heinemann, London. (Reprinted in J. Wheeler and W. Zurek (eds.) Quantum Theory and Measurement, Princeton: Princeton University Press, 1984.) (1962)
Williams, R., Barnes, E.: Vague parts and vague identity. Pac. Philos. Q. 90(2), 176–187 (2009)
Williamson, T.: Vagueness. Routledge, London (1994)
Wimsatt, W.: The ontology of complex systems. In: Matthen, M., Ware, R. (eds.) Biology and Society: Reflections on Methodology, pp. 207–274. University of Calgary Press, Calgary. Can. J. Philos. (supplementary volume). (Reprinted in Wimsatt's Re-Engineering Philosophy for Limited Beings, Harvard University Press, Cambridge 2007) (1994)
Winsberg, E.: Science in the Age of Computer Simulation. University of Chicago Press, Chicago
(2010)
Wittgenstein, L.: Philosophical Investigations. The Macmillan Company, New York (1953)
Wolfram, S.: A New Kind of Science. Wolfram Media, Champaign (2002)
Wong, H.Y.: Emergents from fusion. Philos. Sci. 73, 345–367 (2006)
Woodward, J.: Causation and manipulability. In: Zalta, E. N. (ed.) The Stanford Encyclopedia of
Philosophy. The Metaphysics Research Lab, winter 2008 ed. (2008). URL http://plato.
stanford.edu/archives/win2008/entries/causation-mani
Wright, P., He, G., et al.: Disgust and the insula: fMRI responses to pictures of mutilation and contamination. NeuroReport 15(15), 2347–2351 (2004)
Yablo, S.: Mental causation. Philos. Rev. 101, 245–280 (1992)
Yang, E., Blake, R., et al.: Fearful expressions gain preferential access to awareness during continuous flash suppression. Emotion 7(4), 882–886 (2007)
Young, D.: Computational Chemistry: A Practical Guide for Applying Techniques to Real World Problems. Wiley, New York (2001)
Young, L., Camprodon, J., et al.: Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. Proc. Nat. Acad. Sci. 107(15), 6753–6758 (2010)
Zeman, A.: Consciousness: A User's Guide. Yale University Press, New Haven (2002)
Zimmer, C.: Soul Made Flesh. Free Press, New York (2004)
Zotev, V., Volegov, P., et al.: Microtesla MRI of the human brain with simultaneous MEG (2007). URL http://arxiv.org/abs/0711.0222
Zurek, W.: Decoherence and the transition from quantum to classical. Phys. Today 44(10), 36–44 (1991). URL http://arxiv.org/abs/quant-ph/0306072. Updated version (2003)
Zurek, W.: Decoherence, chaos, quantum-classical correspondence, and the algorithmic arrow of time. Phys. Scr. T76, 186–206 (1998)
Zurek, W.: Decoherence and the transition from quantum to classical – revisited. Los Alamos Sci. 27, 86–109 (2002). URL http://www.fas.org/sgp/othergov/doe/lanl/pubs/number27.htm
Zurek, W., Paz, J.: Decoherence, chaos and the second law. Phys. Rev. Lett. 72(16), 2508–2512 (1994)
Zuse, K.: Rechnender Raum. Friedrich Vieweg & Sohn, Braunschweig (Schriften zur Datenverarbeitung, Band 1) (1969). An English translation is available at ftp://ftp.idsia.ch/pub/juergen/zuserechnenderraum.pdf
Zuse, K.: The Computer – My Life. Springer, New York (1993)
Titles in this Series

Quantum Mechanics and Gravity
By Mendel Sachs

Quantum-Classical Correspondence
Dynamical Quantization and the Classical Limit
By A. O. Bolivar

Knowledge and the World: Challenges Beyond the Science Wars
Ed. by M. Carrier, J. Roggenhofer, G. Küppers and P. Blanchard

Quantum-Classical Analogies
By Daniela Dragoman and Mircea Dragoman

Life – As a Matter of Fat
The Emerging Science of Lipidomics
By Ole G. Mouritsen

Quo Vadis Quantum Mechanics?
Ed. by Avshalom C. Elitzur, Shahar Dolev and Nancy Kolenda

Information and Its Role in Nature
By Juan G. Roederer

Extreme Events in Nature and Society
Ed. by Sergio Albeverio, Volker Jentsch and Holger Kantz

The Thermodynamic Machinery of Life
By Michal Kurzynski
Weak Links
The Universal Key to the Stability of Networks and Complex Systems
By Peter Csermely

The Emerging Physics of Consciousness
Ed. by Jack A. Tuszynski

Quantum Mechanics at the Crossroads
New Perspectives from History, Philosophy and Physics
Ed. by James Evans and Alan S. Thorndike

Mind, Matter and the Implicate Order
By Paavo T. I. Pylkkänen

Particle Metaphysics
A Critical Account of Subatomic Reality
By Brigitte Falkenburg

The Physical Basis of The Direction of Time
By H. Dieter Zeh

Asymmetry: The Foundation of Information
By Scott J. Muller

Decoherence and the Quantum-To-Classical Transition
By Maximilian A. Schlosshauer

The Nonlinear Universe
Chaos, Emergence, Life
By Alwyn C. Scott

Quantum Superposition
Counterintuitive Consequences of Coherence, Entanglement, and Interference
By Mark P. Silverman

Symmetry Rules
How Science and Nature Are Founded on Symmetry
By Joseph Rosen

Mind, Matter and Quantum Mechanics
By Henry P. Stapp

Entanglement, Information, and the Interpretation of Quantum Mechanics
By Gregg Jaeger
Relativity and the Nature of Spacetime
By Vesselin Petkov

The Biological Evolution of Religious Mind and Behavior
Ed. by Eckart Voland and Wulf Schiefenhövel

Homo Novus – A Human Without Illusions
Ed. by Ulrich J. Frey, Charlotte Störmer and Kai P. Willführ

Brain-Computer Interfaces
Revolutionizing Human-Computer Interaction
Ed. by Bernhard Graimann, Brendan Allison and Gert Pfurtscheller

Extreme States of Matter
On Earth and in the Cosmos
By Vladimir E. Fortov

Searching for Extraterrestrial Intelligence
SETI Past, Present, and Future
Ed. by H. Paul Shuch

Essential Building Blocks of Human Nature
Ed. by Ulrich J. Frey, Charlotte Störmer and Kai P. Willführ

Mindful Universe
Quantum Mechanics and the Participating Observer
By Henry P. Stapp

Principles of Evolution
From the Planck Epoch to Complex Multicellular Life
Ed. by Hildegard Meyer-Ortmanns and Stefan Thurner

The Second Law of Economics
Energy, Entropy, and the Origins of Wealth
By Reiner Kümmel

States of Consciousness
Experimental Insights into Meditation, Waking, Sleep and Dreams
Ed. by Dean Cvetkovic and Irena Cosic

Elegance and Enigma
The Quantum Interviews
Ed. by Maximilian Schlosshauer
Humans on Earth
From Origins to Possible Futures
By Filipe Duarte Santos

Evolution 2.0
Implications of Darwinism in Philosophy and the Social and Natural Sciences
Ed. by Martin Brinkworth and Friedel Weinert

Probability in Physics
Ed. by Yemima Ben-Menahem and Meir Hemmo

Chips 2020
A Guide to the Future of Nanoelectronics
Ed. by Bernd Hoefflinger

From the Web to the Grid and Beyond
Computing Paradigms Driven by High-Energy Physics
Ed. by René Brun, Federico Carminati and Giuliana Galli Carminati

The Dual Nature of Life
Interplay of the Individual and the Genome
By Gennadiy Zhegunov