
Complexity and Philosophy

Reimagining emergence, Part 3:
Uncomputability, transformation, and self-transcending constructions
Jeffrey A. Goldstein
Adelphi University, USA

This paper concludes a three-part series by reimagining processes of emergence along the lines of a formal “blueprint” for the “logic” of these processes, a topic
surprisingly neglected even within the camp of those advocating some form of
emergence. This formalism is presented according to the following conceptual
strategy. First, the explanatory gap of emergence, the presence of which is one
of the main defining characteristics of emergent phenomena, is interpreted in
terms of uncomputability, an idea introduced in complexity science in order to
supplement the more traditional features of unpredictability, nondeducibility, and
irreducibility. Uncomputability is traced back to a method devised by Georg Cantor
in a very different context. I label Cantor’s formalism a type of “self-transcending
construction” (STC), a phrase coined by an early commentator on Cantor’s work.
Next, I examine how Cantor’s STC was appropriated, respectively, in the work of
Gödel and Turing on undecidability and uncomputability. Next, I comment on
how self-transcending constructions derive a large measure of their potency via
a kind of “flirtation” with paradox in a manner similar to what Gödel and Turing
had done. Finally, I offer some suggestions on how the formalism of an STC can
shed light on the nature of macro-level emergent wholes or integrations. This
formalism is termed a “self-transcending construction,” a term derived from the anti-diagonalization method devised by Georg Cantor in 1891 and then utilized in the limitative theorems of Gödel and Turing.


Hollow rhetoric or substantive concept?


…but we have that in physics as well—physics is all emergent!
Niels Bohr (Pavarini, Koch, & Schollwöck, 2013; footnote, p.1)

While I was talking with a prominent classicist and philosopher around fifteen years ago, the idea of emergence happened to come up in the conversation, whereupon she suddenly and vehemently declared it a “weasel word.” This
left me a bit disconcerted since I had a great deal of respect for her erudition and
thinking skills, and I was increasingly drawn to what I took as the conceptual promise
of the idea of emergence. A “weasel word” refers to verbiage claiming to be saying
something specific and meaningful but turning out to be ambiguous and trite—the
metaphor coming from the observation that a weasel has a unique ability to suck the contents out of an egg, leaving the shell intact yet hollow. Of course, since its inception the idea of
emergence has been the subject of numerous kinds of criticism, some of it stemming
from stringent reductionists, some from the all too often deficient way emergence has
been phrased and framed by its own adherents. And much more recently, as I have
called attention to previously, the credibility of emergence is being undermined, not
by its opponents, but instead by recent converts to it from within the particle physics
and cosmologist camp who only a short time before had considered the idea anath-
ema. Here are four excerpts from papers which exemplify this usage: “… our current
understanding of string theory, in which the macroscopic spacetime … can often be
viewed as an emergent concept” Hořava (2009); “The notion that general relativity
might be an emergent property in a condensed-matter-like quantum theory of grav-
ity“ (Chapline et al., 2000); “We then expect the emergent superspace to be some sort
of group manifold… “ (Oriti, 200); “A basic tenet of causet theory is that spacetime
does not exist at the most fundamental level, that it is an ‘emergent’ concept …” (Sor-
kin, 2003).

As far as I have been able to tell from the papers containing these excerpts plus
similar ones found at Arxiv, “emergent” is being used as a stand-in for “being de-
rived from,” “secondary to,” or to point to phenomena on a “higher” level “caused” by
more fundamental or underlying dynamics. Although these connotations might make
it seem that emergents are merely epiphenomena, a position that has in fact come
and gone over the years among the idea’s critics, the current embracers are instead
touting the notion of an emergent as a significant step on the way to resolving long-standing “origin” problems such as how space, time, space-time, gravity, and so on
have come about.



But amidst all this speculative brouhaha, a curious lack stands out: how exactly do
the emergents emerge, that is, what processes are producing these emergents? The
very issue of how seems to be taken for granted. It’s as if a description of a desirable
destination is put forward at a travel website. But a search through the website does
not reveal any information on how one gets there except vague presumptions about
underlying dynamics and so forth. The issue I am emphasizing here is definitely not
about the dangers of speculation. Rather, I am pointing to the lack, indeed, the lack of
even an attempt, to provide a cogent account of how emergents emerge. Here is what
the AI theorist Eliezer Yudkowsky (2007) trenchantly observed about this co-optation
of emergence:

Now suppose I should say that gravity is explained by “arisence” or that chemistry is an
“arising phenomenon” from physics, and [claim this is] explaining something important.
… what more do we know after we say something is emergent?… It feels like you believe
a new fact, but you don’t anticipate any different outcomes. Your curiosity feels sated,
but it has not been fed...

What is very easy to gloss over is that the very claim that emergents exist at all
requires at least a modicum of understanding of those processes which bring them
about. To simply assume some universal principle of “emergence” as a catch-all ex-
planation is as weasel wordy as the graduate student in Molière’s play “The Imaginary Invalid” who explained the efficacy of opium in putting people to sleep as due to its containing a “dormitive principle.” Unless one can imagine how emergence happens,
then any claims of its explanatory significance should fade away.

Neglect of the processes responsible for the emergence of emergent phenomena has not been confined to the recent particle physics’ enthusiasm for the idea. Such a disregard has been all too common even among its most enthusiastic advocates except, surprisingly, for the first two conceptualizations of emergence on the part of Mill
and Lewes which called upon new kinds of causative processes: Mill’s heteropathic
causality and Lewes’ emergent vs resultant dynamism (both of which, it will be recalled
from Part 2, were modeled on chemical reactions). Thus, for Mill, in the case of out-
comes generated by heteropathic causative processes, it was not possible to “compute
[sic - Mill’s anachronistic term!] all the combination of causes” since the “same laws”
were not being followed all along the way. In contemporary complexity science, the
uncomputability of emergent phenomena has even become a defining mark of them.

The English philosopher, C. D. Broad (1925; see also the clear presentation of
Broad’s tenets in Mainwood, 2006), the most philosophically rigorous among the Emergent Evolutionists, characterized emergent phenomena as macro-level wholes
that were not deducible from the most complete knowledge of the properties of their “constituents,” whether these constituents are in isolation or are parts of a different
whole. Broad held that this non-deducibility was due mostly to “the specific laws of
composition” which as “higher” modes of organizing needed to be uncovered in each
specific case of emergence. Like Mill and Lewes, Broad also turned to chemical reac-
tions and the resulting chemical compounds to reveal “higher” level “configurational
forces”. The latter corresponds to Hendry’s argument discussed in Part 2 that despite
McLaughlin’s claim that the quantum bonding micro-explanation of a chemical com-
pound’s properties led to the demise of the idea of emergence, it turns out that de-
ducing emergent outcomes in chemical reactions is also not strictly possible on the
basis of a micro-explanation, e.g., by way of quantum mechanical formulations relying
on calculations from the Hamiltonian and the Born-Oppenheimer approximation. Instead, many
of the features of chemical compounds depend on contextual information about
structural forces, and so forth. As I’ll say much more about below, much of this con-
textual information is only able to be known ostensively, and thus functions as one of
the sources of the explanatory gap of emergence.

To foreshadow a bit as to one of the directions I will be pursuing: the explanatory gap of emergence (which was added to the list of characteristics of emergent phenomena in Part 1) serves at least three functions. One is to point to the need
for taking into consideration context which cannot be known ahead of time. Another
is to act as a pointer to the need for exploring “higher” level principles in explain-
ing emergent outcomes. Third, the explanatory gap expresses the uncomputability of
emergent phenomena according to the following syllogism: if the explanatory gap is
generated by processes of emergence, and if the explanatory gap is characterized by
uncomputability, then we can gain insight into the nature of the processes of emer-
gence by probing the nature and source of uncomputability in general.

It might have been anticipated that Emergent Evolutionism, coming along a half century after Lewes, would further adumbrate the processes of emergence begun
by Mill and Lewes. This didn’t take place for several reasons. First, since emergence
was being defined in the via negativa fashion of not-predictable, not-deducible, not-
mechanical (the preferential term in early emergentism) and not-computable, this had
to result in an image of emergence as mostly not explicable, an attitude not particu-
larly helpful in spurring on imaginal endeavors to account for it. Not explicable is
equivalent to not being imaginable and this deficit of the imagination was emblematic
both within and outside the camp of emergentists. One of the culprits for this via neg-
ativa was the Bergsonian-Manichean perspective of contrasting drastically antithetical sides of the emergent saltation, e.g., a living creature on one side of the divide and an unmoving rock on the other. Put in this way, it is no wonder the imagination cannot
traverse this chasm.

Second, before the coming along of the sciences of complex systems, there was a
dearth of suitable laboratory means for studying emergence. In fact, how was such a
laboratory for emergence even conceivable if the primary prototypes for emergence
were only momentous, even “cosmic” origins, e.g., the origin of life, the origin of sen-
tience, even the origin of space and time (way before modern cosmology this was a
preoccupation of the Emergent Evolutionist Samuel Alexander). Only recently do we
have laboratories for emergence—computational, chemical, biological, social and so
on—where we can observe emergence as it is happening.

Third was the proclivity for some of emergence’s most prominent proponents
to appeal to more-than-natural sources to explain how emergents emerge. Some
of this proclivity stemmed no doubt from the dramatic nature of the prototypes of
emergence which would require equally momentous causes. Morgan, for example,
posed at least two such more-than-natural origins of emergents: a supra-naturally
sourced “Directing Activity” behind the leaps of emergence; and a differentiation be-
tween naturalistically conceived causation of ordinary change and the supra-naturally
driven causality behind emergent saltations although I cannot fathom, after repeated
attempts, what he was trying to say except some kind of strange occasionalism. At
the same time, Alexander was espousing “natural piety” as the sentiment and com-
mitment appropriate for the study of emergence (I am supposing this phrase referred
to something more spiritually specific and laudable in that day and age). There was
also Alexander’s proposal for a cosmic nisus energizing and guiding emergence in
the direction of a final apotheosis of an emergent deity, an idea that was to influence
Whitehead’s later, more overtly theological take on emergence.

Not being an acolyte of the appallingly uninformed cadre known as the “new athe-
ists,” I find nothing conceptually suspect here except that insult to theology called “the
God of the Gaps”, which has accomplished the stunning feat of denigrating nature
and the divine at the same time. I am not referring here to the overt and intentional
appropriations of emergence found in “cosmic” metaphysical/theological emergentist
positions, for example, the Emergent Evolutionist, John Boodin’s neo-“harmony of the
spheres” or the cosmic emergence system of the paleontologist and Jesuit Teilhard de
Chardin or even certain recent Whiteheadian process theologies. Instead, I am refer-
ring to the kind of “God of the gaps” incisively parodied in Sidney Harris’s well-known
cartoon depicting two scientists/mathematicians standing in front of a chalkboard filled with equations. One of them is pointing to a gap between two sets of equations
where it is written in chalk “THEN A MIRACLE OCCURS” and the caption beneath has
one of the scientists saying, “I think you should be more explicit here …”

The explanatory gap of emergence

Yet, some kind of diremption seems a necessary element in emergence to the
extent that a defining characteristic of emergent phenomena is the presence
of an explanatory gap. In scientific and philosophical inquiries a chief aim is the
elimination of gaps in explanation, indeed these gaps are what prod inquiry to begin
with. But in the case of emergence, the presence of an explanatory gap is what underscores the radically novel nature of emergent phenomena (I am restrict-
ing my take to diachronic and not synchronic emergence since, as far as I can tell, the
latter is relevant to philosophy of mind and consciousness and downward causation
which I intentionally excluded from my considerations in Part 1 and 2). The explanato-
ry gap, in other words, is what indicates the presence of a transformation that eludes
traditional causal and change processes that allow deducibility and predictability, a
process of transformation that involves higher level organizing factors and constraints
which transcend micro-level deduction.

In some ways, the explanatory gap of emergence might seem akin to the explana-
tory gap that the philosopher Joe Levine (1983) called attention to between expe-
riential qualia or consciousness and physicalist explanations. Motivating Levine and
then later employed by David Chalmers with this “hard problem of consciousness”
is a pressure emanating by a commitment to a “closed” physicalist orientation which
doesn’t allow the presence of something like experiential qualia so different in charac-
ter than something like the merely physical. It is a “hard” problem due to the explana-
tory pressure that the physicalist viewpoint must be able to incorporate what seems
to be so unphysical.

In the case of the explanatory gap of emergence, though, we don’t have the same
situation of some phenomena on one side of the divide and some explanatory scheme
on the other. Instead, we have the divide between the origin and the terminus of pro-
cesses that take the origin phenomena of substrates and transform them into the
terminus phenomena of emergent outcomes. Yes, the terminus has radically different properties than the origin but that is precisely why there is a gap. If this situation were just like the explanatory gap of consciousness there would be pressure from some ontological or metaphysical assumption that both origin and terminus fall under that same assumption. The explanatory gap of emergence does not carry that kind of demand since one of the functions of the explanatory gap is to include not just lower
level possibilities, but the dynamics, organizing principles and so forth of the macro
or global level as well. Facing up to the challenge of the explanatory gap of emergence requires accomplishing three things: imagine how the transformation traversing from substrate to outcome is effectuated; imagine how this transformation manages to travel from origin to terminus without obliterating the explanatory gap along the way; and imagine how this transformation can be accomplished by natural means using natural capacities, consequently not appealing to the supra-natural.

The explanatory gap of consciousness though does carry an important lesson for
imagining the processes of emergence in regard to the nature of the substrates. Hard
core physicalists keep coming back to primordial particulate physical objects as the
kind of thing that might count as substrates. But I don’t see how such physicalistically conceived substrates make sense outside of physicalist emergence. For example, if we
are talking about social emergence (see Sawyer, 2005) particulate physical units are
simply not pertinent as substrates. It would be like claiming elephants emerge out of
stainless steel ball bearings. The primordial substrate units of social emergence would
instead need to be human beings and already existing social groupings of various
types. One could, of course, take the tack of rejecting the possibility of social emer-
gence altogether because of the lack of particulate physical social substrates. It seems
obvious to me, however, as it was to George Herbert Mead and his theory of the social
self and social emergence, that certain social groupings are indeed emergent integra-
tions with the potential of behaving in radically novel ways. In terms of conscious-
ness, although I have questioned the idea of consciousness itself being an emergent phenomenon, I didn’t at the same time renounce the possibility of specific contents of
consciousness being emergent, e.g., thoughts, feelings, perceptions, and so on. If the
latter are emergent, then one would likewise not be on the right track to believing
that the viable substrates of these contents of consciousness are particulate physical
objects like quarks, and so forth.

It needs to be recognized that emergence is neither a theory of everything nor does it mandate any particular underlying metaphysics. This means that there are a host of other change processes besides emergence, and that emergence need not be tied to any specific assumptions of how it must occur, for example, presuming it must involve self-organization as was previously usually maintained. Nor is emergence re-
stricted to only certain kinds of substrates and outcomes, whether cellular automata,
organic cells, electrons, social groupings, or whatever. The result is a freeing-up of the
imagination since we are then not stuck with truncated views of natural processes and
natural capacities.


The explanatory gap as uncomputability

There’s no question that a chief spark for the resurgence of interest in emergence in the past few decades has been the computational emergence found
in artificial life and comparable simulations. Because of its computational infra-
structure, this type of emergence has spawned a variety of computationally related
constructs and methods. The idea of defining emergent phenomena in terms of un-
computability comes in part from this computational setting, the other main sources
being computational complexity theory in which Alan Turing’s work on uncomputable
numbers has provided a cornerstone, and the development of various measures of
complexity, e.g., the metric of algorithmic complexity.

In relation to emergence, uncomputability refers to impediments in deducing, predicting, or computing crucial parameters of emergent outcomes from knowledge of, even extensive knowledge of, substrates alone (see Darley, 1994; Boschetti & Gray, 2007). For the purpose of this paper the main advantage offered by understanding emergent phenomena in terms of uncomputability hinges on what has been discov-
ered (couched in terms of mathematical/logical formalisms) about how uncomputable
outcomes are produced. That is, there is an extensive background for over a century in
mathematics, logic, and computer science that exhibits processes capable of generat-
ing uncomputable outcomes. Our strategy is to explore this research into processes
able to bring about uncomputability and then apply it to emergence whose explana-
tory gap offers itself as another arena wherein uncomputable outcomes come about.
These formalisms on uncomputability can be used to guide our reimagining of the
processes responsible for a cognate uncomputability as a property of emergent phe-
nomena. So rather than acquiescing in the face of the uncomputability of the explana-
tory gap of emergence or turning our attention away from it, we can instead use the
gap’s uncomputability itself to probe how it is that uncomputable outcomes can be generated. This will enable us to possess the additional resources required for reimagin-
ing how such processes productive of uncomputable outcomes may proceed in the
non-formal natural world, natural processes realizing natural capacities.

It is helpful to distinguish the kind of uncomputability used to describe the explanatory gap of emergence from other versions of uncomputability. For example, a ran-
dom series of numbers is uncomputable if it really is being produced by some sto-
chastic process, e.g., some kind of random number generator program or radioactive
decay. But emergence is not the result of pure randomization so any uncomputability
assigned to it must differ accordingly.

Another kind of uncomputability to distinguish from what we are after (although
historically and conceptually related) is the mathematical intractability of what in com-
putational complexity theory are considered NP-complete problems. Although any
given solution to an NP-complete problem can be verified in polynomial time, there
is no known way to locate a solution in the first place since the time required to solve
the problem using any currently known algorithm increases very quickly as the size
of the problem grows. This means that the time required to solve even moderately
sized versions of many of these problems can easily reach into the billions or trillions
of years using computing power available today. Examples include such well-known
problems in combinatorial optimization as the traveling salesperson problem (what is
the shortest possible route that visits each city exactly once and returns to the origin
city?), the knapsack problem (given a set of items, each with a weight and value, determine the number of each item to include in a collection so that the total weight is less than a given limit and the total value is as large as possible), and so on. Emergence however,
although at times meeting tangentially with such problems by way of genetic algo-
rithms and genetic programs, is not about optimization so that problems in the latter
are not relevant. This means the type of uncomputability meant in labeling emergent
phenomena as such does not scale or accelerate in uncomputability as these optimi-
zation problems do.
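
To make the combinatorial explosion behind such problems concrete, here is a minimal brute-force sketch of the knapsack problem just described (a sketch of my own; the item data and function name are illustrative, not drawn from any of the cited papers). It simply tries every one of the 2^n subsets of items, which is exactly why run time balloons as the number of items grows:

from itertools import combinations

def best_knapsack(items, weight_limit):
    """Brute force: examine every subset of items (2**n of them) and keep the
    most valuable one whose total weight stays within the limit."""
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, v in subset)
            value = sum(v for w, v in subset)
            if weight <= weight_limit and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# Illustrative (weight, value) pairs; each added item doubles the work, so 20 items
# already mean about a million subsets and 40 items about a trillion.
items = [(12, 4), (2, 2), (1, 2), (1, 1), (4, 10)]
print(best_knapsack(items, weight_limit=15))   # (15, ((2, 2), (1, 2), (1, 1), (4, 10)))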

As we’ll see below, none of the early work done on uncomputability had anything
directly or intentionally to do with emergence, nor, looking back on it, was there any
such indication. They were pursued in very different camps with different agendas
and different objectives. It was only afterwards that the idea started being applied to
characterize emergent phenomena and that was for the most part in the wake of the
limitative theorems of Gödel and Turing on undecidability and uncomputability. That
work in logic, in turn, relied on a certain mathematical construction going back to the
great mathematician Georg Cantor in the third quarter of the nineteenth century on
the existence of transfinite sets (these terms will be defined below) but Cantor’s work
also had nothing to do with emergence and there is no reason to think he was even
aware of the notion or similar notions in Germany at the time (e.g., by Wundt and others), nor, if he had been aware of them, would he have paid the slightest attention.

This early work on uncomputability was later supplemented by the advent of complexity science, particularly in the case when research could be furthered through
means of various complexity metrics, such as algorithmic complexity, statistical com-
plexity, logical depth, sundry chaoticity measures, and advances in statistical criteria.
Aside from chaos data analysis, it wasn’t until the arrival of computational emergence
that the application of uncomputability per se to emergent phenomena came to the fore (see Darley, 1994). For instance, algorithmic complexity (see just about anything
written by Chaitin who is one of the pioneers in its development; also see Rabinowicz,
2005, for a clear exposition of the main tenets) measures the length of the shortest
possible description of those computational instructions (or bit string if we are talking
about the computational massaging of data streams) able to reproduce the outcome
being measured. An example is how the algorithmic complexity of a random bit string
generated by a stochastic process would have an algorithmic complexity the size of
the random bit string of the data itself since there is no shorter set of steps to gener-
ate that particular bit string less than the actual run itself of the randomization. The
algorithmic complexity in this case can’t be compressed but must be as long as the
ostensive manifestation of what the random series displays.
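
The incompressibility point can be illustrated, in a rough and ready way, with ordinary file compression, which gives a computable upper bound on, though it is by no means the same thing as, algorithmic complexity. The sketch below is my own illustration rather than an example from Chaitin or Rabinowicz: a patterned string shrinks to a tiny description, while a string from a stochastic source barely shrinks at all.

import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form: a crude, computable stand-in for
    the (uncomputable) algorithmic complexity of the data."""
    return len(zlib.compress(data, level=9))

patterned = b"01" * 50_000           # fully described by "repeat '01' 50,000 times"
random_bits = os.urandom(100_000)    # produced by a stochastic source

print(len(patterned), compressed_size(patterned))      # 100000 -> a few hundred bytes
print(len(random_bits), compressed_size(random_bits))  # 100000 -> roughly 100000 bytes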

Darley defined “a true emergent phenomenon” as one for which the optimal
means of prediction is the actual simulation or running of the program itself. This
means that in the case of emergence some sort of accurate analytic deduction from
pre-given parameters will not yield any improvement in the ability to predict what will
happen over just observing what happens. Of course, this is just a way of expressing
the ostensiveness property of emergent phenomena. For Darley, two implications of
this definition are: 1. Emergence involves a kind of “phase change” in the amount of
computation necessary, that is, it must consist of much more than a simple unfold-
ing of what is given; 2. Large scale behavior of a system emergent out of lower level
interacting substrates will not be capable of possessing any clear explanation in terms
of those interacting substrates. In our terminology, this means that for “true emergent
phenomena” there must be an explanatory gap between substrate and emergent out-
comes.
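
One way to make Darley's definition vivid, with an example of my own choosing rather than his, is an elementary cellular automaton such as Wolfram's Rule 30: so far as anyone knows, the cheapest way to learn what the center cell will be doing at step t is simply to run the automaton for t steps, i.e., prediction collapses into simulation.

def rule30_center_column(steps: int, width: int = 401) -> list[int]:
    """Run the Rule 30 elementary cellular automaton from a single live cell
    and record the center cell at each step; no known analytic shortcut
    beats simply carrying out the simulation."""
    row = [0] * width
    row[width // 2] = 1
    column = [row[width // 2]]
    for _ in range(steps):
        row = [
            # Rule 30: new cell = left XOR (center OR right)
            row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
            for i in range(width)
        ]
        column.append(row[width // 2])
    return column

print("".join(map(str, rule30_center_column(60))))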

Darley generalized from algorithmic complexity by complementing it with Turing’s theorem on uncomputable numbers, more specifically, the latter’s conceptualization
via the “Halting Problem” (which we will be going over later) to demonstrate that the
question of whether a system is or is not emergent is undecidable (another topic to be
discussed later on). Therefore, not only are emergent outcomes uncomputable from
knowledge of substrates alone, what is also undecidable (a corollary to uncomput-
able) ahead of time is whether a system will turn out to be emergent or not. According
to Darley, emergent phenomena are those for which the amount of computation nec-
essary for deducing their outcomes analytically, even with perfect knowledge before-
hand, can never be improved upon.



From uncomputability to self-transcending constructions

We can trace the development of uncomputability back to the aforementioned mathematical formalism devised by Cantor in 1891 in his proof for the existence of transfinite sets, that is, sets containing more members than our typical conception of a countably infinite set (the members of such a set can be
counted off by following the counting numbers 1, 2, 3, 4, 5, 6, 7, …; see Cantor, 1891;
Dauben, 1979; Lavine, 1994). Since our concern in this paper does not involve the na-
ture of infinity, we can concentrate instead on Cantor’s method which went on to play
an indispensable role in the later theorems on undecidability and uncomputability.
It was from a commentary on Cantor’s proof that I first came upon the phrase “self-transcending construction,” which I subsequently realized could function as an especially apt phrase for the processes of emergence.

It may seem unwarranted and thereby unprofitable to conceptually stretch from a proof method used in the arcane universe of transfinite set theory to a formal guideline trying to capture the “logic” by which emergents emerge. Two considerations, however, made the attempt seem worthwhile. First, I had already
come across various allusions, hints, clues, intimations of the relevance of Cantor’s
work to emergence before I was intrigued enough to examine his formalism in more
detail (for a list of such clues, see Goldstein, 2006).

Second, the phrase “self-transcending construction” (stc) that was used to de-
scribe how Cantor’s proof method worked carried certain associations which seemed
expressly suitable for describing emergence as well. Chief among these had to do
with the difference between the prefix “self-“ in “self-organization” and in “self-tran-
scending constructions.” The “self-“ in “self-organization” indicates the locus of what is
driving the organizing or structuring activities taking place, an image of an internally-
driven dynamism coming out of a system’s own inner resources and accordingly not
resulting from an externally-imposed order or organization (see Bernard-Weil, 1995).
This image of self-organization has persisted since the time of Kant’s original phrase
“self-organized” and its later extension by Schelling (see Keller, 2008a, 2008b; Heuser-
Kessler, 1992; Greene & Depew, 2004). This sense of inner agency also carries asso-
ciations of spontaneity (or in the extreme form “order-for-free”) since the system is
imagined as not requiring an external imposition for its new order.

Several drawbacks stem from these connotations of self-organization which I have discussed in some detail in earlier papers (Goldstein, 2004, 2006). Perhaps the most
prominent has to do with the fact that, at least as it is observed in the laboratory,
self-organization requires numerous and stringent constraints that argue against the claim of it being a spontaneous inner-driven process. Moreover, the phrase has, since
Kant, carried an emphasis on self-regulation, the apotheosis of which can be found in
the cybernetics idea of systems driven to restore equilibrium after being disturbed.
The image of an equilibrium-seeking system is quite difficult to square with emergen-
tist claims for the possibility of generating radically novel outcomes and not mere
restoration, although there have been a few valiant, even persuasive, reworkings of
self-organization to account for novelty production such as Kampis’s (1995) notion of
self-modifying systems.

The “self” in “self-transcending construction,” however, plays a very different role. First, it doesn’t include a commitment to any locus of agency involved in the structuring or organizing of the system, whether internal or external; such a locus is only ascertainable empirically through research into each specific case of emergence. Also, the proces-
sual activity of a self-transcending construction can be either spontaneous or deter-
mined by constraints acting on it (see Goldstein, 2011; Bernard-Weil, 1995). Further-
more, in contrast to the “self” of “self-organization,” the “self” of “self-transcending
constructions” refers to the anterior, “lower” level condition of the substrates which
are then transformed into radically novel emergent outcomes. This means that by
looking at emergence through the lens of stc’s, our attention is directed at the pro-
cesses of emergence which work on these “self/substrates”, transforming them, and
thus enabling them to be transcended from the way they were before emergence into
the production of novel emergent phenomena.

It is not enough that the phrase “self-transcending construction” offers certain benefits over “self-organization” to justify our attempt to utilize Cantor’s construction for emergence. We must also address how it is that a purely mathematical, quantita-
tive approach can be of help in reimagining emergence which, at least in my perspec-
tive, is not primarily constituted by mathematical operations nor can it be adequately interpreted in quantitative terms.

My claim is that this mathematical excursion can help us uncover a “logical” template or scaffolding involved in Cantor’s methods which was later incorporated into Turing’s proof on uncomputable numbers and that demonstrates how a radically novel, uncomputable outcome can be generated. The strategy
here is the following: by unearthing the “logic” underlying a mathematical production
of an uncomputable outcome, we gain insight into how this same “logic” may guide
the production of an uncomputable outcome in the non-mathematical realm of emer-
gence. Of course, moving from the purely mathematical sphere to that of emergence
will require appropriate translational means to open the imagination up to the “logic” steering the production of uncomputable outcomes. The challenge then is to see if
this analogy is sufficient to establish correspondence between an intentionally formal-
ist construction and a natural process such as emergence.

I think this breach is not insurmountable if we look at the situation in the light of the following little “thought experiment”: how could one tell the difference between the action or effect of a higher level organizing constraint (like that put forward in Juarrero, 2002; Goldstein, 2011) and the action of a natural process? Look, for example, at the
natural processes of emergence at work generating, forming, and shaping the famous
hexagonally shaped Bénard convection cells which Prigogine and others had so in-
tensively and extensively studied. The usual story is to account for the emergent form
of the cells by invoking the “mechanism” of self-organization, a purported spontane-
ous natural process. But what do “natural” and “spontaneous” mean in this context?
Presumably, they imply that nature will take its course when left to its own devices in
contrast to shaping that is intentionally imposed or constructed to be a certain way
such as hexagonally shaped. These emergent forms are explained as coming about
when the system is driven far-from-thermodynamic equilibrium through the means of
heat being applied so that what then results is a more efficient way of heat transfer-
ence through the system by way of convection rather than diffusion.

But why the specific emergent order of hexagonal cells? One answer is the constraining effect of the size and shape of the container in which the “self-organization” takes place. It turns out, though, that the influence of other constraints in shaping the order of the emergent cells is much greater than that of the physical container, an influence that can, from a not too great shift of perspective, be likened to a
constructor constructing the order to be a certain way. In Goldstein (2011), I discussed
two types of higher level organizing constraints acting in this system, the mathematical
constraint involved in packings on the plane or in three dimensional space, the other
having to do with certain differential topological constraints. Here I will only remark
on the first, the geometrical constraints on packing circles on a two-dimensional plane
surface.

According to the account given by D’Arcy Thompson, circles are most closely packed via six circles around a central one when there are uniform stress forces operative, in this case emanating from growth or expansion within and a uniform constricting pressure from outside. Such constraints will “push” the circles from their contiguous points in common to lines representing the surfaces of contact, a process which thereby converts the closely packed circles into hexagonally shaped cells. Thompson went so far as to assert, in a manner quite prescient of the idea of universality in
phase transitions and quantum protectorates, to be mentioned below, that the micro-level details about the inner or outer forces were irrelevant to the ensuing hexagonal shape as long as they were uniform in their action and thereby aimed at a state in which the surface tends towards a minimal area. Thompson also pointed out that
the brother of Lord Kelvin (of the Second Law of Thermodynamics fame), James Thom-
son, saw a similar “tessellated structure” in the soapy water of a wash tub, with partitions meeting in threes as at the vertices of a hexagon. Sounding like a modern complexity aficionado of self-organization, Thompson stated that Bénard’s tourbillons cellulaires “make themselves,” but he definitely didn’t mean that they “make themselves”
without the presence of the constraints which he had spent so much creative intel-
ligence in revealing.
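
For readers who want the arithmetic behind the six-around-one arrangement, the relevant result (standard textbook geometry, not something argued in this paper) is that equal circles packed hexagonally, each touching six neighbors, cover the fraction

    π / (2√3) ≈ 0.9069

of the plane, the highest density any packing of equal circles can achieve; under uniform inner growth and outer pressure the leftover interstices are squeezed out as each circle’s boundary flattens along its six surfaces of contact, which is Thompson’s hexagonal cell.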

I don’t think it is a particularly far cry from talk about cells making themselves
to them being constructed under the influence of constraints. There are higher or-
der pattern-organizing constraints that affect whatever is going on, so that “natural
processes” are equivalent to processes under the influence of constraints from the
very first instance these processes originate. Similarly, there are no phenomena at
all, whether substrate or emergent, which are not natural processes acting and be-
ing shaped by the contextual ambience of the constraints making or constructing the
order of the emergent phenomena. Hence formal operations under the sway of intentionally constructed constraining influences are, at least phenomenologically, no different from natural processes in natural systems also under the sway of constraints, which channel the construction of emergent order.

Cantor’s anti-diagonalization as a self-transcending construction

As mentioned, I first came upon the expression “self-transcending construc-
tions” in a description of Cantor’s method offered by the German mathemati-
cian and historian Oskar Becker (quoted by the Austrian-American philosopher
of mathematics and law Felix Kaufmann, 1978). Becker coined the term as a neutral
description for the ingenious mathematical construction Cantor devised in his proof
for the existence of transfinite sets. Although Cantor’s formalism turned out to be
of paramount importance in twentieth century mathematics and logic, it had noth-
ing intentionally or overtly to do with emergence per se. What had caught my atten-
tion, though, was that the method called by Becker a “self-transcending construction”
enabled Cantor’s formalism to generate an outcome with radically novel features in
comparison to the substrates from which it was generated.

Felix Kaufmann had borrowed Becker’s phrase “self-transcending construction” for
what he took to be its derogatory sense in implying an oxymoron: for how could any
construction transcend itself? Wasn’t that like lifting yourself up by your own boot-
straps? Kaufmann was a member of the influential Wiener Kreis, which included the likes of Rudolf Carnap, Karl Menger, Richard von Mises, Otto Neurath, and other leading mathematical, logical, and philosophical luminaries of the time, with even Kurt Gödel and Ludwig Wittgenstein stopping by. The issues taken up by the Vienna Circle had to do
with the intersection of science, philosophy, and mathematics, but at the forefront of
such issues for Kaufmann was the philosophy of mathematics. Kaufmann was in the
camp of strict finitism, that doctrine which vigorously rejected the notion of an actual infinite, as opposed to a potential infinity seen, e.g., in the three dots suggesting that a listing does not end. Since he held that Cantor’s transfinite sets presumed the existence of an actual infinity, Kaufmann rejected Cantor’s proof tout court, his main
avenue of attack being Cantor’s specific method of the self-transcending construction:

...no construction can ever lead beyond the domain determined by the principle
underlying it….the diagonal procedure [see below on “diagonalization”] … will lead to
more and more new ‘mathematical objects’, but we must at each stage remain within
the framework of the most general formation law according to which the progression
runs. The progression is determined as an unfolding of this and no other law… (p. 135)

(By the way, Kaufmann’s own logic here was incorrect since what Cantor wanted to
prove might have been proven by a different, sounder method.)

But the soundness of Cantor’s conclusion is not relevant to our purpose which is
instead to probe how this stc was capable of generating radically novel outcomes.
Consider Kaufmann’s term “unfolding” from the passage above. It brings to mind the
same term with a cognate meaning discussed in Part 1: Roque’s exposition of the
deductive-nomological approach to reduction as an explanation involving an unfold-
ing on a planar surface of something that had already been a priori convoluted. Such
an image presumes predictability and deducibility or, as we now can call it, “comput-
ability”. For Kaufmann, a mathematical construction such as Cantor’s self-transcending
construction must conform to the same circumscriptions as a deductive-nomological explanation; it is not allowed to involve anything but an unfolding of what has already been folded-up.

As I delved deeper into how Cantor’s stc worked, it dawned on me that it did so precisely
because it did not fit Kaufmann’s image of unfolding. That is, it had to depart from this
unfolding enough for radically novel outcomes to ensue. Nevertheless I was still puzzled about whether Kaufmann’s disparaging appraisal of it was correct, that “no con-
struction can ever lead beyond the domain determined by the principle underlying
it.” As we’ll see below, our exploration of Cantor’s formalism will consequently also
need to inquire into the issue of contradictoriness and the related notion of paradox.
In order to investigate these questions further, it is necessary to go into some depth
and detail as to how Cantor’s method operated (although the following will get a bit
technical, it will not require anything more than elementary mathematics).

In his earlier research Cantor had become interested in the continuity of continu-
ous functions, particularly in the magnitude or cardinality of the sets of the points
representing continuous functions, e.g., the cardinality of this set {1, 3, 6, 9, 100, 2112,
50000456} is seven because it has seven members. The property of cardinality does
not have anything to do with the nature of the sets’ members nor the criteria for set
membership. Certain cardinalities have become standard such as the countably infi-
nite cardinality of the set of all counting numbers {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 …} (the
three final dots indicating the counting can go on indefinitely). Later in his famous
1891 proof of transfinite sets (there were earlier, more indirect versions), Cantor want-
ed to know if the set of rational numbers (i.e., numbers made from ratios of integers
such as 5/6, 3/4, …) and the set of real numbers (all the rationals plus the irrationals,
transcendentals and so forth) have the same cardinality. It was here that Cantor in-
troduced his famous anti-diagonal construction or anti-diagonalization method (as
I’ll justify below, I add the prefix “anti-“ to be more accurate) in order to generate a
radically novel real number (the analogy is to the emergent outcome) from a set of
rational numbers (the analogy is to the lower level substrates) so that his stc represented
a transformational process from substrate to outcome.

He had already shown that the rationals formed a countable set by
demonstrating how to match, count off, or map each rational number to each count-
ing number in a manner that would not allow repetition (i.e., 2/3, 4/6, and 12/18 are
counted only once since they are the same number) nor leave out any of the rational
numbers. A mapping operation consists of a function that takes each member of one set to a member of another set. Establishing that two sets have the same cardinality, in the case of the rational numbers, for instance, can come about by way of a map that shows a one-to-one correspondence of the rationals with the other set, e.g., the counting numbers.
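
A minimal sketch of such a counting-off may be useful here (my own code, not Cantor’s original presentation, though it accomplishes the same matching): it sweeps through the fractions in a fixed order, skips any value already counted, such as 4/6 after 2/3, and pairs each survivor with the next counting number.

from fractions import Fraction
from itertools import count, islice

def positive_rationals():
    """Enumerate every positive rational exactly once by sweeping the
    numerator/denominator grid along anti-diagonals and skipping
    duplicates (4/6 is skipped because 2/3 was already counted)."""
    seen = set()
    for total in count(2):                  # numerator + denominator = 2, 3, 4, ...
        for numerator in range(1, total):
            q = Fraction(numerator, total - numerator)
            if q not in seen:
                seen.add(q)
                yield q

# Match each rational with a counting number, as in the one-to-one mapping just described.
for n, q in enumerate(islice(positive_rationals(), 10), start=1):
    print(n, q)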

Today, Cantor’s stc is usually depicted as an array of three elements (see Figure 2
below): a vertical column exhibiting an exhaustive list of rational numbers; horizontal
rows of the decimal expansion of each of the rational numbers on the vertical; and a diagonal going from upper left to lower right marking the intersection of the columns and rows—a decimal expansion is the decimal representation of a number arrived at by division, e.g., to arrive at the decimal expansion of the rational number 2/5 we need an algorithm like long division for transforming the fractional notation to decimal notation, as in Figure 1.
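
Since Figure 1 cannot be reproduced here, the following sketch (mine, but it is just ordinary long division) shows the algorithm the figure depicts: multiply the remainder by ten, divide, write down the quotient digit, and repeat.

def decimal_digits(numerator: int, denominator: int, places: int) -> str:
    """Long division: produce `places` digits of the decimal expansion of a fraction
    between 0 and 1 by repeatedly computing (remainder * 10) // denominator."""
    digits = []
    remainder = numerator % denominator
    for _ in range(places):
        remainder *= 10
        digits.append(str(remainder // denominator))
        remainder %= denominator
    return "0." + "".join(digits)

print(decimal_digits(2, 5, 10))   # 0.4000000000
print(decimal_digits(1, 3, 10))   # 0.3333333333
print(decimal_digits(1, 7, 10))   # 0.1428571428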

Figure 1 Long division as an algorithm

Figure 2 Diagonal array of Cantor’s stc

Figure 2 is one such possible array, with H referring to the decimal expansion going from left to right, V referring to the list of rational numbers going down (only rationals between 0 and 1 are used because the cardinality of the set of rationals between 0 and 1 is equivalent to that of all rational numbers, a strange characteristic of infinite sets), a diagonal sequence marked by underlining, and a vertical list to the far left which indicates the matching between the counting numbers and the rationals (the magnitude of a set is arrived at by a type of counting using the counting numbers).
What each horizontal row contains is simply the decimal rendition of the same frac-
tion on the vertical list, for example, the vertically listed rational number 1/3 becomes
the decimal expansion 0.3333333333… This entails that the sets of rationals making-
up both the vertical columns and horizontal rows have the same cardinality, that of
the counting numbers or the kind of “simple” infinity usually thought of in regards to
the notion of a potential infinity.

It is important to notice that because the vertical columns and the horizontal
rows contain the set of the rational numbers, and that the diagonal sequence (the
underlined numbers in Figure 2) marks the intersection which is the representation of
the mapping of these two sets against one another, the array can be thought of as a
self-referential mapping/matching of the set of the rational numbers to itself. In other
words, each rational number is matched or refers to each rational number including
itself. Furthermore, the diagonal sequence can be interpreted as a kind of code of self-
reference since the numerical values on the diagonal represent the point of intersec-
tions constructed by the self-referential mapping of the set of rationals onto itself (see
Hofstadter, 1985). We’ll say more about how the diagonal is composed as a represen-
tation of self-reference below.

Since it is easy to get lost among the trees and lose sight of the forest, let us recall
the purpose of laying-out Cantor’s stc: we are appropriating it as a logical formalism
that, as we’ll shortly see, represents the generation of a radically novel outcome. This
formalism is intended to serve as an aid for imagining the passage from substrate to
emergent outcome according to the following presumptions: the rational numbers
are the lower level, anterior substrates which are set up in the array in order to display
the countable cardinality property of the set of rationals; the array itself serves as a
conceptual device which both indicates an exhaustive and complete list representing
a self-referential mapping of the set of the rational numbers to itself and how the diago-
nal sequence is established as a kind of integration of the separate H and V sides of
the array, this integration acting as a kind of “code” of self-reference. The diagonal is
consequently the representation of a combinatory operation which plays a key role in
the process of radical novelty generation in Cantor’s stc.

Here are three examples of the diagonal sequence from Figure 2: the second frac-
tion/rational number down the vertical list (counting #2) is 1/3 whose decimal expan-
sion as shown in the figure is 0.3333333333… The diagonal is made precisely from
going down and to the right the exact same number of places. Here it is two down and two to the right, which is the numerical value 3. What about the fourth ratio-
nal number/fraction on the list, 1/4? Its decimal expansion is 0.250000000 … Since
it is the fourth rational number down the vertical list, we also count out four places
from the left of its decimal expansion which turns out to be 0 which is underlined to
indicate its status on the diagonal. Finally, let’s go to the ninth rational number/fraction down the list, which is 1/7, whose decimal expansion is 0.1428571428… If we then go nine places out from the left we have 2, which is underlined accordingly as being on the
diagonal.
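
Putting these three examples into code may help: the k-th diagonal entry is simply the k-th decimal digit of the k-th rational on the vertical list. The list below is my own stand-in for Figure 2’s array, arranged only so that, as in the text, 1/3 sits second, 1/4 fourth, and 1/7 ninth; any complete listing would serve.

from fractions import Fraction

def nth_decimal_digit(q: Fraction, n: int) -> int:
    """The digit n places to the right of the decimal point in q's expansion,
    found by long division (q is assumed to lie between 0 and 1)."""
    remainder = q.numerator % q.denominator
    digit = 0
    for _ in range(n):
        digit, remainder = divmod(remainder * 10, q.denominator)
    return digit

# An illustrative vertical list in which 1/3 is the 2nd entry, 1/4 the 4th, and 1/7 the 9th.
vertical = [Fraction(n, d) for n, d in
            [(1, 2), (1, 3), (2, 3), (1, 4), (3, 4), (1, 5), (2, 5), (3, 5), (1, 7)]]

diagonal = [nth_decimal_digit(q, k) for k, q in enumerate(vertical, start=1)]
print(diagonal)   # [5, 3, 6, 0, 0, 0, 0, 0, 2] -- 2nd entry 3, 4th entry 0, 9th entry 2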

In each case, then, we go down and to the right the same number of digit places.
The resulting diagonal sequence then represents a mapping of the rational numbers
on the vertical and the rational numbers on the horizontal, the mapping being a way
to talk about the intersection, the point of meeting, the place where the previously
separate component is integrated, so to speak, with another cognate component. As
pointed out by Simmons (1990) there is nothing sacred about using a diagonal form
like that shown by the underlined numbers in Figure 2; the only crucial requirement
is that the diagonal is constructed so as not to leave out any specific mapping of V to H, that is, it captures all the rationals on both the horizontal and vertical means of
representation. By the way, following mathematical practice, we are to accept that this
diagonal sequence continues indefinitely, an infinitely large set (the three dots indicat-
ing this) just as the vertical and horizontal lists are.

As we’ll go over in greater depth below, some sort of combinatorial, mixing, or integrative operation occurs during emergence in order to transpose the local, micro-
level of the substrates to be integrated at the macro-level. In their anterior, lower level
condition, the substrates possess the property of being isolated and particulate rela-
tive to what they will become in the novel emergent collective at the more global,
macro-level of the emergent outcomes (in their post facto condition their radically
novel characteristics of being collective, coherent, or correlated subsume the previ-
ous isolated particulate condition). Let’s call this transition the process of two-into-one
which I propose is found in one form or another in all cases of emergence since there
must be some processes which facilitate the integrative or collective wholeness on the
macro- or global-level, one of the defining characteristics of emergent phenomena.

In the Cantorian self-transcending construction, two-into-one is exhibited in the construction of the diagonal sequence which represents the intersection, and there-
fore a kind of melding, of the horizontal and vertical sides of the array. In his analy-
sis of the diagonal construction, the logician Dale Jacquette (2002) elucidates how it
partakes of an integration of the horizontal and vertical forms of each of the rational numbers and does so in a way that is exhaustive of all possible permutations of the
two values found respectively on the horizontal and vertical sides. If put in the form of
algebraic notation (which Cantor’s proof was phrased in), the number at the diagonal
would become a variable with an ordered pair of subscripts, say, D_{h,v}, as in Figure 3.

The first subscript h stands for the numerical value that is h steps out from the left
on the horizontal decimal expansion of the rational number which in turn is v steps
down the vertical list of rational numbers. The diagonal sequence, consequently, con-
tains those variables (or numerical values) when the two subscripts h and v are identi-
cal. This implies that the diagonal construction, unlike either the horizontal or the ver-
tical sequences, is an integration or melding of the previously isolated situation of the
horizontal and vertical sequences (for similar perspectives, see also Hofstadter, 1979,
1985; Webb, 1980). This might seem a bit confusing but I think that a large measure
of this confusion stems from the diagonal sequence being a confounding of the previ-
ous separate rows and columns, “confounding” in the sense of a “pouring together”
of what was previously separate, which in turn implies a transformation of the substrates
(Goldstein, 2002).
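
In Jacquette’s ordered-pair notation the construction just described can be stated compactly (my restatement of the prose above, adding nothing to it): the diagonal sequence is

    d_k = D_{k,k},   k = 1, 2, 3, …

where D_{h,v} is the digit standing h places to the right of the decimal point in the expansion of the v-th rational on the vertical list; the diagonal keeps exactly those entries whose two subscripts coincide, which is why it belongs wholly to neither the horizontal nor the vertical side but melds the two.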

Figure 3 The diagonal sequence as composed of subscripting by ordered pairs

This is all well and good, but what are all the appurtenances of the array supposed to accomplish? To answer that we have to recognize the specific reductio ad absurdum proof format that Cantor used since it will show what was motivating him in the next crucial steps. In a reductio ad absurdum type of proof, in order to prove a proposition
true, one first negates the truth valuation of this proposition and, then, in finding a
falsehood that logically follows from the negation, the proof of the original unnegated
proposition is established. Cantor had wanted to show that the real numbers were not equinumerous with the rational numbers, that is, that the two sets had a different cardi-
nality. Following the inferential rules of reductio ad absurdum, he accordingly negated
that proposition into: the set of the rational numbers and the set of the real numbers
have an identical cardinality (and since they are identical, then the real numbers must
make up a countably infinite set since the rational numbers also do). Utilizing the idea
of counting the numerosity of a set by showing its match/map or lack thereof with the
set of the counting numbers, Cantor’s self-transcending construction aimed at gen-
erating a real number that could not be matched with any possible rational number.
This then was the challenge: how to generate a number out of the rational numbers as
substrates that was so radically novel this new number could not possibly be matched
with any member of the full set of rational numbers, that is, could not be included in
any complete list of the rational numbers.

With the diagonal construction, Cantor now had the substrate at the right meso-
scopic level since if we consider the digits on H or V to be at the microscopic level,
then a shift from H or V to the diagonal represents a movement upward to the meso-
scopic level. It was at this juncture that Cantor was now in a position to construct his
counterexample to the negated proposition with which his reductio style of proof began.
This counterexample is the radically novel number that is to be constructed out of the
rational numbers but which violates their cardinality of countability and whose con-
struction is made possible now that the diagonal sequence is at hand.
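
Although the anti-diagonal move itself is taken up in what follows, a brief preview in code may help fix the idea. The digit-change rule below, replacing each diagonal digit with 5 unless it already is 5, in which case 4 is used, is one common textbook choice rather than necessarily Cantor’s own, and the short listing is again merely illustrative: the constructed number disagrees with the k-th listed number at its k-th decimal place, so it can appear nowhere on the list.

from fractions import Fraction

def nth_decimal_digit(q: Fraction, n: int) -> int:
    """Digit n places right of the decimal point in q's expansion (0 < q < 1)."""
    remainder = q.numerator % q.denominator
    digit = 0
    for _ in range(n):
        digit, remainder = divmod(remainder * 10, q.denominator)
    return digit

def anti_diagonal_digits(listing, places):
    """Alter every diagonal digit so the resulting number differs from the
    k-th listed number at its k-th decimal place."""
    return [5 if nth_decimal_digit(q, k) != 5 else 4
            for k, q in enumerate(listing[:places], start=1)]

listing = [Fraction(1, 2), Fraction(1, 3), Fraction(2, 3), Fraction(1, 4), Fraction(3, 4)]
digits = anti_diagonal_digits(listing, places=5)
print("0." + "".join(map(str, digits)))   # 0.45555 -- differs from each listed number in turn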

However, there is one other crucial element of the level or scale at which Can-
tor’s construction of the diagonal (and eventual real number) “dwelt” that needs to be
pointed out since it is a key factor in the composition of an emergent whole in con-
trast to a mere aggregate. First, let’s say “l” is the level of the diagonal series, in other
words, the level of the diagonal’s representation of the entire set of rational numbers,
which also entails that “l” will be the level of the radically novel number soon to be
constructed.

If l is the level of the entire set of rational numbers, then we should interpret
each rational number, functioning as a component of the entire set, as being at a level one step
downwards, l-1 (see this kind of level classification in Salthe, 1993). Thus, each rational
number member on the vertical side, as well as on the diagonal slash of the array, is at the
level l-1. But next consider the horizontal rows, i.e., the decimal expansion of each rational
number, each of which is itself at l-1. The decimal expansion is constituted by the numerical digits
found through some algorithm like long division as mentioned above. This means
that each of these numerical digits, as components making-up the horizontal row,
must be at a still lower level, (l-1) - 1, or l-2. Indeed, these numerical digits comprising
the horizontal rows of the decimal expansion are a kind of strange mixed substrate,
since they are neither individual rational numbers making up the set of the rationals,
such as the individual rational numbers listed in the vertical column which are at the
level l-1, nor are they the same as what they become as one of the ordered pairs
indexing the diagonal numbers according to Jacquette’s scheme. Rather, their nature is
a function of being ingredients somewhat arbitrarily designated by the context of the
operation of decimal expansion. I use the term “arbitrarily” here since other numerical
digits could also have been used for a similar purpose in indexing each of the
placements on the diagonal, e.g., instead of a decimal expansion, the horizontal row
might have been constituted by binary digits coming from an alternative binary
expansion of each rational number on the vertical list. The resulting diagonal number
would then have been a different sequence but could have served the same purpose.

Since the diagonal sequence is constructed at the intersection by combining via
the ordered pair scheme suggested by Jacquette, it takes one component of
the substrate from the vertical which as a rational number is at the (l-1) level and an-
other component from the particular numerical digit of the horizontal decimal expan-
sion, each of these digits being at the l-2 level. This mixture of levels in the construc-
tion of the diagonal has two significant implications for our purposes of reimagining
emergence. First, the combinatorial integration on the way to the macro- or global
collective or coherent level actually consists of a melding of substrate
components from different levels, again a sign of the “confounding of levels”.
The combining which we are calling the two-into-one aspect of emergence requires
activity of components from levels beneath the operational level of what is being
brought together to form integrations and the final emergent outcome.

Second, emergent phenomena seem to indicate that the transformational pro-
cesses leading to them involve not just what would normally be expected as substrates
but strange new combinations of the normal substrates formed with certain ambient
or contextual dynamics in the process producing novel substrate components them-
selves, thus a sign of the beginning of the transformation of the substrates on the way to
full blown emergence. For example, the four distinct formal written English words “I
do not know” are typically said during informal conversation as “dunno”, which man-
ages to retain the meaning of the four distinct words but is voiced as a melding into
just two flowing-together syllables. It is because ease of sound takes prominence while
preserving meaning that pidgin English developed, such that the “parole” in one geographi-
cal region can be largely understood on the other side of the world, even in countries
where the official languages may be as different as French and Dutch. If one considers
the whole of a learned conversational language, the primary units are not words as
separated into the commonly held formal categories of, say, articles, nouns, verbs, ad-
jectives, and adverbs, but rather integrated sound complexes which incorporate meanings
into sounds joined together for ease of pronunciation and similar factors.

That combinatorial processes effectuating novel outcomes involve substrates
at lower levels is of course not a new insight. A turn to a lower level l-1 as a key for
explaining phenomena on a level l is a common move in scientific theorizing. For
example, an appeal to the l-1 locus of lower level parts of atoms became crucial in
understanding the findings stemming from experiments in radioactivity in the early
days of the development of atomic physics and then shortly thereafter quantum
mechanics. At the time, Niels Bohr had put forward the idea that the atomic nucleus
was not like a solid billiard ball but was continuously morphing, like a liquid drop being pulled
in different directions, which made it possible, e.g., for a neutron hitting uranium to strike
a wobbling nucleus, making it wobble more and then split into halves (the halves are at l-1,
and there are l-2 dynamics going on as well). Of course, this is not to claim that radioactive
transformations are an example of emergence; it is instead just an illustration of the
involvement of levels under l. What makes emergence different is that the involve-
ment of different confounding levels becomes a necessary but not sufficient condition
for the formation of emergent wholes rather than mere aggregates. Other conditions
are also needed as we’ll go over below.

There is also another facet of Cantor’s proof that must be kept in mind, the
statement of which may seem unnecessary to assert, namely, that each action of
construction leading to some new feature of the array was, of course, the intentional
stroke on the part of the deviser or constructor of the proof, that is, that numbers ob-
viously don’t have tendencies to list themselves, or to combine with other numbers. I
bring this up since when it comes to applying the stc formalism to natural emergence,
we must be able to find fitting cognates happening naturally to what in the proof is
constructed to be the way it is. This point was touched upon above in terms of the
little thought experiment concerning how difficult it is to distinguish the act or effect of a
higher level organizing constraint and a natural process.


The need for a negation operator

With the diagonal construction, Cantor’s stc had accomplished the establishment
of a representation of the set of all rational numbers that exhaustively
ensured all the rationals would be included. Furthermore, this diagonal se-
quence had a countable cardinality since Cantor had previously shown the cardinality
of the set of rational numbers was itself a countable infinity, a surprising finding since
the set of the counting numbers {1, 2, 3, 4, 5, 6, …} only consisted of each succeeding
integer, whereas the rational numbers included in addition the seemingly far larger infinite
multitude of fractions between each pair of integers (e.g., between the integers
2 and 3 there is the set {2 1/2, 2 1/3, 2 1/4, 2 2/3, 2 5/7, 2 16/19, 2 3987/6789215, …}).
The stage was set but as of yet, the preceding acts were only preparations for the next
scene.

It was the mathematical construction which Cantor next devised that was both
the masterly tour de force of his proof and the principal reason justifying our turn to
Cantor’s stc as a formalism to be applied to the processes of emergence. The heart
of this potent construction hinged on a particular type of negation, a broader opera-
tion than negating a truth into a falsehood, or some other means for producing an
antithesis. The negation of Cantor’s stc amounted to any kind of change as long as
this change is applied consistently across the board of what is being negated. What
the negation operator of Cantor’s stc changed was the numerical values constituting
the diagonal sequence. For example, consider the diagonal sequence marked by the
underlined numbers going from upper left to bottom right in Figure 2: 536000002.
Negating this number can be achieved by consistently adding 1 to each number in
the sequence yielding the anti-diagonal sequence 647111113. Anti-diagonal then
refers to the applying of negation directly to each of the numbers on the diagonal
and anti-diagonalization refers to the process of negating the entire infinite series of
the diagonal numbers. These “anti-“ prefixed modifications are to be preferred, in my
opinion, since although the establishment of the diagonal sequence is a key move of
Cantor’s stc, by itself it cannot lead to the radical novelty generation that Cantor was
after—for that negation in this sense was mandatory.
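A minimal sketch of this negation operator in Python (my own illustration): each digit of the diagonal of Figure 2 is changed in the same consistent way, here by adding 1, taken modulo 10 so that a 9 would wrap to 0 (the example in the text happens to contain no 9s, so the result matches).

def negate(digit):
    # One consistent, across-the-board change applied to every digit.
    return (digit + 1) % 10

def anti_diagonalize(diagonal_digits):
    return [negate(d) for d in diagonal_digits]

diagonal = [5, 3, 6, 0, 0, 0, 0, 0, 2]     # the diagonal read off Figure 2
print(anti_diagonalize(diagonal))          # [6, 4, 7, 1, 1, 1, 1, 1, 3]
# By construction, the nth entry of the result now differs from D(n, n),
# i.e., from the nth digit of the nth number on the list.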

Now what exactly is the big deal about the new anti-diagonal number, whether
it is the outcome of adding 1, or 3, or 7 or subtracting 2, or 5, or whatever? Isn’t the
outcome just another rational number and therefore in that sense at least not radically
novel? Indeed, with the formalism of the diagonal array, Cantor had already shown
how to present the rational numbers exhaustively by way of a diagonal construction
that demonstrated a way to integrate them via a self-referential mapping. Because it



was constructed from this array, the diagonal sequence itself would appear to consti-
tute another rational number since it was constructed out of a combination of rational
numbers. Then, how exactly did the negation operation achieve the desired outcome
of the reductio ad absurdum format of the proof?

Consider the first digit of the anti-diagonal sequence. Following the action of ne-
gation and Jacquette’s interpretation of the diagonal number as a variable with an or-
dered pair for subscripts (see Figure 3), one subscript taken from the vertical and one
from the horizontal, if we look at the first digit of the newly constructed anti-diagonal
sequence, this number cannot be the same as the first rational number going down
the list since it is constructed to be different than that; moreover, this first digit cannot
be found in the first place of the first listed number’s decimal expansion going left to
right on the horizontal because it has been
constructed not to be equivalent. Next, consider the second digit of the anti-diagonal
sequence: it is constructed by negation to be different than the second digit of the
second rational number on the vertical list as well as not being equivalent to the sec-
ond digit of the second rational number’s decimal expansion. This same pattern of
disallowing the numerical value of the diagonal to match any digit placement of the
rational numbers is continued descending down the diagonal direction—they are all
constructed to be, via negation, different than each of the rational numbers corre-
sponding to the placement of the digits on the diagonal. In general, in the case of
the anti-diagonal sequence, its nth element has been constructed to be different than
the nth digit of the nth member of the countable set it was constructed from.

But this implies that the anti-diagonal sequence cannot be included on any total
list of a complete set of rational numbers even though it was constructed out of a list
of rational numbers! To repeat, even though the diagonal sequence before the opera-
tion of negation could be such another rational number and thus capable of being in-
cluded among the list of all possible rational numbers, after the action of the negation
operator, the anti-diagonal cannot be included because it has been constructed not
to be! One cannot cry “unfair” here since this is just an example of how mathemati-
cal artifice typically works: it constructs novel mathematical objects in order both to
resolve mathematical problems and to probe and create new mathematical “spaces”
along the way as long as such moves are logically consistent.

If the new anti-diagonal number cannot be included in a complete list of the ra-
tionals then, even though it was generated out of those same rational numbers via
Cantor’s anti-diagonalization stc, it must be radically novel in relation to them. This
also implies that any new set fashioned by appending the new anti-diagonal number


to the set of the rationals must contain more than a countable cardinality, this un-
countability then being a radically novel property. For instance, if what was added to
the list of rationals was just any succeeding diagonal number from the sequence of
diagonal numbers (i.e., before the action of negation in creating an anti-diagonalized
new number) the result would remain a countably infinite set since any succeeding
diagonal number is just one more rational number. But the anti-diagonal has been
constructed to not be just any succeeding rational number, instead it is constructed
to be, via Cantor’s stc, a radically new number with the capacity of transforming the
cardinality property from countable to transfinite. Moreover, the operation of anti-
diagonalization can be performed on any newly appended set with a similar outcome,
namely, a radically new number not able to be included in the newly appended set.

The effectiveness of Cantor’s self-transcending construction in transcending the an-
terior substrate dynamics, patterns, and laws has attracted various descriptor terms
over the years including: “diagonalizing out of” (Demoupolis & Clark, 2005; Priest,
1994); “levering itself out” (Priest, 2002); “joosting out” (Hofstadter, 1985); “switching
function” (Odifreddi & Cooper, 2012), “negation operator” (Raja, 2008); and others.
These phrases are telling not so much for their role in pointing to the Cantorian stcs
aiding in proving the existence of transfinite cardinality but much more so for empha-
sizing the potent force of the self-transcending construction in transcending previous
frameworks, patterns, structures, and properties by the transformation of substrates
into radically novel outcomes with radically novel properties. Cantor himself consid-
ered the transfinite sets engendered by his anti-diagonal method “an entirely new
kind of number” not achievable by any amount of “piecemeal” addition or multiplica-
tion of the rational numbers (cited in his letter to Gustav Enestrom quoted in Shanker,
1987: 217, footnote 34; see also, Kamke, 1950, and a similar description of the radical
novelty of Cantor’s new number by the Fields Medalist Paul Cohen, cited in
Tiles, 1991).

Moreover, we need not get caught up in the issue of the claim of the uncountability of
the transfinite sets Cantor believed his proof demonstrated, since such mathematical
luminaries as Henri Poincaré, Luitzen Brouwer, Ludwig Wittgenstein, and others, who
were skeptical of the supposed transfinite implications of the proof, did concede that it was
a powerful “recipe” for generating a radically new number (for Poincaré’s reaction, see
Moore, p. 136; for Brouwer’s, see Dauben, 1979; and for Wittgenstein’s, see Shanker,
1987). Kaufmann himself acknowledged that the anti-diagonal procedure could de-
termine a sequence of numbers other than those contained in the original sequence.
Ormell (2006) has an interesting fix on a transfinite-free interpretation of Cantor’s
novel anti-diagonalized numbers that fits our appropriation: he calls them lawless and
indefinable, making up open-ended totalities or systematically elastic totalities,
connotations we will be coming back to later on.

As Berto (2009) points out in his masterfully clear study of Gödel’s proofs on im-
completeness, the Cantorian stc is quite general: “The procedure holds for whichever
way one tries to construct the real numbers in a list: we can always produce an ele-
ment (whose identity will certainly vary according to the way the list is constructed)
that cannot appear as an item in the list” (p. 34) and agrees with Priest’s (1995) re-
marks, that the anti-diagonal method’s action on the diagonal consists in its “system-
atically destroying the possibility of its identity with each object on the list” (p. 119).
Priest’s comment can be said to confirm Humphries’s claim that during the emergence
process of “fusion” the substrates disappear into the new emergents. This is what hap-
pens in the case of transformation: substrates are so changed as to not be reducible
to what they were before emergence.

Furthermore, it is worth mentioning that if the resources one had available in order to
transcend the cardinality of the lower level set were restricted to the lower level count-
able cardinality, e.g., the aforementioned piecemeal operation, then there could be
no self-transcendence. But the construction was not confined to the lower level of the
substrates since it also had available the application of the negation operator by the
constructor. If we can capitalize on the point made above concerning the fit between
the constructor’s act of constructing and the effect of higher-level constraints in natural
processes, then transcendence in this sense could be seen as possible as a natural
process and allowable as a realization of a natural process.

We can also see this activity of the negation operation, or diagonalizing out,
joosting out, or the action of a switch function, as generating an explanatory
gap, according to the following viewpoint. Envision the process of constructing the di-
agonal sequence as a building-up from building blocks at each numerical value at the
intersection of the horizontal and vertical sides of the array. From Jacquette’s perspec-
tive this is equivalent to the consecutive construction of each ordered pair of dual sub-
scripts. Correspondingly, for the anti-diagonal construction we simultaneously negate
each of the numerical values of the diagonal as we go along. Thus we first build up the
diagonal as representative of the countable rational set and then the anti-diagonal as
the representative of a new set with its radically novel cardinality property. From this
perspective, we can see the stc anti-diagonal construction as similarly a bottom-up
process (or, more literally in accordance with Figure 2, since the diagonal descends toward
the bottom right, a downward building-up process!)


Yet, the negation operator subverts this sense of some new construction being
constructed bottom-up and accordingly followed in a piecemeal fashion, since the
negation operator negates each expected value of the diagonal. What this implies is
that although we can follow along deductively as the diagonal “progression” is “un-
folded” (Cantor’s terminology) or, in other words, as the sequence is constructed, the
same following along is not possible for the anti-diagonal construction. Sure it seems
like we are following it the same way we follow the diagonalization, but the negation
ensures that instead each step of following is negated. This entails, in turn, that even
though the diagonal sequence can be reversed or traced backwards to its origin in
the sense that any particular diagonal number achieved along the way can in fact be
included back on the list of the rational numbers, such is not the case for the anti-
diagonal number, for it has been constructed so that this is not possible. Hence, the radically
novel anti-diagonal number which is intentionally constructed to be that way, stops
the building up process from being reversed or traced back or reduced to what was
already “unrolled”.

What I am trying to get at here is that the anti-diagonal construction, the heart
of Cantor’s stc, operates like a ratchet, as illustrated in Figure 4. The building-up of the
diagonal sequence is analogous to the “forward” motion of the saber-tooth wheel
in a clockwise direction to the right permitted by the ratchet mechanism since the
phalange “a” will not get caught by the prong “b” but instead will only slide along the
upper side of “b”. But the anti-diagonalization construction, on the contrary, functions
corresponding to the way a ratchet blocks the reverse direction. That is, each time
the negation operator operates on each succeeding digit of the diagonal sequence,
it acts like the over-hanging hooked “tooth-like” spur of “b” that stops the saber saw-
like wheel from going in the reverse or counter clockwise direction. Each “hook” of
“b” illustrates each negation of the digit along the diagonal. And it is this blockage
to the reverse direction which keeps the newly constructed anti-diagonal from being
included on the list of the rationals, or in other words, from being susceptible to
reduction to the diagonal sequence it is constructed out of. The smooth edge of “a” is
analogous to following the diagonal in building up the rest of the sequence, while the
hooked shape of “b” is what ensures an explanatory gap is generated at each act of
the negation operator in anti-diagonalization. It is the generation of an explanatory
gap since the anti-diagonal sequence cannot be contained in the list of the substrate
from which it is generated. Here it is the irreducibility aspect of the explanatory gap
which is receiving the emphasis, that is, the untraceability of the anti-diagonalization
sequence backwards to the substrates. Shortly we’ll look more at the
undeducibility side of the explanatory gap.



Figure 4 Anti-diagonalization construction imagined as a ratchet

The “break” or stoppage effectuated by the negation operator can also be seen in
other formal constructional approaches such as the twist and gluing involved in the
making of a Mobius Strip that was mentioned in Part 1. The “mechanism” of the stc is
cognate to the “fold”, “twist”, and “glue” operations involved in the construction of a
Mobius strip from a flat piece of rectangular paper. In Part I, I suggested that only if we
consider the actual construction of a Mobius strip (Ryan 2006) with its unique global-
level properties (Weisstein, no date), would it count as an example of emergence. First
consider the given substrate: a two dimensional bounded surface such as a rectangu-
lar strip of paper. This surface has the crucial property of orientability (or directionality
if you prefer) which means that any figure “written” on the surface in a certain direc-
tion, e.g., an arrow, cannot be moved around the space in a continuous fashion so as to
eventually wind back to its original position as the mirror image of itself, i.e., as an arrow
pointing now in a different direction.

Yet, a seemingly quite simple procedure can result in a radically different property,
the procedure consisting of folding, gluing or taping one side of the rectangle to its
opposite side (by curling it, e.g.) but while doing so twisting the surface around be-
fore the gluing. This transformative process generates a surface with a radically novel
topological property, non-orientability. This new surface has the characteristic that a
figure written on it and moved around can come back to its same original position but
now as a mirror image of what it was, the arrow pointing in the opposite direction.
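For readers who would like the paper-and-tape description pinned down by a formula, the following Python sketch uses the standard textbook parametrization of a Möbius strip (a general mathematical fact, not something drawn from the sources cited here); the half-angle u/2 is what encodes the single twist.

import math

def mobius_point(u, v):
    # Standard parametric embedding: u runs once around the band (0 to 2*pi),
    # v runs across its width (-1 to 1); the half-angle u/2 carries the twist.
    x = (1 + (v / 2) * math.cos(u / 2)) * math.cos(u)
    y = (1 + (v / 2) * math.cos(u / 2)) * math.sin(u)
    z = (v / 2) * math.sin(u / 2)
    return (x, y, z)

# After one full circuit in u, the edge v = -1 arrives where the edge v = +1
# started: the former two edges (and two sides) have been joined into one.
print(mobius_point(0.0, 1.0))
print(mobius_point(2 * math.pi, -1.0))   # numerically the same point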

Furthermore, whereas before the operation of twisting and gluing, the surface
had two sides, front and back, after the operation the surface has only one side! This radi-
cally novel new property of one-sidedness can be seen if one draws a line down the
middle of the paper and continues this line along the middle. Eventually one will see just a
line down the middle of the surface no matter how you turn the surface. In this other
example of a two-into-one operation, two sides have been combined to create one
side, at least one side in the sense of just moving continuously along the strip; although
this continuing is actually happening in a dimension one up from the local experience,
the scope or macro property is not evident from the perspective of going along the
strip alone. Moreover, the original two sides have been subsumed
into the remaining one side only. The original two sides just don’t exist as they did
before (again, like Humphries, 1997, and his “fusion” take on emergence).

The twist step in the creation of a Mobius strip is analogous to a narrative plot
twist in a mystery or detective novel or film. Drawing the reader further and further in
because of its beguiling uncomputability from the anterior plot line, the twist rep-
resents a transgression of what took place before, i.e., a digression away from “an
unfolding of this and no other law.” The negation operator, the Mobius twist, makes it
evident that for a process to lead to an unexpected outcome, this process must itself
include unexpected shifts, turns, digressions, divergences. In other words, the pro-
cesses responsible for radically novel outcomes must themselves partake of radically
novel shifts.

There is yet another aspect of the formalism of a self-transcending construction
that must be taken into consideration as we apply it to emergence, namely, a certain
arbitrariness in how the construction is constructed. The way the array in Figure 2 is
arranged gives the impression of being the only way it can be ordered in order to
ensure that all the rational numbers are exhaustively “captured”. That is, the formalism
suggests that the building up of this list by constructing each number in a consecutive
fashion is a matter of strict rule following according to a deductively accessible laying
out of them in consecutive order. The same would hold true not only for the construc-
tion of the diagonal list, since it merely follows the ordered building of the vertical list,
but for the anti-diagonal sequence as well, since it is constructed as a negation of each
number in the exact same order as the diagonal.

While it is indeed true that the specific arrangement depicted in Figure 2 is one
way to guarantee that all the rationals are included, this specific arrangement is mere-
ly contingent so that others that can accomplish the same guarantee are also ac-
ceptable. For instead of the way the order is arranged in Figure 2, 1/2, 1/3, 2/3, 1/4,
3/4, 1/5, 2/5, 3/5,..., it could just as easily been ordered as: 3/5, 2/5, 1/5, 3/4, 1/4, 2/3,
1/3, 1/2, …; or 1/9, 2/9, 4/9, 5/9, 7/9, 8/9,…; or 156/157, 155/157, 154/157, 153/157,



152/157, 151/157; or any other sequence, as long as there was some way to ensure the
exhaustiveness of the list. The upshot is that all such lists are arbitrary, and as a
result all it takes to come up with a totally different anti-diagonal, radically novel num-
ber is to transpose even one number in the vertical list. This means that the
radical novelty property of the anti-diagonal number, as the “emergent” outcome of
the formalism, is not tied down to any specific deducible arrangement on the level of
the micro-constituents of, e.g., the digits found in the decimal expansion.
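The contingency of the ordering can be illustrated with a short Python sketch (a toy with finite lists, my own construction): the same handful of rationals, enumerated in two different orders, yields two different anti-diagonal sequences, neither of which is tied to any single privileged arrangement.

from fractions import Fraction

def expansion_digits(q, n):
    # First n decimal digits of q after the decimal point (toy helper).
    digits, frac = [], q - int(q)
    for _ in range(n):
        frac *= 10
        digits.append(int(frac))
        frac -= int(frac)
    return digits

def anti_diagonal(ordering):
    # Negate (add 1 mod 10) the digit where horizontal and vertical indices agree.
    n = len(ordering)
    return [(expansion_digits(q, n)[i] + 1) % 10 for i, q in enumerate(ordering)]

ordering_a = [Fraction(1, 2), Fraction(1, 3), Fraction(2, 3), Fraction(1, 4), Fraction(3, 4)]
ordering_b = list(reversed(ordering_a))     # the same rationals, reordered

print(anti_diagonal(ordering_a))            # [6, 4, 7, 1, 1]
print(anti_diagonal(ordering_b))            # [8, 6, 7, 4, 1], a different sequence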

This conclusion, of course, follows from the nature of it being a deliberate con-
struction. However, regarding the point made above concerning how constructions
can be translated into natural constraints, we would expect that in naturally occurring
emergence, there would be a corresponding independence, i.e., explanatory gap, be-
tween emergent phenomena and their micro-level substrates even though the former
is transformed out of, or constrained to be, the latter. In fact, we can see this kind of
independence or explanatory gap between macro- and micro-levels in the emergent
“quantum protectorates” such as superconductivity in solid state physics (see the re-
marks of Laughlin & Pines, 2000; and the insightful discussion on it by Morrison, 2012)
. Moreover, this is why Laughlin and Pines point out that many crucial features of
these emergent quantum protectorates cannot be deduced or predicted from funda-
mental equations but rather must be measured in each experimental context within which
the emergent phenomena occur. They stress that this is not due to some spe-
cial mathematical intractability but rather to the independence of the emergent state
from its anterior substrates. The formal structure supplied through the idea of an stc will
always need to be appended by the empirical research into each particular case of
emergence.

Emergent phenomena may form into prototypes but they will, if they are really
emergents, elude prefabricated algorithms or explanatory deductions from anterior
fundamentals and this is why empirical research is always necessary. According to
Laughlin and Pines, things can be clearly shown that “cannot be deduced by
direct calculation” from supposedly underlying micro-scale theoretical properties
expressed mathematically, which can only yield approximations, for “exact results
cannot be predicted by approximate calculations”. Instead, these results are found
contingently by experimental measurement of the actual prototypes, as non-deducible fea-
tures of these emergent phenomena. Laughlin and Pines offer some examples of this
“explanatory gap” manifested as an inability to deduce emergent outcome from lower
level substrates and instead demonstrate a need to measure the system in context:
“simply electrical measurements performed on superconducting rings determine to
high accuracy the quantity of the quantum of magnetic flux hc/2e; the magnetic field
generated by a superconductor that is mechanically rotated measures e/mc” … “the
low energy excitation spectrum of a conventional superconductor, for example, is
completely generic and is characterized by a handful of parameters that may be deter-
mined experimentally but cannot, in general, be computed from first principles;” and
the Josephson quantum effect and the quantum Hall effect cannot be deduced from
micro-level dynamics.

There is a direct analog to this in chemistry according to Hendry (2010a and b):
using appropriate micro-level Coulomb Schrödinger wave equations, even modified by
the method of the Born-Oppenheimer approximation, will not yield molecular struc-
ture (remember that molecules are at a higher level while the Schrödinger equations
operate at the level of assemblages of electrons and nuclei). For Hendry the problem is
not one of mathematical intractability but rather that the molecular structure is not there
to begin with as data for the Coulomb Schrödinger equations.

Finally, it is worthwhile to point out that there are actually two phases in which
Cantor’s construction can effectuate self-transcendence. The first is the initial opera-
tion of anti-diagonalization that enables the construction of the anti-diagonal real
number transcending the countable cardinality property of the rational numbers from
which it is constructed. But there is also the possibility of a continual reapplication
of the anti-diagonalization operation to any new set comprised of the original list
appended with the newly generated number. In discussing this second aspect of self-
transcendence Kaufmann cited the work of the distinguished American mathemati-
cian Oswald Veblen who had used a version of Cantor’s stc in his own work on con-
structing ordinal numbers which, in contrast to cardinal numbers, express the order
and not magnitude of sets. According to Kaufmann, “What is characteristic of this new
type [of Veblen’s take on Cantor’s construction] consists in its being in principle undis-
closed: a determinate constructional principle, however widely conceived, will never
lead to the goal, but it is only in the course of constructional activity itself that new
instructions for continuing the procedure emerge” (p. 135). But if the stc is to retain
its radical novelty producing potency, these new “instructions” arise at each repeated
reapplication of anti-diagonalization to each new set constructed out of the original
substrate list appended with the radically novel number. This procedure of the stc is
“undisclosed” since the new set to which it is reapplied is unknown before the actual
anti-diagonal construction operation is accomplished with each new reapplication.
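The shape of this reapplication can be sketched in Python (again a finite toy, since the actual argument concerns infinite lists): each newly constructed sequence is appended to the list, and the anti-diagonalization is then applied to the enlarged list, producing yet another sequence not on it.

WIDTH = 8   # toy rows are padded out to a fixed width

def negate(d):
    return (d + 1) % 10

def anti_diagonal(rows):
    return [negate(rows[n][n]) for n in range(len(rows))]

rows = [
    [5, 0, 0, 0, 0, 0, 0, 0],
    [3, 3, 3, 3, 3, 3, 3, 3],
    [6, 6, 6, 6, 6, 6, 6, 6],
    [2, 5, 0, 0, 0, 0, 0, 0],
]

for step in range(3):
    new_row = anti_diagonal(rows)
    new_row += [0] * (WIDTH - len(new_row))   # pad the toy row out to WIDTH
    print("step", step, "->", new_row)
    rows.append(new_row)   # the operation can now be reapplied to the enlarged list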



Cantor’s stc in the limitative theorems of Gödel and Turing

We have gone over the inner workings of Cantor’s self-transcending construc-
tion, whose potency in generating radically novel outcomes by way of com-
paratively simple processes is well captured by this quip about it from the
mathematical logician Nathaniel Hellerstein (1997), “Has ever so much been gotten from
so little?” Going deeply into this particular mathematical construction was motivated
by two factors. The first was the set of allusions to it, e.g., the relation between emergent
and transfinite sets offered presciently by the proto-emergentist Oliver Reiser, an intima-
tion of the critical role of a set’s cardinality proposed by one of the chief architects of
today’s understanding of emergence, John Holland, and so forth. The second motiva-
tion was indirect, namely, the conceptual linchpin role of Cantor’s stc in Darley’s defini-
tion of emergent phenomena according to uncomputability, an idea with roots in vari-
ous strands of mathematics, logic, and computer science, and most importantly in the vastly
influential limitative theorems of mathematical logic devised in the nineteen thirties first by
Kurt Gödel and shortly after by Alan Turing, with side trips along the way before and
after in the works of other mathematical logicians. Most pertinent here was Turing’s
demonstration of the existence of uncomputable numbers, work building on Gödel’s
conclusions about undecidability in his incompleteness theorems.

The first thing to note is just how far the conceptual context of Cantor’s stc had
come during its passage from its original set theoretical framework. For Cantor, the
formalism’s power lay in its ability to produce an entirely new kind of quantity, the
quantitative measure of numerosity or cardinality. When it had come to mathematical
logic, though, the context had shifted to non-quantitative issues of the decidability or
solvability of mathematical problems in general (although quantity as such could be
seen in them to a much lesser extent such as in Gödel suggesting the same countable
cardinality for listing propositions according to their match with the natural numbers,
a countably cardinal set—see Gödel, 1992: 39, fn. 7). These theorems were formulated
against the backdrop of David Hilbert’s famous Entscheidungsproblem or “Decision
Problem” which contended that all well-posed mathematical problems were solvable
or decidable by some fixed method, an idea held by many mathematicians at the
time. According to Hilbert, if a mathematical proposition is true then there must be
some way to decide or prove or demonstrate it is true. To attack this problem, Hil-
bert argued that all mathematical problems could be put into an appropriate formal
construction. The central issue of the decidability problem was whether it can always
be decided if a given proposition of the formal system is a theorem, that is, can it be
proven true (Peregrin, n.d.) by following the steps of deduction according to the rules
of the formal system in question.


Adapting Cantor’s stc in order to shift to the issue of decidability/computability
required at least two moves, since these issues are not just purely quantitative and thus not re-
solved by counting through, e.g., the appropriate mapping to the counting numbers.
First, there needed to be a shift to the appropriate kinds of cognate listings, combinatorial
diagonalization, and then modified anti-diagonalization, which would fit deductive or
decisional steps rather than pure quantitative counting.

Second and related to the first, the combinatorial strategy in constructing the di-
agonal had to also shift to what would make sense in the new context of deductive/
decisional progressions and not mere progression by increase of counting numbers.
Here the genius of Gödel was revealed in full measure with his self-referential “coding”
scheme (later usages refer to a “diagonal lemma” thus drawing an association be-
tween what Gödel did and Cantor’s diagonal). From this Gödel devised a proposi-
tion that was evidently true because of the way it was presented but was not provably
true, that is, there was no definite method to decide whether it was true or
false. In other words, its evident truth was undecidable. Cantor’s stc’s ability to tran-
scend substrate cardinality, thus had become transposed into transcendence of de-
cidable methods confined to a substrate level composed of logical and mathematical
resources.

Gödel had considerably ramped-up the elementary self-reference found in the
mapping of the set of rationals to itself that was central to Cantor’s stc (represented in
the vertical and horizontal arrays as well as the diagonal intersection). In Gödel’s con-
voluted coding method, numbers and sequences of numbers could represent either
the number itself or operations on the number (Gödel justified this move by claiming
that constructing sentences about themselves and arithmetic operations on them does not
violate the logic of Russell and Whitehead since his propositions are recursively de-
fined functions; see Gaifman, 2005, 2007).
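Gödel’s actual coding is far more intricate, but its core device, turning a finite sequence of symbol codes into a single natural number by prime-power coding so that it can be uniquely decoded again, can be sketched in a few lines of Python (the particular symbol codes below are arbitrary placeholders):

def primes(n):
    # First n prime numbers by simple trial division.
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode(symbol_codes):
    # A sequence (c1, c2, c3, ...) becomes the single number 2**c1 * 3**c2 * 5**c3 * ...
    number = 1
    for p, c in zip(primes(len(symbol_codes)), symbol_codes):
        number *= p ** c
    return number

def decode(number, length):
    codes = []
    for p in primes(length):
        c = 0
        while number % p == 0:
            number //= p
            c += 1
        codes.append(c)
    return codes

g = encode([3, 1, 4])      # a toy three-symbol "formula"
print(g)                   # 2**3 * 3**1 * 5**4 = 15000
print(decode(g, 3))        # [3, 1, 4]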

Since mathematical propositions amounted to operations on numbers, e.g., the
function of squaring is an operation where a number is multiplied by itself, the self-
referential structure meant that a proposition could be coded as a sequence of num-
bers. As Hintikka (2000) has pointed out, the nested self-referentiality in Gödel’s proof
construction can become quite dizzying to keep track of. He likens it to a celebrity
who plays themselves in a movie or TV show, and then talks about him/herself as
they are in real life yet they are doing so within the framework of the movie/TV show.
For example, John McEnroe played himself in a TV movie about an imaginary tennis
champion so that when he did in fact speak in the TV movie about himself, he was



simultaneously talking about himself as he was in real life, as well as how he was sup-
posed to be in the movie as well as just another member of the cast.

An alternate analogy for Gödel’s self-referential coding was proposed by
P. J. Fitzpatrick (1966): think of Gödel’s method as making use of three languages si-
multaneously: Latin, French, and English. Latin stands for the formal system itself, e.g.,
arithmetic, which typically needs to be translated and interpreted; French stands for
the meta-language used for meta-logical (or meta-mathematical) purposes to make
precise statements about, say, Latin; and, English being the actual language that is
used in which we can make what seem to be true statements and sometimes even can
tell if in fact a statement is true. We will come back to self-reference below when we
discuss its relation to paradox.

I point out these analogies to emphasize the role of self-reference, which originally
goes back to Cantor’s stc, where the diagonal, as a representation of the self-referential
mapping, is a sequence in itself making up a particular diagonal number whose
cardinality should be the same as that of the horizontal and vertical arrays from which it is
combined. The relevance of this will become clear later when I compare the “logic” of what is at
work in stcs to the self-referential closure of other complexity models that overem-
phasize self-reference.

By setting up a particular sequence following this self-referential code (akin to
the arrangement of the array of Cantor’s stc), Gödel showed how his formalism could
generate an unprovably true statement, radically novel in its own unique non-quan-
titative “diagonalizing out” of substrate methods of deciding true propositions. This
result shook up the mathematical and logical world for it entailed that every (consistent) formal
system rich enough to express arithmetic was incomplete, that is, contained true but undecidable
propositions. If truth couldn’t always be decided, then how did one have faith that the formal system was con-
sistent or sound? According to Franzen (2005), basically Gödel constructed an anti-
diagonalization of provability with this Gödel sentence:

G if and only if n is not the Gödel number of a theorem of S (where n is, in fact, the Gödel number of G itself).

But this meant that the sentence entailed a true but unprovable proposition!

It needs emphasizing that Gödel’s limitation on decidability was itself limit-
ed, since what he had precisely proved was that the evidently true but undecidable
proposition could not be proved only using the resources available to the formalism
in which the proposition was stated. This didn’t rule out (as many have erroneously
thought), though, the possible decidability of the same proposition through utilizing


resources from formalisms not so restricted, that is, sets of formalisms that could con-
tinually be transcended by appended radically new numbers constructed via a Cantor-
like stc, or as Gödel put it himself (1992: 62, footnote 48a):

The true source of the incompleteness (undecidability) attaching to all formal systems
of mathematics...is the fact that higher types can be continued into the transfinite...
whereas in every formal system at most denumerably many types occur. It can be
shown, that is, that the undecidable propositions here presented always become
decidable by the adjunction of suitable higher types...

This is indeed what I have been meaning by the term “self-transcendence” in regard to
emergence. Accordingly, emergence can be thought of as precisely that which evades
formalization with any particular model and thus must be formalized by transcending
that very model. That this is not just a trick of words will be shown below.

Bequeathed the revolutionary methods and conclusions involving incompleteness
and unprovability/undecidability from Gödel’s remarkable feat, Turing turned his at-
tention to a related avenue of inquiry, one that from hindsight we can appreciate as
in line with his special genius in connecting the abstract with the concrete seen, e.g.,
in his remarkable code-breaking work with the German Enigma code machine. Less
interested than Gödel in Hilbert’s search for an all-encompassing decidability, Turing
focussed more on the specific issue of whether all numbers were computable.

Turing devised his own self-referential type of code (analogous to Gödel’s) for rep-
resenting any possible algorithm or decisional procedure, a machine version of what
could be accomplished by an actual “human computer.” He described his machines as
doing the calculation of a “human computer” by means of a primitive input-output device,
which later commentators termed a “Turing machine.” Each possible algorithm could be
represented via the Turing machine construction, the code expressing the decisional
steps or methods needed for computing or solving a number, no matter how long
this chain of deductions or logical inferences, as long as only a finite “alphabet” for
representing a particular computation was available (Petzold, 2009). In an algorithm
like the long division above in Figure 1, each step in the chain of reasoning arose
from the preceding one; hence his machines were “deterministic,” just as it is
the operation of logic gates in computer software, embedded in hardware, that is
responsible for the deductive flow of a program at work. As will become relevant be-
low, we can see the deciding or deducing operations making up the inner workings
of an algorithm as analogous to a series of deductions going from some substrate and
computing some solution or outcome from it, again from an origin to a terminus.
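As a concrete, if tiny, illustration of the kind of deterministic, step-by-step device Turing described, here is a minimal Turing-machine simulator in Python; the particular instruction table, which merely writes three symbols and halts, is a made-up example chosen only for brevity.

def run(table, state="start", steps=100):
    # A finite instruction table acting on an unbounded tape, one step at a time.
    tape, head = {}, 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")                  # "_" stands for a blank cell
        write, move, state = table[(state, symbol)]   # deterministic lookup
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape, state

# A toy machine: write 1, 0, 1 on successive cells, then halt.
table = {
    ("start", "_"): ("1", "R", "s1"),
    ("s1",    "_"): ("0", "R", "s2"),
    ("s2",    "_"): ("1", "R", "halt"),
}

tape, state = run(table)
print(state, [tape[i] for i in sorted(tape)])   # halt ['1', '0', '1']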



Turing explicitly stated he had wanted to proceed pretty much along the
lines of Cantor’s anti-diagonal stc by setting-up a kind of listing of computable num-
bers in association with the algorithms used to compute them, like the array of hori-
zontal, vertical, and diagonal used to depict Cantor’s stc in Figure 2 above. He would
then be in a position to apply the anti-diagonal negation operator to “diagonalize
out” a resulting radically new uncomputable number. Yet he realized he first had to have a
way of recognizing algorithms that could actually compute numbers for his list, or as
he put it, “the problem of enumerating computable sequences is equivalent to the
problem of finding out whether …” it is decidable, but since “we have no general process
for doing this in a finite number of steps,” then “by applying the diagonal argument
correctly, we can show that there cannot be any such general process” (Turing quoted
in Floyd, 2012; see also Petzold, 2009; Gaifman, 2005; Cai, 2003; Shagrir, 2007; Chaitin,
2011).

Since the set of numbers that are computable are the ones that are comput-
able by algorithms, and since, using his machines, Turing had defined algorithms as definite
instructions that could be encoded in a finite “alphabet”, this meant the set of com-
putable numbers must be countable (to relate all of this back to Cantor’s focus on
cardinality). But this wasn’t enough to use a direct form of Cantor’s stc operating on
countable numbers since it is also true that most real numbers are not computable.
Oddly enough, though, some real numbers are computable, e.g., the transcendental
number pi is calculable since there is a definite decisional procedure to calculate it
(Florian, 2011).

This meant for Turing that he first had to prove he was only capturing algorithms that
actually produce computable numbers for his “arrays,” which is why he then introduced what was later
called the Halting Problem. This demonstrated that it was not possible in general, by some
algorithm, to decide ahead of time which numbers would turn out to be computable.
That is, there had to be uncomputable numbers. The term “Halting” was later used to
refer to the impossibility of knowing if a particular algorithm would or would not halt
at a correct solution ahead of time (some attribute the term to the computer scientist
Martin Davis). Various implications followed from the proof of the Halting Problem,
e.g., there is no general algorithm that decides whether a given statement about natu-
ral numbers is true or not, since a proposition stating that a certain program will
halt can be converted into an equivalent proposition about natural numbers (“Halting
Problem,” Wikipedia). If we had an algorithm that could solve every statement about
natural numbers, it could certainly solve this one; but that would determine whether
the original program halts, which is impossible, since the halting problem is undecid-
able.
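The self-referential core of the argument can be stated as a short sketch in Python (a standard textbook rendition of the reasoning, not Turing’s own notation): suppose a total decider halts(program, argument) existed; then a program built to do the opposite of whatever the decider predicts about it yields a contradiction when fed to itself.

def halts(program, argument):
    # Hypothetical total decider: would return True iff program(argument) halts.
    raise NotImplementedError("assumed, for contradiction, to exist")

def troublemaker(program):
    # Do the opposite of whatever halts() predicts about program run on itself.
    if halts(program, program):
        while True:        # predicted to halt, so loop forever
            pass
    else:
        return             # predicted to loop, so halt immediately

# Considering troublemaker(troublemaker): if halts(troublemaker, troublemaker)
# were True it would loop, and if False it would halt -- a contradiction either
# way, so no such total decider can exist.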


Yet despite the need to demonstrate the uncomputability of the Halting Problem
and thereby the existence of uncomputable numbers, the basic outline Turing fol-
lowed was Cantor’s stc and its “moving parts”. As described by the British mathema-
tician Andrew Hodges (2002), author of a masterful and provocative biography of
Turing,

Turing’s proof can be recast in many ways, but the core idea depends on the self-
reference involved in a machine operating on symbols, which is itself described by
symbols and so can operate on its own description. …However, the “diagonal” method
has the advantage of bringing out the following: that a real number may be defined
unequivocally, yet be uncomputable. It is a non-trivial discovery that whereas some
infinite decimals (e.g., π) may be encapsulated in a finite table, other infinite decimals
(in fact, almost all), cannot. (p. 4)

Hodges underscores an apposite parallel between the processes and outcomes of
Cantor’s stc and Turing’s work: “…the point relevant to Alan Turing’s problem was that
it showed how the rational could give rise to the irrational in exactly the same way,
therefore, the computable could give rise to the uncomputable by means of an [anti]-
diagonal argument” (p. 102). This relation between uncomputability and uncountabil-
ity can be interpreted as even stronger through an insight from the late, preeminent
mathematical logician and protégé of Gödel, Hao Wang (1974: 76) that the “wider
implication [of Turing’s proof is]…that no formal system can contain all the real num-
bers.” This is in essence a corollary to Turing’s proof that there can be a number that
is not computable from means devised from within the formal system from which the
number is generated.

Thus, even though there is a formal structure for proving the fact about the halt-
ing problem, this same formal structure cannot be used to derive a priori which spe-
cific algorithms can be found germane to solving particular problems. To be sure there
are always rules of thumb that can be appealed to in order to devise appropriate al-
gorithms, the means of arriving at the useful ones making up a large part of computer
science inquiry. Applied to emergence this implies that although a formal structure
like the stc developed here can provide needed insight, at the same time it mandates
the presence of a non-eliminable explanatory gap. We will return to how this gap
can be understood and handled contextually below.



Uncomputability and complexity metrics in the wake of
Turing

There is a close connection, as Chaitin has repeatedly emphasized, between al-
gorithmic complexity and the theorems of Gödel and Turing (and indirectly to
Cantor’s stc). In Darley’s view, the property of uncomputability is equivalent to
the algorithmic complexity being so high that emergent outcomes can only be known
ostensively, i.e., by letting the program run.

We can also see a close connection between Turing’s theorem and Cohen and
Stewart’s (1994) Existence Theorem for Emergence, which hinges on a kind of uncom-
putable intractability involved in deriving emergent features from lower level laws,
one of the clues that led me to consider Cantor’s stc role in emergence to begin with.
Cohen and Stewart justified emergence through their Turing-like contention that “in
any sufficiently rich rule-based system there exist simple true statements [by which
they are referring to emergent features] whose deduction from the rules is necessarily
enormously complicated.”

More specifically, they point to how the computational emergents found in the
Game of Life have been proven capable of generating a programmable computer
which can be shown to possess the crucial feature of undecidability associated with
the halting problem. That is, one could set-up the Game of Life in such a way that one
of its “creatures”, a glider, say, is annihilated if it halts. However, and this is the key
point they want to emphasize, Turing proved there was no way to tell, ahead of time,
if the program would indeed halt and therefore if the computational emergent would
be annihilated. As they write, “This kind of uncomputability occurs because the chain
of logic that leads from a given initial configuration to its future state becomes longer
and longer the further you look into the future, and there are no short cuts” (their
emphasis). They concluded, “...[emergent phenomena] are not outside the low-level
laws of nature; they follow from them in such a complicated manner that we can’t
see how” (p. 438) and, consequently, emergence “transcends its internal details, and
there’s a kind of scale transcendence” (pp. 440, 441). The transcendence of the details
shows up in the explanatory gap of the quantum protectorates, which we’ll be saying
more about below.
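To make the example concrete, here is a minimal Python sketch of the Game of Life update rule (the standard birth-on-3, survive-on-2-or-3 rules) with a glider; the sketch is mine, intended only to show the kind of simple rule-based system whose long-run behavior Cohen and Stewart are pointing to.

from collections import Counter

def step(live_cells):
    # Count, for every board position, how many live neighbors it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True: the glider has moved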

The phrases “such a complicated manner”, “whose deduction from the rules is
necessarily enormously complicated” and “refractorily long” can be interpreted as
nonspecific complexity measures that speak to a kind of uncomputability of emer-
gence along the lines of Darley’s algorithmic complexity metric. To be sure, the aim of


Cohen and Stewart was not to devise such a complexity measurement but rather to lay
out informally what a formal “existence proof” for emergence would look like. Indeed,
relying on specific complexity metrics to emphasize the uncomputability property of
emergence comes with disadvantages if it is presumed one size fits all. For example,
even algorithmic complexity, as useful as it might be in certain cases, displays an un-
wanted correlation between the degree of complexity and the order in a system. Thus,
algorithmic complexity is at its highest in random systems, where it is clearly the case
that the length of any program describing the data cannot be less than the actual size of the
data stream the metric is measuring, i.e., a random series is said to be incompressible.
This implies that the random series is uncomputable but
emergence is not produced by just stochastic processes. It may indeed be true that
one facet of the processes involved in generating emergent phenomena may incor-
porate some kind of stochasticity and this will be displayed in the eventual emergent
patterns in some form or another, but randomness generation cannot be a leading
player in the production of emergents since it is pattern and order which are signs of
emergent phenomena and not lack of pattern and disorder. That is, emergent phe-
nomena are supposed to be all about novel order, structure and pattern, not random
bit strings. So how is uncomputability, if restricted to what algorithmic complexity
measures, to be of help in characterizing emergence?
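One crude way to feel the correlation being objected to is to use an off-the-shelf compressor as a rough, upper-bound stand-in for algorithmic complexity (this is only an illustrative proxy, not Kolmogorov complexity itself): the “complexity” score comes out highest precisely for the patternless string.

import os
import zlib

def proxy_complexity(data):
    # Compressed length as a rough, upper-bound proxy for algorithmic complexity.
    return len(zlib.compress(data, 9))

samples = {
    "ordered":  bytes(1000),        # 1000 identical zero bytes
    "periodic": b"01" * 500,        # a simple repeating pattern
    "random":   os.urandom(1000),   # incompressible with overwhelming probability
}

for name, data in samples.items():
    print(name, proxy_complexity(data))
# The random sample barely compresses at all, i.e., this metric peaks exactly
# where there is no pattern -- the limitation discussed above.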

It was partly because of these limitations of algorithmic complexity that the comput-
er scientist and quantum information theorist Charles Bennett (1986, 1988) devised
an alternate complexity metric with a different correlation profile called logical depth,
which is a relevant complexity measure for emergence for two reasons. First, it has
the advantage, in contrast to algorithmic complexity, of being highest in the face of
the kinds of new order, new patterns, new structures of emergent phenomena rather
than at its highest in the case of random series. Second, in order for logical depth,
as a measure of the logical steps involved in deducing an emergent outcome, to be
brought to its appropriate length so as to possess its apt unique correlation profile,
Bennett proposed that its calculation utilize the Cantorian anti-diagonal stc, i.e., trans-
pose the stc into a mathematical operation which represents a negation operator:
“the gist of which is to generate a complete list of all shallow n bit strings and then
output the first n bit string not on the list.”
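Bennett’s “shallow” strings are defined relative to computation time on a universal machine, which cannot be reproduced in a few lines; the following Python toy (entirely my own stand-in, with “shallow” replaced by a trivially checkable surrogate) is meant only to exhibit the Cantorian shape of the move he describes: list the shallow n-bit strings, then output the first n-bit string not on the list.

from itertools import product

def is_shallow(bits):
    # Toy surrogate: count a string as "shallow" if it is just a repetition
    # of a block of length 1 or 2, i.e., producible by a very short rule.
    return any(bits == (bits[:k] * len(bits))[:len(bits)] for k in (1, 2))

def first_deep_string(n):
    all_strings = ["".join(p) for p in product("01", repeat=n)]
    shallow = {b for b in all_strings if is_shallow(b)}     # the complete "shallow" list
    for candidate in all_strings:                           # diagonalize out of it
        if candidate not in shallow:
            return candidate

print(first_deep_string(8))   # '00000001' under this toy notion of shallowness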

Adding the Cantorian anti-diagonal-like step to the calculation of the metric en-
ables it to bring out how the data from the emergent structures “contain information
about their own depth” (hints at it being another instance of a kind of self-reference)
such as obvious redundancies in emergent order (redundancy being one crucial sign
of order as opposed to disorder), unequal digit frequencies, aspects of the outcome



that are predictable only over considerable time (notice parallels to Cohen and Stew-
art’s existence theorem), and computational efforts in general. As has been recognized
by many scientists (e.g., Crutchfield, Fontana, Buss, Gell-Mann, and so on), modern
physics has not been a fruitful breeding ground for the development of insights and
measures of structure or order. This fallowness has been a liability in studying emer-
gence for the obvious reason that the radical novelty of emergence is all about the
novel order observed in emergent phenomena. However, one can detect a reinforc-
ing feedback loop between increasing research into emergence (and other aspects
of complexity science) and increasing invention of new complexity measures whose
influence will extend far beyond that of physics alone.

Self-reference, negation, and flirting with paradox

Since Cantor first devised his anti-diagonalization construction, an association with paradox has prompted much commentary. Early examples were Cantor's Paradox of the Set of All Sets and Russell's infamous Barber Paradox. This relationship to paradox became re-emphasized with the central role of Cantor's stc in the limitative theorems of Gödel, Turing, and others. Indeed, Gödel admitted that his theorems were patterned in part on the notorious Liar and Richard paradoxes (Grim, Mar, and St. Denis, 1998). As Douglas Hofstadter (1985) has trenchantly
commented, “To some people, [anti-] diagonalization seems a bizarre exercise in arti-
ficiality, a construction of a sort that would never arise in any realistic context. To oth-
ers, its flirtation with paradox is tantalizing and provocative, suggesting links to many
deep aspects of the universe” (p. 335; my emphasis; Hofstadter’s magnum opus Gödel
Escher Bach examines this theme in his inimitable and enlightening fashion).

Taking a closer look at this “flirting with paradox” in the Cantorian stc can aid us
in two ways: first we can better appreciate the potency of stc’s in producing radically
novel outcomes; second, we can see why it will not be necessary to turn to paracon-
sistent or paradoxical logics for a cogent and naturalistic reimagining of emergence.

There are two essential ingredients of Cantor's anti-diagonal construction that also happen to be two of the crucial ingredients in most paradoxes: self-reference and negation. As we went over above, self-reference in Cantor's proof showed up in the mapping of the set of rational numbers to itself, exhibited in the arrays and most significantly in the diagonal sequence, which Jacquette explained in terms of its "ordered pair" subscripts, one subscript index number being taken from the vertical list and the other from the decimal expansions on the horizontal rows. Negation, of course, showed up in the consistent changing of each numerical value on the diagonal which enabled the
resulting number to "diagonalize out," "lever out," "joots from," or "switch" from the
diagonal sequence. It was the action of this negation operator which generated the
radically novel real number not capable of being included in the list of rationals and
thus displaying the radically novel cardinality property of an uncountable set.
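
To keep the mechanics of this negation operator in view, here is a minimal sketch; the four finite rows and the particular digit-changing rule are illustrative assumptions, standing in for Cantor's infinite array:

    def anti_diagonal(expansions):
        # Row i holds the decimal digits of the i-th listed number; change the
        # i-th digit of row i -- the negation operator acting on the diagonal.
        new_digits = []
        for i, row in enumerate(expansions):
            d = row[i]
            new_digits.append(5 if d != 5 else 6)  # any rule that differs from d will do
        return new_digits

    listed = [
        [1, 4, 1, 5],
        [7, 1, 8, 2],
        [3, 3, 3, 3],
        [2, 7, 1, 8],
    ]
    print(anti_diagonal(listed))  # [5, 5, 5, 5]: differs from row i at digit i, so it sits on no row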

Although they are obviously closely related, paradoxes and contradictions are dif-
ferent logical forms. Thus, when Felix Kaufmann rejected the very possibility of a self-
transcending construction because “...no construction can ever lead beyond the do-
main determined by the principle underlying it”, he was in effect claiming that the idea
of stc was absurd for implying this contradictory syllogism:

1. Any kind of construction must follow and stay within the conceptual arena
defined by the principle underlying the construction;
2. The "self-transcending" part of the phrase "self-transcending construction"
entails that the construction in question is not staying within the conceptual
arena defined by the principle underlying the construction, i.e., it transcends or
diverges from this normative conceptual arena;
3. Hence, the expression “self-transcending construction” is a contradiction;
4. Therefore, a self-transcending construction is impossible (which meant for
Kaufmann that what the stc was used to prove, i.e., the existence of sets with
transfinite cardinality, was also impossible).

Negation obviously plays a critical role in a contradiction: the presence of a contradiction emanating from a sentence that was presumed to be true negates the truth of the statement containing it. Moreover, if a system of related statements includes even one contradiction, then the system is considered inconsistent, or is said to "explode," what the Scholastics called ex contradictione quodlibet or "from a contradiction, anything follows," since allowing one contradiction leads to the ability to entail whatever one wants (Sainsbury, 2008). Since the truth valuation of a sentence containing a contradiction is false, and the presence of contradictions can thus wreak havoc on a logical system, various "paraconsistent," "paradoxical," or "dialetheic" logics have recently been developed to incorporate certain contradictions into acceptable logical statements for certain purposes, a topic we will discuss below.

A paradox has a different kind of logical structure, the most prominent feature of
which is the inclusion of a negation operator within a self-referential structure. For in-
stance in the famous self-referential paradox of the Liar, the negation operates inside
the circle created by the self-reference, e.g., here is one version of the Liar.

1. This sentence is false.

Sentence 1 has a self-referential structure due to the indexical "this" which wraps what
is being referenced in the sentence, that is, the truth valuation of the sentence, around
back to itself. The negation is applied inside the self-referential structure so that the
negation of “false” negates the sentence’s own truth valuation. If we consider the sen-
tence’s truth valuation through a series of deliberations about it, its truth valuation,
which is presumably what the sentence is about, oscillates from true to false to true
to false…, this loop going on indefinitely. Thus, if we assume that sentence 1 is a true statement, then what it says about itself being false must be a true statement. But if we deliberate that it is true that it is false, then what the sentence is saying about itself must be false. This means that it is false that the sentence is false. This then leads to a deliberation that it must be true, and the round robin goes around again. When rep-
resented by a dynamical system (see Grim, Mar, St. Denis, 1998; and Goldstein, 2001)
with appropriate adjustments made to fit with the nature of a dynamical system, the
truth valuation is first an attractor, then a repeller, then an attractor, then a repeller,…
There can be no final settled truth valuation since the self-referential structure has cre-
ated a complete closure to the system so that nothing else but this unstable wobbling
can take place.
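
A toy numerical iteration, offered only in the spirit of those dynamical treatments (the update rule is a deliberate simplification of my own, not the cited authors' model), displays this wobble directly:

    def liar_step(v):
        # "This sentence is false": the next estimate of the sentence's truth
        # value is the degree to which its current value fails to hold.
        return 1.0 - v

    v = 1.0  # begin by supposing the sentence is true
    history = []
    for _ in range(8):
        history.append(v)
        v = liar_step(v)
    print(history)  # [1.0, 0.0, 1.0, 0.0, ...] -- the valuation never settles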

In mathematical logic, self-referential sentences or formulae (not necessarily self-referential paradoxes like the Liar) are generated by what is referred to as diagonalization (after Cantor, of course), in which, for an expression ("ex" for short) within the sentence, one substitutes for ex the literal quotation of the whole statement (Smullyan, 1994). For example, start with
sentence (1):

(1) Jeff is reading "Reimagining Emergence, Part 3" ("Reimagining Emergence, Part 3" is "ex")

Now it might be thought that sentence (1) is already self-referential since it states that I (Jeff) am reading what I am writing. But it is not really self-referential, since it refers only to a paper on the computer screen, not to the sentence itself. Thus we need to proceed
by substituting the whole sentence (1) for ex to yield:

(2) Jeff is reading “Jeff is reading ‘Reimagining Emergence, Part 3’”.

Yet, strictly speaking, sentence (2) is not yet self-referential for it merely asserts that
Jeff is reading sentence (1), not sentence (2). This means that generating complete self-reference requires wrapping what is being referred to completely around back to the "self," a feat necessitating a kind of bending of the referring back to itself. As Smullyan points out, applied to sentence (1), strict self-reference would
not be instantiated until we reach this rather strange beast of a sentence:

(3) Jeff is reading the diagonalization of "Jeff is reading the diagonalization of 'Jeff is reading "Reimagining Emergence, Part 3"'".
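
As a purely illustrative sketch of the substitution step that takes sentence (1) to sentence (2), and which can be iterated toward monsters like sentence (3), diagonalization can be mimicked by a simple string operation (the quote-nesting subtleties of the sentences above are ignored here):

    def diagonalize(sentence, ex):
        # Substitute the literal quotation of the whole sentence for the quoted "ex".
        return sentence.replace('"' + ex + '"', '"' + sentence + '"')

    s1 = 'Jeff is reading "Reimagining Emergence, Part 3"'
    s2 = diagonalize(s1, "Reimagining Emergence, Part 3")
    print(s2)
    # Jeff is reading "Jeff is reading "Reimagining Emergence, Part 3""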

The philosopher W. V. Quine cleverly devised a simple method to create self-referential sentences, called "quining" (Franzen, 2005: 42). It is a way to generate a self-referential sentence without the need for such indexicals as "this" in "this sentence": simply append a phrase to its own quotation, generating something like:

“...yields a sentence with property P when appended to its own quotation,” yields a
sentence with property P when appended to its own quotation.
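
The appending operation itself is easy to mimic; the following toy sketch (the leading ellipsis of the version above is dropped) simply glues the fragment onto its own quotation:

    def quine_append(fragment):
        # Append the fragment to its own quotation: "F" F
        return '"' + fragment + '" ' + fragment

    phrase = ("yields a sentence with property P "
              "when appended to its own quotation,")
    print(quine_append(phrase))
    # Prints the quoted fragment followed by the fragment itself, i.e., the
    # self-describing sentence given in the text above.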

It is also important to realize that cross-reference, e.g., two or more sentences refer-
ring to each other, can be interpreted as just a more indirect form of self-reference, re-
taining the special features of the latter. Consider a conversation between Tweedledee
and Tweedledum sitting next to each other in the cafeteria engaged in a conversation
with their friends at the table (adapted and revised from Grim, Mar, and St. Denis,
1998):

Tweedledee: You, Tweedledum, are speaking truthfully right now because I can see that
you are wearing your glasses today, which you usually forget.

Tweedledum: I can see you, Tweedledee, sitting in the chair next to mine and you are
wearing a blue shirt.

What they are saying to each other refers to things relevant to each, and there is
clearly no issue about the truth valuation of what they are saying (from a purely logi-
cal point of view—empirically, it might be the case that Tweedledum is color-blind and
often gets it wrong as to whether something is green or blue).

Next, their conversation veers in a different direction:

Tweedledee: What Tweedledum will next say is true.

Tweedledum: What Tweedledee just said is not true.
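
Treating the two utterances as coupled truth valuations, in the same toy dynamical spirit as the Liar iteration above (again an illustration of my own, not a model taken from Grim, Mar, and St. Denis), makes the instability explicit:

    def step(dee, dum):
        # Tweedledee: "What Tweedledum will next say is true"  -> tracks dum
        # Tweedledum: "What Tweedledee just said is not true"  -> negates dee
        return dum, 1.0 - dee

    dee, dum = 1.0, 1.0
    for _ in range(8):
        print(dee, dum)
        dee, dum = step(dee, dum)
    # Cycles (1,1) -> (1,0) -> (0,0) -> (0,1) -> (1,1) -> ... and never settles.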

Here we see an unstable truth valuation parallel to that shown to take place in the
Liar. Ultimately though, both instances of the cross referential conversation can be col-
lapsed to self-reference by using something like proxy variables for each speaker's utterances. Incidentally, here is my all-time favorite self-referential paradox, coming from the master of the fantastic, Jorge Luis Borges (2000):

In Sumatra, a graduate student taking her oral examinations for a doctorate in prophecy is asked the first question, "Will you pass?" She quickly replies, "I will fail".
If what she says is a true statement, i.e., she actually will fail the exam, then she had
prophesied correctly. But if her prophecy is correct, this means she really should pass
the test on her capability for telling the future, after all, her degree is in prophecy.
However, passing the test means that what she said about failing is not true. But then,
if her statement “I will fail” is not true, then the implication is that she should pass, i.e.,
not fail the test. How can the candidate pass if she had prophesied that she would
not?

With these ideas on self-reference in mind, let’s turn to the logician Graham Priest’s
(1994, 2002) proposal for a kind of universal blueprint describing self-referential para-
doxes composed of two crucial structural components termed transcendence and clo-
sure (for a most interesting analysis of such paradoxes in relation to Cantor’s stc, see
Keith Simmons, 1990; unfortunately we don't have space enough here to do Simmons's analysis justice). Transcendence can be thought of as analogous to the transcendence in
a self-transcending construction, i.e., it is the operator that drives the sentence’s valu-
ation to transcend what it originally is (e.g., the condition of the substrates). Closure
refers to the way the self-referential structure wraps the sentence around itself so tight
that what the sentence is saying is not about anything else but some defining feature
of itself like its own truth or falsity. For example, sentence 1 could instead have said "This sentence is printed in black letters on a white screen," but that is not self-referential in the same tightly enclosed way, for it refers to certain incidental qualities the sentence happens to have and not to something at its core like its falsity.

According to Priest, the closure of a self-referential paradox encapsulates the transcendence of the negation, which consequently backs up on itself like a vicious circle.
Because of closure in a self-referential paradox, the transcendence has no way out.
But why didn’t that happen with Gödel’s theorem which he admitted was modeled
on paradoxes such as the Liar or the Richard paradox? The answer is because of his code
that managed, by having nested references of numbers for operations on numbers
(operations amounting to logical deductive or decisional steps), to generate the radi-
cally new anti-diagonal expression that expressed the self-evident fact that it was true
while at the same time not provable or decidable.


What Gödel had done can be connected with Priest’s point about how transcen-
dence gets blocked in a self-referential paradox by the closure of the self-referential
structure. What would happen instead to the potency of the driving force of transcen-
dence if the enclosing impediment of the self-referential structure is somehow trans-
muted or loosened? Indeed, this is what happens in the case of Cantor’s stc which
contains both self-reference and negation, but without the former impeding the lat-
ter. A closer inspection of the Cantorian stc shows it is not identical to a self-ref-
erential paradox in spite of both containing self-reference and negation. Remember,
Hofstadter asserted that anti-diagonalization only flirts with, not embraces, paradox.

But before we get to the way out possessed by stc’s, I would like to comment on
a certain similarity between what Priest calls closure and the exact same word used by
Francisco Varela (1974, 1979) as a description of what his theory of autopoiesis claims
about the central fact of a living organism, namely, its self-referential core function
to generate the kind of closure required for what he called biological autonomy. For
Varela, organisms are self-referential in their very essence by consisting of a network
of production processes of components which, through their interactions, regener-
ate and realize the network that produces them. This self-referential, circular causality
operates to create an invariant self-contained identity, a boundary-circumscribed state of closure. What I say about Varela's work can also be said to apply to Robert Rosen's theory of M/R (Metabolism/Repair) systems (indeed, the comparison of Rosen's and Varela's theoretical self-referential schemes has generated almost a cottage industry; see, e.g., Letelier, Marín, and Mpodozis, 2003). To support his conceptualization of life as a primary, primordial, foundational self-referential structure, Varela first developed a "calculus" for self-reference built around the mathematical/logical apparatus of G. Spencer Brown and later offered a category-theoretical interpretation of self-reference based on William Lawvere's famous theorem on cartesian closed categories.

To be sure, self- and cross-reference have been involved in characterizing living organisms at least since Kant's coining of the term "self-organized" in his characterization of organisms in contrast to machines. For Kant, an organism is defined as, "one
in which every part is reciprocally both end and means. In such a product nothing is
in vain, without an end, or to be ascribed to a blind mechanism of nature.”… “...[T]he
part must be an organ producing the other parts—each, consequently, reciprocally
producing the others... Only under these conditions and upon these terms can such
a product be an organized and self-organized being, and, as such, be called a physical
end” (Kant quoted in Keller, 2008a: 48-49). We see cross-reference at the heart of the
living, parts referencing other parts, producing and supporting each other and in the
process even producing each other reciprocally. This kind of cross-referential structure
as a defining property of life stayed as a mainstay in biology, showing up in systems
approaches as feedback loop diagrams.

This approach no doubt supplies important insights into the nature of organisms,
but in my opinion, becomes especially problematic when taken to an extreme which is
what has been done in the respective theoretical biological standpoints found in the
work of Robert Rosen and Francisco Varela. In previous publications, I have critiqued
Rosen’s and Varela’s doctrines when they have veered too close to the kind of closure
that Priest refers to as the tightly wrapped self-referential structure at the heart of
paradoxes. As systems theories in biology, the great merit of Rosen’s, Varela’s, and
other similar approaches to theoretical biology, has been to erect fortifications pro-
tecting the province of biology in studying life against the encroachment of simplistic
input-output reductionist models. However, in so doing their extreme self-referential
scheme has had the effect of robbing “life itself” (the title of one of Rosen’s books) of
exactly what makes “life itself” life and not the parody of death one takes away from
their models. Strangely, although Varela’s and Rosen’s systems biology was supposed
to deal with life, their respective schemes leave out nothing less than: procreation
through self-reproduction, sexual ecstasy, evolutionary change, deep social encounter
and cooperation, ideals lifting us up so as to be able to hear our better angels, cre-
ative and religious experience, wonder at nature, our embeddedness in family, friends,
social networks. There is a kind of strange prudery in Rosen's and Varela's desiccated view of life where ecstasy is just too transcendent for their closure (the most withering criticism of autopoiesis is no doubt that offered by Rod Swenson, 1992).

One of the reasons for bringing this up is to point out how the formalism of self-
transcending constructions contrasts strongly with this closure scheme, this overly tightly wrapped self-referential closure which works against its transcendence-producing associations. Even though the representation of the self-referential mapping of the
rationals to themselves, i.e., the diagonal sequence, is negated through the negation
operator construction, this negation is not then trapped, rather the lower level micro-
level origin of the radically new number is what is negated. In the stc, accordingly,
negation is the way out of self-referential closure because it is not bound by self-
reference. Indeed, to the extent that the diagonal sequence is the coding par excellence of self-reference in Cantor's formalism, anti-diagonalization negates even this
self-reference.


It is for this reason, namely, that the stc formalism allows self-transcendence to
drive the emergence of the radically novel, that the conception of stc does not require
any kind of paraconsistent or dialetheic type of logic that allows contradictions
or paradoxes under certain circumstances (see Priest, 2002). Thus dialetheic logic is
Priest’s way out of closure. Yet the price to be paid is high, namely the inclusion of
paradox and contradiction into otherwise rational systems of thought and the con-
comitant frequent contortions of argument that seem required by that allowance. The
formalism of self-transcending constructions, though, doesn’t need such special logi-
cal apparatus since it brings about transcendence by its very constructional method.
Anytime it starts getting too close to paradox, beyond the flirting phase, it has re-
sources for avoiding a full embrace of the illogical.

Processes of emergence, just like the operations of the Cantorian stc, need to include this flirting with paradox as one of the sources of potency in generating radically novel outcomes by way of a series of transformations of substrates into emergent phenomena. The reason comes down to something very elementary
but typically overlooked concerning the nature of the processes responsible for emer-
gence: if radically unique outcomes are what is wanted, then there must be some-
thing radically unique about the processes leading to those outcomes. Morgan had
brought this down to what I think is the most elementary fact about these processes
of emergence: they display the need for some kind of a new start. This new start is like
a second chance, for if the destination is a radically new place, then trying to get there by following all the deductive rules dominating the anterior substrates simply won't work. In the face of all the current calls for renewed hylozoism, panpsychism, and the actualization of pre-existing propensities, processes of emergence must possess qualities that transcend the satiric explanation, on the part of one of Molière's characters, that the power of a sleeping draught is due to its "dormitive" properties.

Emergent wholes and the transformation of substrates

The property of being integrations, collectives, wholes on a higher macro-level in relation to the component aspect of micro-level substrates is one of the defining characteristics of emergent phenomena. This higher-level wholeness can be seen in all of the examples of emergence, from the macro-"quantum wave" of resistance-free flow of electric current in superconductivity, to the cooperative social networks that can emerge in concerted human action to accomplish a task, to the evolution of new higher-level organisms from the transformation of parts and functions. The proto-
emergentist C. L. Morgan regarded this feature as a "new relatedness" of the parts within
a whole. However, as the always insightful, emergence-friendly, complexity oriented
theoretical biologist and philosopher William Wimsatt (1997) has persistently pointed
out, an emergent whole is not a mere aggregate and therefore how an emergent
whole transcends aggregativity can be clarified with help from our stc formalism.

Imagine a tangram-like puzzle composed of 30 small plastic geometrical pieces such as triangles (all three types), squares, rectangles, trapezoids, circles, etc. Each
time these parts are combined in a different way, the yield will be a new “relatedness”,
sometimes only a little different such as one rectangle being in the place of one trian-
gle or, at other times, quite different new wholes may result depending on a mixture
of luck and ingenuity. But no matter how clever, the outcomes will remain aggregates
and not emergent wholes, and as aggregates each novel overall structure may exhibit
new relatedness but this actually goes nowhere in the right direction as to what con-
stitutes a radically novel emergent integration.

This is an obvious point but I want to push further as to why this tangram-type of
game cannot, by the very rules (again obvious) which define the pieces and the op-
eration, lead to a novel emergent, macro-level collective. The first rule in playing the tangram type of game, usually unstated, is that one cannot break the pieces apart, or join any of the resultant broken pieces back together to form a new shape. Indeed, if such a rule did not exist the point of the game would be lost: why start with 30 intact pieces and not just a bunch of ½ inch hard plastic sheets along with powerful shears to cut them, as well as some kind of glue to join them in innovative ways and thereby produce new pieces? If we consider the whole to be constructed as being at level l, then the variously shaped pieces as found in the box are at l-1. Hence, the rule against cutting and gluing can be restated as: only combinations of pieces at level l-1 are permitted. (By the way, another rule might disallow gluing, which otherwise could have led to the additional possibility of creating three-dimensional figures.)

These simple rules, though, provide two hints at what a shift beyond aggregation
to emergent integration must include. First, they indicate that the combinatory strategies
must include a transformation of the substrates or components, and not just a re-
combination of what already exists. That is, the integration of the emergent whole is
comprised of a new congruity made up of transformed substrates not just the new
relatedness of parts. In fact, in an important sense, the novelty of the new emergent
whole is just the congruity of the substrates effectuated by their being transformed.

The second hint is that processes of bringing about a radically novel whole that
transcends a mere aggregate must take place at a level beneath the substrate level.
Thus, if the whole is at level l and the substrates at level l-1, then the recombinatory
strategies must be at levels l-2 or lower. This means that the substrates themselves
must be able to be decomposed and it is the resulting decompositions which allow
for their transformation so as to be able to form a new congruity. This also means that
the new whole is not something over and above the substrates manifesting as their new relatedness; rather, the transformed substrates themselves are the new emergent whole. That is, the radically novel emergent whole is just the transformed substrates in their novel congruity.
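
The contrast can be caricatured in a few lines; the letter-strings below are purely illustrative stand-ins for pieces (level l-1) and their sub-components (level l-2), not a model of any actual emergent process:

    from itertools import permutations

    pieces = ["ab", "cd", "ef"]  # level l-1: the intact pieces in the box

    # Aggregation: only whole pieces may be rearranged; no piece ever changes.
    aggregate_wholes = {"".join(p) for p in permutations(pieces)}

    # Transformation: decompose to level l-2 (the letters), regroup them into new
    # pieces, and only then assemble a whole -- the substrates themselves change.
    transformed_pieces = ["ace", "bdf"]
    transformed_whole = "".join(transformed_pieces)
    assert sorted(transformed_whole) == sorted("".join(pieces))  # same l-2 components

    print(len(aggregate_wholes))                  # 6 possible aggregate wholes
    print(transformed_whole in aggregate_wholes)  # False: unreachable by recombining intact pieces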

We saw this same level scheme at work in the Cantorian stc in the section on how the
diagonal is formed from the component level comprising the substrates. That is, the
substrate rational number is at the level l-1 because it is a component of the diagonal
sequence which is at l. But it is not just the whole substrate at level l-1 which forms
the diagonal, it is the decomposed digit from the decimal expansion which is mixed
into the diagonal as an ordered pair indexed numeral. Again, it is this mixture of levels
that plays a key role in the transformation and thus the nature of the novel emergent
integration.

The need for decomposition into lower levels and the subsequent transformation
made possible is highlighted in the mathematical theory of categories, which Ehresmann and Vanbremeersch (2007) have incisively and illuminatingly applied to emergence. In their theory of complexification, which describes a category-theoretical coming into being of a hierarchy of emergent levels, each higher level is made possible because the substrates are multifolded, which refers to the way lower levels are so constituted as to be decomposable into multiple functions and multiple structures. Without multifoldedness the emergence of radical novelty is not imaginable.

The key is to keep in mind that the novel congruity is made novel precisely due
to the transformation that has occurred with the parts/substrates. Unless these sub-
strates have been substantively transformed, there can never be any genuinely novel wholeness, not even Morgan's "new relatedness". New relatedness connotes that the same old parts subsist but are now related to each other in a new way, but exactly how much novelty can this new relatedness amount to if the substrate parts standing in the new relatedness have themselves not changed?

This can be appreciated by contrasting this new perspective on emergent whole-making with the relation of parts and wholes in the traditional formal logic approach
known as mereology. In the latter, wholes are conceived as made-up of parts and
parts relate to one another as discrete particulate entities, remaining unchanged when
they may come together in a different way to constitute a new whole. There is no con-
ceptual possibility in this traditional view for the parts to have changed to the extent
necessary for new wholes to be radically novel or, in our terminology, uncomputable
from knowledge of the parts alone. Even in the mathematical fields of combinatorics
and permutations, combinatorial and recombinatorial operations are not permitted to change the actual nature of the particular units making up sets or combinations of objects, nor subsets nor sub-subsets; the rules of these fields are limited to merely
grouping together the objects of the sets according to certain rules. This is not in-
tended as a criticism of these mathematical fields which happen, on the contrary, to
be among the most challenging due to the lack of general principles and formulations
with the result that each new context calls for an enormous amount of ingenuity. In-
stead, the point is to call attention to the way parts and wholes are traditionally and
formally imagined, an image that unfortunately seems to have contaminated nearly all
discussions of emergent wholeness, especially downward causation.

In his critique of certain of the claims made in traditional mereology, the eminent
philosopher D. H. Mellor (2006) questions the entire strategy of basing conclusions
about or even having intuitions to be followed-up on a priori assumptions about part-
whole relations. For Mellor, trying to conclude something about parts/wholes before
investigation particular instances and their contexts is like trying to decide a priori
whether waves are longitudinal or transverse in their oscillations rather than look-
ing at what the specific waves of which we are interested are actually doing. In my
opinion, most hard core physicalist reductionists like Kim are steeped in a priori view
on parts and wholes that interfere with a clear view of what takes place in the “whole-
making” facet of emergence.

In contrast, I am suggesting that we pay careful heed to what the great Galen had
suggested in the following quotation (see Part 2 where I give credit to Ganeri’s bril-
liant work which brings Galen’s points to the central conceptual place they deserve to
be in relation to emergence, substrates, and transformation): “For anything constitut-
ed out of many things will be the same sort of thing the constituents happen to be…
it will not acquire any novel characteristic… But if the constituents were altered, trans-
formed, and changed in manifold ways, something of a different type could belong to
the composite that did not belong to the elements…something heterogeneous can-
not come from elements that do not change their qualities. But it is possible from
ones that do….” I would add here that not only do the “elements” (substrate) need to
change their qualities the processes of combining and recombining them must also
be changed from customary views of how such processes work.

Consequently, I am proposing that a careful examination of examples of emergence, not speculations, reveals a very different understanding of parts and wholes, one very effectively described by the Goethean scientist and philosopher Henri Bortoft
(1996: 12), “The whole is nowhere to be encountered by stepping back to take an
overview, for it is not over and above the parts, as if it were some superior, all-encom-
passing entity. The whole is to be encountered by stepping right into the parts.” The
emergent whole can be so encountered in the parts (substrates) and not, as Bortoft
further emphasizes, dominated by the whole, precisely because these substrates have
been radically transformed: the transformed parts comprising the novel emergent whole have been subsumed into this novel congruity, i.e., they no longer exist as they did before in the micro-level context. This is what I think
Humphries (1997) was getting at with his notion of emergence as “fusion” whereby
the individual parts are lost in the process. However, the prototype for this fusion
emergence offered by Humphries is quantum mechanical entanglement. But that is
truly a case of what the Scholastics decried: explaining what is obscure by appealing
to something even more obscure.

In summary, then, emergent wholes as congruities of transformed substrates are not pre-existing wholes waiting to be actualized by appropriate future conditions (as
holists and hylozoists/panpsychists maintain), are not brought about solely through
processes of self-organization as many complexity aficionados believe (although self-
organizing processes may play important roles in certain types of emergence or
certain phases of emergence), and cannot be computed, deduced, predicted entirely
from the initial conditions of substrates. In Cantor’s stc, the integration exhibited was
constructed to be that way whereas in natural emergence, wholeness results from nat-
ural processes and is possible because it is one of nature's capacities. Notice here that
a capacity and a propensity are not the same. Whereas the latter implies some sort of
impetus towards something, the former does not.

Conclusion: Emergence, self-transcending constructions and the reimagining of nature

Wittgenstein once remarked, "The dangerous, deceptive thing about the idea:
‘…the set … is not denumerable’ is that it makes the determination of a
concept—concept formation—look like a fact of nature.” The point of this
paper has certainly not been to claim that the Cantorian stc is a fact of nature. But I do
claim that it is a fact that there are natural processes utilizing natural capacities that
can produce emergent outcomes and that the “logic” of such processes can be expli-
cated according to the idea of self-transcending constructions.

The philosopher of mathematics and science, Philip Kitcher (1998) has suggested
that what was going on in the case of Cantor’s work was a mathematical revolution
on a par with those momentous scientific revolutions whereby the "world" of
the new theory is found to be incommensurable with the old one. Self-transcending
constructions are one result of this Cantorian revolution but they are not restricted to
only Cantor’s formalism. For example, above we briefly went over another kind of stc,
the action of making a Möbius strip, which serves to transform substrates into an emergent outcome with vastly different properties. The extent of such self-transcending
constructions is only limited by the ingenuity and determination of researchers and
theorizers in different fields.

In order to appreciate the importance of the formalism of self-transcending constructions in the world of mathematics I find it helpful to locate them in the layout of
modern mathematics (see Figure 5) put forward by the late great French mathemati-
cian René Thom (1997). Thom was interested in several things, one of which was a way
to distinguish the beautiful and the ugly in mathematics, terms which mathematicians
have found to have much utility in qualifying theorems and other mathematical work.
For example, it seems to me as an outsider that "beautiful" theorems also tend to be the ones which have the more profound implications and the more influence in shaping where the field is going.

Figure 5 The place of self-transcending constructions in Thom's layout

Of course, I had to diagram stc's as a diagonal downward to the right! However, I was also trying to illustrate how the self-transcending construction formalism cut
across many of the antitheses to be found in modern mathematics, thereby also il-
lustrating how the idea both binds and separates disparate as well as closely related
realms. For a truly brilliant exposition of many of the main issues being dealt with by
mathematicians for the past 125 years, see the work of the Colombian historian and
philosopher of mathematics Fernando Zalamea (e.g., 2009; for a different but also
incisive perspective on similar trends, cf., Corfield, 2003). Both Zalamea and Corfield
have managed to clear up some of the Miss Havisham-style dust and cobwebs obscuring the substantive issues, due to outmoded obsessions with long-dead issues from the so-called foundational crisis at the turn from the nineteenth into the twentieth century.

Of particular relevance to the use of the stc formalism to reimagine the trans-
formative processes of emergence are the themes of bound versus free, following
versus negating, and generative versus constrained. Thus, when considering above
what might be gained from applying a purely formal construction to the natural phe-
nomena of emergence, I offered the analogy of constructing by a constructor on one
side and natural processes proceeding by means of constraints on the other side. This,
in fact, is what an stc can supply when employed to account for processes of emer-
gence: radical novelty generation which from the perspective of a natural process of
emergence looks like the channeling potency of constraints.

Likewise, constructors constructing constructions are essentially free to create whatever they want as long, that is, as they are consistent when they need to be. Similar-
ly, natural processes are free to generate in all sorts of ways yet also remain bound by
the action of constraints on them. But, again, this only looks bound, if the natural pro-
cess in question has been presumed to exist in some form before the action of a con-
straint on it. In actuality, all natural processes are under the sway of constraints from
the get-go; it is not that they are free until some later stage when they become bound. This
means that much can be learned about these same natural processes by consider-
ing them from an angle that sees constraints at work all the time, even in supposed
spontaneous processes. This was one of the motivations behind Alicia Juarrero’s 2002
pioneering book, Dynamics in Action and my own, much later and much shorter paper
here in E:CO (Goldstein, 2011).

A related pertinent point concerns how uncomputability, and thus indirectly, self-
transcending constructions and emergence, has been interpreted to mean unformal-
izable or not algorithmically capable of being laid-out. Rosen, e.g., claimed his M/R
systems were unformalizable; a similar claim is found in de Lorenzana's (1993) work done in association with Stanley Salthe. This theme of the non-algorithmically formalizable
is also central to the claims about mentation transcending whatever can be computed
or discovered on the part of John Lucas and later Roger Penrose. Don’t worry, reader,
I am not about to enter this fray except for pointing out that, perhaps paradoxically,
the stc of Cantor as used by Gödel and Turing, is itself a formalism of uncountability,
undecidability, and uncomputability.

The logician Judson Webb (1980) has drawn attention to this claim about the Can-
torian stc on the part of the American mathematical logician Stephen Kleene (who
employed the Cantorian construction in his own highly significant work) who put it
this way, “…Gödel’s essential discovery was...[he] mechanized [formalized] the ...[anti-]
diagonal argument for incompleteness...the incompleteness theorem shows that as
soon as we have finished any specification of a formalism...we can, by reflecting on
that formalism ... discover a new truth...which not only could not have been discovered
working in that formalism, but –and this is the point usually overlooked –which pre-
sumably could not have been discovered independently of working with that formal-
ism”. According to Webb, these remarks suggest that anti-diagonalization itself is a
formalizable construction of how Gödel’s unprovability and Turing’s uncomputability
can be shown to come about.

Finally, in Part 2, in the context of making a point about the contemporary enriching feedback taking place between the idea of emergence and those scientific
endeavors employing the notion, I suggested replacing the term “causality” in the
quote “a” below (from Schlegel, 1974: 14) with the term “emergence” in order to gen-
erate quote "b."

a. "Causality is not an a priori principle, or set of principles, but is rather a general characteristic of scientific knowledge. As such, it partly sets the form of science; but also the history of causality amply demonstrates that the content of science responds back as a determinant of causal principles."
b. “Emergence is not an a priori principle, or set of principles, but is rather a general
characteristic of scientific knowledge. As such, it partly sets the form of science;
but also the history of emergence amply demonstrates that the content of
science responds back as a determinant of emergent principles.”


Here at the end of this three part paper on reimagining emergence, I suggest
another replacement but this time a double one: replacing “emergence” with “self-
transcending constructions” and “science” with “natural processes and capacities”:

c. "Self-transcending constructions are not a priori principles, or sets of principles, but are rather a general characteristic of our knowledge of natural processes and capacities. As such, they partly set the form of what we know about natural processes and capacities, but also the history of self-transcending constructions amply demonstrates that natural processes and their capacities respond back as a determinant of self-transcending constructional principles."

References
Becker, O. (1927). “Mathematische existenz,” Untersuchungen zur Logik und Ontologie
Mathematischer Phänomene (Jahrbuch für Philosophie und phänomenologische
Forschung), VIII: 440-809
Bedau, M. (1997) “Weak Emergence,” Philosophical Perspectives, ISSN 1520-8583, 11: 375-399.
Bennett, C.H. (1986). “On the nature and origin of complexity in discrete, homogeneous,
locally-interacting systems,” Foundations of Physics, ISSN 0015-9018, 16(6): 585-592.
Bennett, C.H. (1988). “Logical depth and physical complexity,” in R. Herken (ed.), The Universal
Turing Machine: A Half-Century Survey, ISBN 9783211826379, pp. 227-257.
Bernard-Weil, E. (1995), “Self-organization and emergence are some irrelevant concepts
without their association with the concepts of hetero-organization and immergence,”
Acta Biotheoretica, ISSN 0001-5342, 43(4): 351-362.
Berto, F. (2009). There’s Something about Gödel: The Complete Guide to the Incompleteness
Theorem, ISBN 9781405197670.
Bortoft, H. (1996). The Wholeness of Nature: Goethe's Way toward a Science of Conscious
Participation in Nature, ISBN 9780940262799.
Borges, J.L. (2000). Selected Nonfictions, ISBN 9780140290110.
Boschetti, F. and Gray, R. (2007). “Emergence and computability,” Emergence: Complexity &
Organization, ISSN 1521-3250, 9(1-2): 120-130.
Broad, C.D. (1925, 2013). The Mind and its Place in Nature, ISBN 9780415488259.
Cai, J.-Y. (2003). Lectures in Computational Complexity, http://pages.cs.wisc.
edu/~jyc/810notes/book.pdf.
Cantor, G. (1891). “On an elementary question of set theory,” in S. Lavine, Understanding the
Infinite, ISBN 9780674921177, pp. 99-102.
Chaitin, G. (2011). “How real are real numbers?” Manuscrito, ISSN 0100-6045, 34(1): 115-141.
Chapline, G., Hohlfeld, E., Laughlin, R.B., and Santiago, D.I. (n.d.) “Quantum phase transitions
and the breakdown of classical general relativity,” http://arxiv.org/abs/gr-qc/0012094.

Cohen, J. and Stewart, I. (1994). The Collapse of Chaos: Discovering Simplicity in a Complex
World, ISBN 9780670849833.
Corfield, D. (2003). Towards a Philosophy of Real Mathematics, ISBN 9780521817226.
Darley, V. (1994). “Emergent phenomena and complexity,” in R.A. Brooks and P. Maes (eds.),
Artificial Life IV: Proceedings of the Fourth International Workshop on the Synthesis and
Simulation of Living Systems, ISBN 9780262521901, pp. 411-416.
Dauben, J. (1979). Georg Cantor: His Mathematics and Philosophy of the Infinite, ISBN
9780674348714.
de Lorenzana, J.A. (1993). “The constructive universe and the evolutionary systems
framework,” in S. Salthe, Development and Evolution: Complexity and Change in Biology,
ISBN 9780262193351, pp. 291-308.
Demopoulos, W., and Clark, P. (2005). “The logicism of Frege, Dedekind, and Russell,” in
S. Shapiro (ed), The Oxford Handbook of Philosophy of Mathematics and Logic, ISBN
9780195148770, pp. 129-165.
Ehresmann, A.C. and Vanbremeersch, J.P. (2007). Memory Evolutive Systems: Hierarchy,
Emergence, ISBN 9780444522443.
FitzPatrick, P.J. (1966). “To Gödel via Babel,” Mind, ISSN 0026-4423, 75(299): 322-350.
Florian (2011). “Why does Cantor’s diagonal argument yield uncomputable numbers?” http://
math.stackexchange.com/questions/28393/why-does-cantors-diagonal-argument-yield-
uncomputable-numbers.
Floyd, J. (2012). “Wittgenstein’s diagonal argument: A variation on Cantor and Turing”, in
P. Dybjer, S. Lindström, E. Palmgren, and B.G. Sundholm (eds.), Epistemology versus
Ontology: Essays on the Philosophy and Foundations of Mathematics in Honour of Per
Martin-Löf, ISBN 9789400744349, pp. 25-44.
Franzen, T. (2005). Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse, ISBN
9781568812380.
Gödel, K., (1992). On Formally Undecidable Propositions of Principia Mathematica and Related
Systems, ISBN 9780486669809.
Gaifman, H. (2005). “Naming and diagonalization from Cantor to Gödel to Kleene,” Logic
Journal of IGPL, ISSN 1367-0751, 14(5): 709-728.
Gaifman, H. (2007). “Gödel’s incompleteness results,” http://www.columbia.edu/~hg17/Inc07-
chap0.pdf.
Goldstein, J. (2001) “Mathematics of philosophy or philosophy of mathematics?” Nonlinear
Dynamics, Psychology, and Life Sciences, ISSN 1090-0578, 5(3): 197-204.
Goldstein, J. (2002). “The singular nature of emergent levels: Suggestions for a theory of
Emergence,” Nonlinear Dynamics, Psychology, and Life Sciences, ISSN 1090-0578, 6(4):
293-309
Goldstein, J. (2003). “The construction of emergence order, or how to resist the temptation
of hylozoism,” Nonlinear Dynamics, Psychology, and Life Sciences, ISSN 1090-0578, 7(4):
295-314.


Goldstein, J. (2006). "Emergence, creative process, and self-transcending constructions," in K.A. Richardson (ed.), Managing Organizational Complexity: Philosophy, Theory, and
Application, ISBN 9781593113186, pp. 63-78.
Goldstein, J. (2011). “Probing the nature of complex systems: Parameters, modeling,
interventions-Part 1,” Emergence: Complexity & Organization, ISSN 1521-3250, 13(3): 94-
121.
Greene, M., and Depew, D. (2004). The Philosophy of Biology: An Episodic History, ISBN
9780521643801.
Grim, P., Mar, G., and St. Denis, P. (1998). The Philosophical Computer: Exploratory Essays in
Philosophical Computer Modeling, ISBN 9780262071857.
Halting problem. (2014). Wikipedia, http://en.wikipedia.org/wiki/Halting_problem.
Hellerstein, N. (1997). Diamond: A Paradox Logic, ISBN 9789814287135.
Hendry, R.F. (2010a). “Emergence vs. reduction in chemistry,” in C. Macdonald and G.
Macdonald (eds.), Emergence in Mind, ISBN 9780199583621, pp. 205-221.
Hendry, R.F. (2010b). “Ontological reduction and molecular structure,” Studies in History and
Philosophy of Modern Physics, ISSN 1355-2198, 41: 183-291.
Hintikka, J. (2000). On Gödel, ISBN 9780534575953.
Hodges, A. (2002, 2013). “Alan Turing,” Stanford Encyclopedia of Philosophy, http://www.
science.uva.nl/~seop/entries/turing/.
Hodges, A. (2012, Centenary Edition). Alan Turing: The Enigma, ISBN 9780691155647.
Hofstadter, D. (1979, 1999). Gödel, Escher, Bach: An Eternal Golden Braid, ISBN
9780394745022.
Hofstadter, D. (1985). Metamagical Themas: Questing for the Essence of Mind and Pattern,
ISBN 9780465045662.
Horava, P. (2009). “Spectral dimension of the universe in quantum gravity at a Lifshitz Point,”
http://arxiv.org/abs/0902.3657.
Humphries, P. (1997). “How properties emerge,” Philosophy of Science, ISSN 0031-8248, 64:
1-17.
Jacquette, D. (2002). “Diagonalization in logic and mathematics,” Handbook of Philosophical
Logic, ISBN 9789401704663, pp. 55-147
Juarrero, A. (2002). Dynamics in Action: Intentional Behavior as a Complex System, ISBN
9780262600477.
Kamke, E. (1950). Theory of Sets, ISBN 9780486601410.
Kampis, G. (1995). “Computability, self-reference, and self-amendment,” Communication and
Cognition - Artificial Intelligence, ISSN 0773-4182, 12(1-2): 91-109.
Kaufmann, F. (1978). The Infinite in Mathematics: Logico-Mathematical Writing, ISBN
9789027708472.
Keller, E.F. (2008a). “Organisms, machines, and thunderstorms: A history of self-organization,
Part one,” Historical Studies in the Natural Science, ISSN 1939-1811, 38(1): 45-75.

Keller, E.F. (2008b). “Organisms, machines, and thunderstorms: A history of self-organization,
Part two,” Historical Studies in the Natural Sciences, ISSN 1939-1811, 39(1): 1-31.
Kitcher, P. (1998). “Mathematical change and scientific change,” in T. Tymoczko (ed.), New
Directions in the Philosophy of Mathematics: An Anthology, ISBN 9780691034980, pp.
215-242.
Laughlin, R., and Pines, D. (2000). “The theory of everything,” Proceedings of the National
Academy of Sciences, ISSN 1091-6490, 97(1): 28-31.
Lavine, S. (1998). Understanding the Infinite, ISBN 9780674921177.
Letelier, J.C., Marín, G., and Mpodozis, J. (2003). "Autopoietic and (M,R) systems," Journal of
Theoretical Biology, ISSN 0022-5193, 222: 261-272.
Levine, J. (1983). “Materialism and qualia: The explanatory gap,” Pacific Philosophical
Quarterly, ISSN 1468-0114, 64: 354-361.
Mainwood, P. (2006). Is More Different? Emergent Properties in Physics, doctoral dissertation
Oxford University.
Mellor, D. (2006) “Wholes and parts: The limits of composition,” South African Journal of
Philosophy, ISSN 0258-0136, 25(2): 138-145.
Moore, A.W. (2001). The Infinite, ISBN 9780415252850.
Morrison, M. (2012). “Emergent physics and micro-ontology,” Philosophy of Science, ISSN
0031-8248, 79: 141-166.
Odifreddi, P. and Cooper, S. B. (2012) “Recursive functions,” Stanford Encyclopedia of
Philosophy, http://plato.stanford.edu/entries/recursive-functions/.
Oriti, D. (2007). “Group field theory as the microscopic description of the quantum spacetime
fluid: A new perspective on the continuum in quantum gravity,” http://arxiv.org/
abs/0710.3276.
Ormell, C. (2006). “The continuum: Russell’s moment of candor,” Philosophy, ISSN 0031-8191,
81(318): 659-668.
Pavarini, E., Koch, E. and Schollwölk, U. (2013). Emergent Phenomena in Correlated Matter,
ISBN 9783893368846.
Peregrin, J. (n.d.). “Diagonalization,” http://jarda.peregrin.cz/mybibl/PDFTxt/587.pdf.
Petzold, C. (2009). The Annotated Turing: A Guided Tour through Alan Turing’s Historic Paper
on Computability and the Turing Machine, ISBN 9780470229057.
Priest, G. (1994). “The structure of the paradoxes of self-reference,” Mind, ISSN 0026-4423,
103(409): 25-34.
Priest, G. (2002). Beyond the Limits of Thought, ISBN 9780199244218.
Rabinowitz, N. (2005). Emergence: An Algorithmic Formulation, Honors Thesis. University of
Western Australia.
Raja, N. (2009). "Yet another proof of Cantor's theorem," in J. Béziau and A. Costa-Leite (eds.), Dimensions of Logical Concepts, Coleção CLE, Vol. 54, Campinas, Brazil, pp. 1-10.


Roque (née Juarrero), A.J. (1988). "Non-linear phenomena, explanation and action," International Philosophical Quarterly, ISSN 0019-0365, 28(3): 247-255.
Ryan, A. (2006). “Emergence is coupled to scope, not level,” http://arxiv.org/pdf/nlin/0609011.
pdf.
Sainsbury, R.M. (2008). Paradoxes, ISBN 9780521720793.
Salthe, S. (1993). Development and Evolution: Complexity and Change in Biology, ISBN
9780262193351.
Sawyer, K. (2005). Social Emergence: Societies as Complex System, ISBN 9780521606370.
Seager, W. (2012). Natural Fabrications: Science, Emergence, and Consciousness, ISBN
9783642295980.
Shagrir, O. (2007). “Gödel on Turing on computability,” in A. Olszewski, J. Wolenski and R.
Janusz (eds.), Church’s Thesis After Seventy Years, ISBN 9783938793091, pp. 393-419.
Shankar, S. (1987). Wittgenstein and the Turning Point in the Philosophy of Mathematics, ISBN
9780887064838.
Simmons, K. (1990). “The diagonal argument and the liar,” Journal of Philosophical Logic, ISSN
0022-3611, 19: 277-303.
Sorkin, D.R. (2003). “Causal sets: Discrete gravity,” http://arxiv.org/abs/gr-qc/0309009.
Swenson, R. (1992). “Autocatakinetics, Yes; Autopoiesis, No: Steps toward a unified theory
of evolutionary ordering,” International Journal of General Systems, ISSN 0308-1079, 21:
207-228.
Thom, R. (1997). “The hylemorphic schema in mathematics,” in E. Agazzi and G. Darvas (eds.),
Philosophy of Mathematics Today, ISBN 9789401064002, pp. 101-113.
Tiles, M. (1989). The Philosophy of Set Theory, ISBN 9780486435206.
Turing, A. (1937). “On computable numbers, with an application to the
Entscheidungsproblem,” Proceedings London Mathematical Society (series 2), ISSN 0024-
6115, 42(1): 230-265.
Varela, F. (1974). “A calculus for self-reference,” International Journal of General Systems, ISSN
0308-1079, 2: 5-24.
Varela, F. (1984). “Self-reference and fixed points: A discussion and an extension of Lawvere’s
Theorem,” Acta Applicandae Mathematicae: An International Survey Journal on Applying
Mathematics and Mathematical Application, ISSN 0167-8019, 2(1): 1-19.
Wang, H. (1974). From Mathematics to Philosophy, ISBN 9780710076892.
Webb, J. (1980). Mechanism, Mentalism and Metamathematics: An Essay on Finitism, ISBN
9789048183579.
Weisstein, E. (n.d.) “Mobius strip,” http://mathworld.wolfram.com/MoebiusStrip.html.
Wimsatt, W. (1997). “Aggregativity: Reductive heuristics for finding emergence,” Philosophy of
Science (Proceedings), ISSN 0031-8248, 64: S372-384.
Yanofsky, N. (2003). "A universal approach to self-referential paradoxes, incompleteness and fixed points," Bulletin of Symbolic
Logic, ISSN 1079-8978, 9(3): 362-386.

Yudkowsky, E. (2007). “The futility of emergence,” http://lesswrong.com/lw/iv/the_futility_of_
emergence/.
Zalamea, F. (2013). Synthetic Philosophy of Contemporary Mathematics, ISBN 9780956775016.

Jeffrey A. Goldstein is Full Professor, Adelphi University; Author/editor of numerous books and papers including: Complexity and the Nexus of Leadership: Leveraging
Nonlinear Science to Create Ecologies of Innovation; Complexity Science and Social
Entrepreneurship: Adding Social Value through Systems Thinking; Complex Systems
Leadership Theory; Classic Complexity; Annual Volumes of Emergence: Complexity
&Organization; and The Unshackled Organization; Brainwaves. Co-editor-in-chief of
the journal Emergence: Complexity & Organization (since 2004); Member of Board of
Trustees of the journal Nonlinear Dynamics, Psychology, and the Life Sciences; Workshop
and Seminar Leader, Lecturing at eminent universities in countries throughout the
world including England, Canada, Russia, Israel, Sweden, Brazil, Norway, Italy, Cuba,
Greece, China, Germany, Spain, and Austria. Consultant for many public and private
organizations.
