116 | Goldstein
Reimagining emergence, Part 3: Uncomputability, transformation, and self-transcending constructions
While talking with a prominent classicist and philosopher around fifteen years ago, the idea of emergence happened to come up in the conversation, whereupon she suddenly and vehemently declared it a “weasel word.” This
left me a bit disconcerted since I had a great deal of respect for her erudition and
thinking skills, and I was increasingly drawn to what I took as the conceptual promise
of the idea of emergence. A “weasel word” refers to verbiage claiming to be saying
something specific and meaningful but turning out to be ambiguous and trite—the
metaphor coming from the observation that a weasel has a unique ability to suck an egg out of its shell while leaving the shell intact yet hollow. Of course, since its inception the idea of
emergence has been the subject of numerous kinds of criticism, some of it stemming
from stringent reductionists, some from the all too often deficient way emergence has
been phrased and framed by its own adherents. And much more recently, as I have
called attention to previously, the credibility of emergence is being undermined, not
by its opponents, but instead by recent converts to it from within the particle physics and cosmology camp who only a short time before had considered the idea anathema. Here are four excerpts from papers which exemplify this usage: “… our current
understanding of string theory, in which the macroscopic spacetime … can often be
viewed as an emergent concept” (Hořava, 2009); “The notion that general relativity might be an emergent property in a condensed-matter-like quantum theory of gravity” (Chapline et al., 2000); “We then expect the emergent superspace to be some sort
of group manifold… “ (Oriti, 200); “A basic tenet of causet theory is that spacetime
does not exist at the most fundamental level, that it is an ‘emergent’ concept …” (Sor-
kin, 2003).
As far as I have been able to tell from the papers containing these excerpts plus
similar ones found at arXiv, “emergent” is being used as a stand-in for “being derived from,” “secondary to,” or to point to phenomena on a “higher” level “caused” by
more fundamental or underlying dynamics. Although these connotations might make
it seem that emergents are merely epiphenomena, a position that has in fact come
and gone over the years among the idea’s critics, the current embracers are instead
touting the notion of an emergent as a significant step toward resolving long-standing “origin” problems such as how space, time, space-time, gravity, and so on
have come about.
Now suppose I should say that gravity is explained by “arisence” or that chemistry is an
“arising phenomenon” from physics, and [claim this is] explaining something important.
… what more do we know after we say something is emergent?… It feels like you believe
a new fact, but you don’t anticipate any different outcomes. Your curiosity feels sated,
but it has not been fed...
What is very easy to gloss over is that the very claim that emergents exist at all
requires at least a modicum of understanding of those processes which bring them
about. To simply assume some universal principle of “emergence” as a catch-all explanation is as weasel-worded as the graduate student in Molière’s play “The Imaginary Invalid” who explained the efficacy of opium in putting people to sleep as due to its containing a “dormitive principle.” Unless one can imagine how emergence happens, any claims for its explanatory significance should fade away.
The English philosopher C. D. Broad (1925; see also the clear presentation of Broad’s tenets in Mainwood, 2006) was the most philosophically rigorous among the Emergent Evolutionists.
It might have been anticipated that Emergent Evolutionism, coming along a half
century after Lewes, would further elaborate the account of the processes of emergence begun
by Mill and Lewes. This didn’t take place for several reasons. First, since emergence
was being defined in the via negativa fashion of not-predictable, not-deducible, not-
mechanical (the preferential term in early emergentism) and not-computable, this had
to result in an image of emergence as mostly not explicable, an attitude not particu-
larly helpful in spurring on imaginal endeavors to account for it. Not explicable is
equivalent to not being imaginable and this deficit of the imagination was emblematic
both within and outside the camp of emergentists. One of the culprits for this via negativa was the Bergsonian-Manichean perspective of contrasting drastically antithetical categories, such as the mechanical and the vital.
Second, before the advent of the sciences of complex systems, there was a
dearth of suitable laboratory means for studying emergence. In fact, how was such a
laboratory for emergence even conceivable if the primary prototypes for emergence
were only momentous, even “cosmic” origins, e.g., the origin of life, the origin of sentience, even the origin of space and time (way before modern cosmology this was a preoccupation of the Emergent Evolutionist Samuel Alexander)? Only recently do we
have laboratories for emergence—computational, chemical, biological, social and so
on—where we can observe emergence as it is happening.
Third was the proclivity of some of emergence’s most prominent proponents
to appeal to more-than-natural sources to explain how emergents emerge. Some
of this proclivity stemmed no doubt from the dramatic nature of the prototypes of
emergence which would require equally momentous causes. Morgan, for example,
posed at least two such more-than-natural origins of emergents: a supra-naturally
sourced “Directing Activity” behind the leaps of emergence; and a differentiation be-
tween naturalistically conceived causation of ordinary change and the supra-naturally
driven causality behind emergent saltations, although I cannot fathom, after repeated
attempts, what he was trying to say except some kind of strange occasionalism. At
the same time, Alexander was espousing “natural piety” as the sentiment and com-
mitment appropriate for the study of emergence (I am supposing this phrase referred
to something more spiritually specific and laudable in that day and age). There was
also Alexander’s proposal for a cosmic nisus energizing and guiding emergence in
the direction of a final apotheosis of an emergent deity, an idea that was to influence
Whitehead’s later, more overtly theological take on emergence.
Not being an acolyte of the appallingly uninformed cadre known as the “new athe-
ists,” I find nothing conceptually suspect here except that insult to theology called “the
God of the Gaps”, which has accomplished the stunning feat of denigrating nature
and the divine at the same time. I am not referring here to the overt and intentional
appropriations of emergence found in “cosmic” metaphysical/theological emergentist
positions, for example, the Emergent Evolutionist, John Boodin’s neo-“harmony of the
spheres” or the cosmic emergence system of the paleontologist and Jesuit Teilhard de
Chardin or even certain recent Whiteheadian process theologies. Instead, I am refer-
ring to the kind of “God of the gaps” incisively parodied in Sidney Harris’s well-known
cartoon depicting two scientists/mathematicians standing in front of a chalk board
filled with equations. One of them is pointing to a gap between two sets of equations
where it is written in chalk “THEN A MIRACLE OCCURS” and the caption beneath has
one of the scientists saying, “I think you should be more explicit here …”
Yet, some kind of diremption seems a necessary element in emergence to the
extent that a defining characteristic of emergent phenomena is the presence
of an explanatory gap. In scientific and philosophical inquiries a chief aim is the elimination of gaps in explanation; indeed, these gaps are what prod inquiry to begin with. But in the case of emergence, the presence of an explanatory gap is what underscores the radically novel nature of emergent phenomena (I am restricting my take to diachronic and not synchronic emergence since, as far as I can tell, the latter is relevant to philosophy of mind, consciousness, and downward causation, which I intentionally excluded from my considerations in Parts 1 and 2). The explanatory gap, in other words, is what indicates the presence of a transformation that eludes
traditional causal and change processes that allow deducibility and predictability, a
process of transformation that involves higher level organizing factors and constraints
which transcend micro-level deduction.
In some ways, the explanatory gap of emergence might seem akin to the explanatory gap that the philosopher Joe Levine (1983) called attention to between experiential qualia or consciousness and physicalist explanations. Motivating Levine, and later David Chalmers with his “hard problem of consciousness,” is a pressure emanating from a commitment to a “closed” physicalist orientation which doesn’t allow the presence of something like experiential qualia, so different in character from the merely physical. It is a “hard” problem due to the explanatory pressure on the physicalist viewpoint to incorporate what seems to be so unphysical.
In the case of the explanatory gap of emergence, though, we don’t have the same situation of some phenomena on one side of the divide and some explanatory scheme on the other. Instead, we have the divide between the origin and the terminus of processes that take the origin phenomena of substrates and transform them into the terminus phenomena of emergent outcomes. Yes, the terminus has radically different properties than the origin, but that is precisely why there is a gap. If this situation were just like the explanatory gap of consciousness, there would be pressure from some ontological or metaphysical assumption that both origin and terminus fall under the same scheme. The explanatory gap of emergence does not carry that kind of pressure.
The explanatory gap of consciousness, though, does carry an important lesson for imagining the processes of emergence in regard to the nature of the substrates. Hard-core physicalists keep coming back to primordial particulate physical objects as the
kind of thing that might count as substrates. But I don’t see how such physicalist con-
ceived substrates make sense outside of physicalist emergence. For example, if we
are talking about social emergence (see Sawyer, 2005) particulate physical units are
simply not pertinent as substrates. It would be like claiming elephants emerge out of
stainless steel ball bearings. The primordial substrate units of social emergence would
instead need to be human beings and already existing social groupings of various
types. One could, of course, take the tack of rejecting the possibility of social emer-
gence altogether because of the lack of particulate physical social substrates. It seems
obvious to me, however, as it was to George Herbert Mead and his theory of the social
self and social emergence, that certain social groupings are indeed emergent integrations with the potential of behaving in radically novel ways. In terms of consciousness, although I have questioned the idea of consciousness itself being an emergent phenomenon, I didn’t at the same time renounce the possibility of specific contents of consciousness being emergent, e.g., thoughts, feelings, perceptions, and so on. If the latter are emergent, then one would likewise not be on the right track in believing that the viable substrates of these contents of consciousness are particulate physical objects like quarks, and so forth.
There’s no question that a chief spark for the resurgence of interest in emergence in the past few decades has been the computational emergence found
gence in the past few decades has been the computational emergence found
in artificial life and comparable simulations. Because of its computational infra-
structure, this type of emergence has spawned a variety of computationally related
constructs and methods. The idea of defining emergent phenomena in terms of un-
computability comes in part from this computational setting, the other main sources
being computational complexity theory in which Alan Turing’s work on uncomputable
numbers has provided a cornerstone, and the development of various measures of
complexity, e.g., the metric of algorithmic complexity.
As we’ll see below, none of the early work done on uncomputability had anything directly or intentionally to do with emergence, nor, looking back on it, was there any such indication. These lines of work were pursued in very different camps with different agendas
and different objectives. It was only afterwards that the idea started being applied to
characterize emergent phenomena and that was for the most part in the wake of the
limitative theorems of Gödel and Turing on undecidability and uncomputability. That
work in logic, in turn, relied on a certain mathematical construction going back to the
great mathematician Georg Cantor in the third quarter of the nineteenth century on
the existence of transfinite sets (these terms will be defined below) but Cantor’s work
also had nothing to do with emergence and there is no reason to think he was even
aware of the notion or similar notions in Germany at the time (e.g., by Wundt and others), nor, had he been aware of them, would he likely have paid them the slightest attention.
… (see Darley, 1994). For instance, algorithmic complexity (see just about anything
written by Chaitin who is one of the pioneers in its development; also see Rabinowicz,
2005, for a clear exposition of the main tenets) measures the length of the shortest
possible description of those computational instructions (or bit string if we are talking
about the computational massaging of data streams) able to reproduce the outcome
being measured. An example: a random bit string generated by a stochastic process has an algorithmic complexity roughly the size of the bit string itself, since there is no shorter set of steps to generate that particular bit string than the actual run of the randomization. The algorithmic complexity in this case can’t be compressed but must be as long as the ostensive manifestation of what the random series displays.
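This point about random strings can be illustrated computationally. True algorithmic (Kolmogorov) complexity is uncomputable, but the length of a compressed encoding is a standard computable upper-bound proxy for it; the following minimal Python sketch (my illustration, not from the literature cited above) shows a patterned string collapsing under compression while a random string of the same length barely budges:

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    """Length in bytes of the zlib-compressed bit string: a computable
    upper-bound proxy for its (uncomputable) algorithmic complexity."""
    return len(zlib.compress(bits.encode("ascii"), 9))

random.seed(0)
random_bits = "".join(random.choice("01") for _ in range(4096))
patterned_bits = "01" * 2048  # same length, but highly regular

# The regular string compresses to almost nothing; the random string
# stays near its information content -- its shortest description is
# essentially itself.
print(compressed_size(patterned_bits) < compressed_size(random_bits))
```

The comparison prints `True`: no description of the random string meaningfully shorter than the string itself is available to the compressor.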
Darley defined “a true emergent phenomenon” as one for which the optimal
means of prediction is the actual simulation or running of the program itself. This
means that in the case of emergence some sort of accurate analytic deduction from
pre-given parameters will not yield any improvement in the ability to predict what will
happen over just observing what happens. Of course, this is just a way of expressing
the ostensiveness property of emergent phenomena. For Darley, two implications of
this definition are: 1. Emergence involves a kind of “phase change” in the amount of
computation necessary, that is, it must consist of much more than a simple unfolding of what is given; 2. The large-scale behavior of a system emergent out of lower-level interacting substrates will not admit of any clear explanation in terms of those interacting substrates. In our terminology, this means that for “true emergent
phenomena” there must be an explanatory gap between substrate and emergent out-
comes.
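Darley’s definition can be made concrete with a toy simulation. In the sketch below (my illustration, not Darley’s own example), an elementary cellular automaton such as Wolfram’s Rule 110 is “predicted” simply by running its update rule step by step; no analytic shortcut over the simulation is assumed to exist:

```python
# step() applies one synchronous update of a 1-D binary cellular automaton
# with periodic boundaries; the bits of the rule number encode the lookup
# table over the eight possible three-cell neighborhoods.
def step(cells, rule=110):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)
    return out

def run(cells, steps):
    for _ in range(steps):
        cells = step(cells)
    return cells

# Start from a single live cell and simply watch what unfolds.
initial = [0] * 31
initial[15] = 1
final = run(initial, 20)
print(sum(final))  # live-cell count after 20 steps of straight simulation
```

The only route to `final` offered here is the run itself, which is exactly the ostensiveness property: observing what happens is the optimal means of prediction.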
We can trace the development of uncomputability back to the aforementioned mathematical formalism devised by Cantor in 1891 in his proof for the existence of transfinite sets, that is, sets containing more members than countably infinite sets, whose members can be counted off by following the counting numbers 1, 2, 3, 4, 5, 6, 7, … (see Cantor, 1891; Dauben, 1979; Lavine, 1994). Since our concern in this paper does not involve the nature of infinity, we can concentrate instead on Cantor’s method, which went on to play an indispensable role in the later theorems on undecidability and uncomputability.
It was from a commentary on Cantor’s proof that I first came upon the phrase “self-transcending construction,” which I subsequently found could function as an especially apt phrase for the processes of emergence.
Second, the phrase “self-transcending construction” (stc) that was used to de-
scribe how Cantor’s proof method worked carried certain associations which seemed
expressly suitable for describing emergence as well. Chief among these had to do
with the difference between the prefix “self-“ in “self-organization” and in “self-tran-
scending constructions.” The “self-“ in “self-organization” indicates the locus of what is
driving the organizing or structuring activities taking place, an image of an internally-
driven dynamism coming out of a system’s own inner resources and accordingly not
resulting from an externally-imposed order or organization (see Bernard-Weil, 1995).
This image of self-organization has persisted since the time of Kant’s original phrase
“self-organized” and its later extension by Schelling (see Keller, 2008a, 2008b; Heuser-
Kessler, 1992; Greene & Depew, 2004). This sense of inner agency also carries asso-
ciations of spontaneity (or in the extreme form “order-for-free”) since the system is
imagined as not requiring an external imposition for its new order.
… claim of it being a spontaneous inner-driven process. Moreover, the phrase has, since
Kant, carried an emphasis on self-regulation, the apotheosis of which can be found in
the cybernetics idea of systems driven to restore equilibrium after being disturbed.
The image of an equilibrium-seeking system is quite difficult to square with emergentist claims for the possibility of generating radically novel outcomes and not mere
restoration, although there have been a few valiant, even persuasive, reworkings of
self-organization to account for novelty production such as Kampis’s (1995) notion of
self-modifying systems.
My claim is that this mathematical excursion can help us uncover a “logical” template or scaffolding involved in Cantor’s methods, one later incorporated into Turing’s proof on uncomputable numbers, and that it demonstrates how a radically novel, uncomputable outcome can be generated. The strategy here is the following: by unearthing the “logic” underlying a mathematical production of an uncomputable outcome, we gain insight into how this same “logic” may guide the production of an uncomputable outcome in the non-mathematical realm of emergence. Of course, moving from the purely mathematical sphere to that of emergence will require appropriate translational means to open the imagination up to that “logic.”
I think this breach is not insurmountable if we look at the situation in the light of the following little “thought experiment”: how could one tell the difference between the action or effect of a higher-level organizing constraint (like that put forward in Juarrero, 2002; Goldstein, 2011) and the action of a natural process? Look, for example, at the
natural processes of emergence at work generating, forming, and shaping the famous hexagonally shaped Bénard convection cells which Prigogine and others so intensively and extensively studied. The usual story is to account for the emergent form
of the cells by invoking the “mechanism” of self-organization, a purported spontane-
ous natural process. But what do “natural” and “spontaneous” mean in this context?
Presumably, they imply that nature will take its course when left to its own devices in
contrast to shaping that is intentionally imposed or constructed to be a certain way
such as hexagonally shaped. These emergent forms are explained as coming about
when the system is driven far-from-thermodynamic equilibrium through the means of
heat being applied so that what then results is a more efficient way of heat transfer-
ence through the system by way of convection rather than diffusion.
But why the specific emergent order of hexagonal cells? One answer is the influ-
ence of the constraining effect of the size and shape of the container in which the
“self-organization” takes place. It turns out that the influence of the constraints in
shaping the order of the emergent cells is much greater than that of the physical con-
tainer, an influence that can, from a not too great shift of perspective, be likened to a
constructor constructing the order to be a certain way. In Goldstein (2011), I discussed two types of higher-level organizing constraints acting in this system: one the mathematical constraint involved in packings on the plane or in three-dimensional space, the other having to do with certain differential topological constraints. Here I will only remark on the first, the geometrical constraints on packing circles on a two-dimensional plane surface.
According to the account given by D’Arcy Thompson, circles are most closely packed via six circles around a central one when uniform stress forces are operative, in this case emanating from growth or expansion within and a uniform constricting pressure from outside. Such constraints will “push” the circles from their contiguous point in common to lines representing the surfaces of contact, a process which thereby converts the closely packed circles into hexagonally shaped cells. Thompson went so far as to assert, in a manner quite prescient of the idea of universality in phase transitions and quantum protectorates, to be mentioned below, that the micro-level details about the inner or outer forces were irrelevant to the ensuing hexagonal shape as long as they were uniform in their action and thereby aimed at a state in which the surface tends towards an area minimum. Thompson also pointed out that
the brother of Lord Kelvin (of the Second Law of Thermodynamics fame), James Thomson, saw a similar “tessellated structure” in the soapy water of a wash tub, with partitions meeting three at a time as at the vertices of a hexagon. Sounding like a modern complexity aficionado of self-organization, Thompson stated that Bénard’s tourbillons cellulaires “make themselves,” but he definitely didn’t mean that they “make themselves”
without the presence of the constraints which he had spent so much creative intel-
ligence in revealing.
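The packing constraint Thompson appeals to can be stated quantitatively. As an aside not in the original text (these are standard results, not Thompson’s own figures):

```latex
% Hexagonal ("six around one") packing attains the greatest possible
% density for equal circles in the plane (Thue's theorem):
\[
  \delta_{\mathrm{hex}} = \frac{\pi}{2\sqrt{3}} \approx 0.9069 .
\]
% And when uniformly pressed circles are converted into polygons that
% tile the plane, the hexagonal tiling minimizes total boundary per
% unit area (the honeycomb conjecture, proved by T. Hales), which is
% the tendency toward an area minimum that Thompson describes.
```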
I don’t think it is a particularly far cry from talk about cells making themselves
to them being constructed under the influence of constraints. There are higher-order pattern-organizing constraints that affect whatever is going on, so that “natural processes” are equivalent to processes under the influence of constraints from the very first instance these processes originate. Similarly, there are no phenomena at
all, whether substrate or emergent, which are not natural processes acting and be-
ing shaped by the contextual ambience of the constraints making or constructing the
order of the emergent phenomena. Hence insight into formal operations under the sway of intentionally constructed constraining influences is, at least phenomenologically, no different from insight into natural processes in natural systems under the sway of constraints which channel the construction of emergent order.
As mentioned, I first came upon the expression “self-transcending constructions” in a description of Cantor’s method offered by the German mathematician and historian Oskar Becker (quoted by the Austrian-American philosopher
of mathematics and law Felix Kaufmann, 1978). Becker coined the term as a neutral
description for the ingenious mathematical construction Cantor devised in his proof
for the existence of transfinite sets. Although Cantor’s formalism turned out to be
of paramount importance in twentieth century mathematics and logic, it had noth-
ing intentionally or overtly to do with emergence per se. What had caught my atten-
tion, though, was that the method called by Becker a “self-transcending construction”
enabled Cantor’s formalism to generate an outcome with radically novel features in
comparison to the substrates from which it was generated.
…no construction can ever lead beyond the domain determined by the principle underlying it… the diagonal procedure [see below on “diagonalization”] … will lead to more and more new ‘mathematical objects’, but we must at each stage remain within the framework of the most general formation law according to which the progression runs. The progression is determined as an unfolding of this and no other law… (p. 135)
(By the way, Kaufmann’s own logic here was incorrect since what Cantor wanted to
prove might have been proven by a different, sounder method.)
But the soundness of Cantor’s conclusion is not relevant to our purpose which is
instead to probe how this stc was capable of generating radically novel outcomes.
Consider Kaufmann’s term “unfolding” from the passage above. It brings to mind the
same term with a cognate meaning discussed in Part 1: Roque’s exposition of the
deductive-nomological approach to reduction as an explanation involving an unfold-
ing on a planar surface of something that had already been a priori convoluted. Such
an image presumes predictability and deducibility or, as we now can call it, “computability.” For Kaufmann, a mathematical construction such as Cantor’s self-transcending construction must conform to the same circumscriptions as a deductive-nomological explanation: it is not allowed to involve anything but an unfolding of what has already been folded up.
As I delved deeper into how Cantor’s stc operated, it dawned on me that it worked precisely because it did not fit Kaufmann’s image of unfolding. That is, it had to depart from this unfolding enough for radically novel outcomes to ensue. Nevertheless, I was still puzzled about whether Kaufmann’s disparaging appraisal of it was correct, that “no construction can ever lead beyond the domain determined by the principle underlying
it”? As we’ll see below, our exploration of Cantor’s formalism will consequently also
need to inquire into the issue of contradictoriness and the related notion of paradox.
In order to investigate these questions further, it is necessary to go into some depth
and detail as to how Cantor’s method operated (although the following will get a bit
technical, it will not require anything more than elementary mathematics).
In his earlier research Cantor had become interested in the continuity of continu-
ous functions, particularly in the magnitude or cardinality of the sets of the points
representing continuous functions, e.g., the cardinality of this set {1, 3, 6, 9, 100, 2112,
50000456} is seven because it has seven members. The property of cardinality does
not have anything to do with the nature of the sets’ members nor the criteria for set
membership. Certain cardinalities have become standard such as the countably infi-
nite cardinality of the set of all counting numbers {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 …} (the
three final dots indicating the counting can go on indefinitely). Later in his famous
1891 proof of transfinite sets (there were earlier, more indirect versions), Cantor want-
ed to know if the set of rational numbers (i.e., numbers made from ratios of integers
such as 5/6, 3/4, …) and the set of real numbers (all the rationals plus the irrationals,
transcendentals and so forth) have the same cardinality. It was here that Cantor in-
troduced his famous anti-diagonal construction or anti-diagonalization method (as
I’ll justify below, I add the prefix “anti-“ to be more accurate) in order to generate a
radically novel real number (the analogy is to the emergent outcome) from a set of rational numbers (the analogy is to the lower-level substrates), so that his stc represented a transformational process from substrate to outcome.
He had already shown that the set of rationals was countable by demonstrating how to match, count off, or map each rational number to each counting number in a manner that would neither allow repetition (i.e., 2/3, 4/6, and 12/18 are counted only once since they are the same number) nor leave out any of the rational numbers. A mapping operation consists of a function that takes each member of one set to a member of another set. Establishing that two sets have the same cardinality, in the case of the rationals, for instance, can come about by way of a map that shows a one-to-one correspondence of the rationals to the other set, e.g., the counting numbers.
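Such a repetition-free matching of rationals to counting numbers can be sketched in a few lines of Python (an illustration in the spirit of Cantor’s counting, not his original zig-zag enumeration): walk through denominators in order and keep only fractions already in lowest terms, so that 2/3, 4/6, and 12/18 are counted a single time, as 2/3.

```python
from math import gcd

def rationals_in_unit_interval():
    """Enumerate every rational in (0, 1) exactly once: for each
    denominator q, yield only numerators p with gcd(p, q) == 1,
    i.e., fractions in lowest terms."""
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield (p, q)
        q += 1

enum = rationals_in_unit_interval()
first_eight = [next(enum) for _ in range(8)]
print(first_eight)
# [(1, 2), (1, 3), (2, 3), (1, 4), (3, 4), (1, 5), (2, 5), (3, 5)]
```

The position of each fraction in this stream is exactly its counting number, which is the one-to-one correspondence the text describes; note that 2/4 is skipped because it repeats 1/2.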
Today, Cantor’s stc is usually depicted as an array of three elements (see Figure 2 below): a vertical column exhibiting an exhaustive list of rational numbers; horizontal rows of the decimal expansion of each of the rational numbers on the vertical; and a diagonal sequence running through the array.
Figure 2 is one such possible array, with H referring to the decimal expansion going from left to right, V referring to the list of rational numbers going down (only rationals between 0 and 1 are used because the cardinality of the set of rationals between 0 and 1 is equivalent to that of all rational numbers, a strange characteristic of infinite sets), a diagonal sequence marked by underlining, and a vertical list to the far left
which indicates the matching between the counting numbers and the rationals (the
magnitude of a set is arrived at by a type of counting using the counting numbers).
What each horizontal row contains is simply the decimal rendition of the same fraction on the vertical list; for example, the vertically listed rational number 1/3 becomes the decimal expansion 0.3333333333… This entails that the sets of rationals making up both the vertical columns and the horizontal rows have the same cardinality, that of the counting numbers, the kind of “simple” infinity usually thought of in regard to the notion of a potential infinity.
It is important to notice that, because the vertical columns and the horizontal rows both contain the set of the rational numbers, and because the diagonal sequence (the underlined numbers in Figure 2) marks the intersection representing the mapping of these two sets against one another, the array can be thought of as a
self-referential mapping/matching of the set of the rational numbers to itself. In other words, each rational number is matched to, or refers to, every rational number including itself. Furthermore, the diagonal sequence can be interpreted as a kind of code of self-reference, since the numerical values on the diagonal represent the points of intersection constructed by the self-referential mapping of the set of rationals onto itself (see Hofstadter, 1985). We’ll say more below about how the diagonal is composed as a representation of self-reference.
Since it is easy to get lost among the trees and lose sight of the forest, let us recall the purpose of laying out Cantor’s stc: we are appropriating it as a logical formalism
that, as we’ll shortly see, represents the generation of a radically novel outcome. This
formalism is intended to serve as an aid for imagining the passage from substrate to
emergent outcome according to the following presumptions: the rational numbers
are the lower level, anterior substrates which are set up in the array in order to display
the countable cardinality property of the set of rationals; the array itself serves as a
conceptual device which both indicates an exhaustive and complete list representing
a self-referential mapping of the set of the rational numbers to itself and how the diagonal sequence is established as a kind of integration of the separate H and V sides of
the array, this integration acting as a kind of “code” of self-reference. The diagonal is
consequently the representation of a combinatory operation which plays a key role in
the process of radical novelty generation in Cantor’s stc.
Here is an example of how the diagonal sequence in Figure 2 is formed: the second fraction/rational number down the vertical list (counting #2) is 1/3, whose decimal expansion as shown in the figure is 0.3333333333… The diagonal is made precisely by going down and to the right the exact same number of places: here, two down and two to the right, landing on the second digit of that expansion.
In each case, then, we go down and to the right the same number of digit places.
The resulting diagonal sequence then represents a mapping of the rational numbers
on the vertical and the rational numbers on the horizontal, the mapping being a way
to talk about the intersection, the point of meeting, the place where the previously
separate component is integrated, so to speak, with another cognate component. As
pointed out by Simmons (1990) there is nothing sacred about using a diagonal form
like that shown by the underlined numbers in Figure 2; the only crucial requirement
is that the diagonal is constructed so as not to leave out any specific mapping of V
to H, that is, captures all the rationals on both the horizontal and vertical means of
representation. By the way, following mathematical practice, we are to accept that this
diagonal sequence continues indefinitely, an infinitely large set (the three dots indicat-
ing this) just as the vertical and horizontal lists are.
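The rule just described, go down n and to the right n places, amounts to taking the nth digit of the nth listed expansion. A minimal sketch, again assuming the Figure-2-style listing order described above (the helper names are illustrative):

```python
from fractions import Fraction
from math import gcd

def rationals():
    # Figure-2-style listing of the rationals in (0, 1): 1/2, 1/3, 2/3, 1/4, ...
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield Fraction(p, q)
        q += 1

def nth_digit(frac, n):
    # nth decimal digit of frac, by long division
    rem = frac.numerator
    for _ in range(n):
        d, rem = divmod(rem * 10, frac.denominator)
    return d

def diagonal(k):
    # first k entries of the diagonal: digit n of the nth listed rational
    gen = rationals()
    return [nth_digit(next(gen), n + 1) for n in range(k)]

print(diagonal(8))   # with this listing: [5, 3, 6, 0, 0, 0, 0, 0]
```

With this ordering the sequence begins 5, 3, 6, 0, …, agreeing with the underlined digits the text reads off Figure 2; a different but equally exhaustive listing would yield a different, equally serviceable diagonal.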
The diagonal sequence integrates the horizontal and vertical sets of numbers and does so in a way that is exhaustive of all possible permutations of the
two values found respectively on the horizontal and vertical sides. If put in the form of
algebraic notation (which Cantor’s proof was phrased in), the number at the diagonal
would become a variable with an ordered pair of subscripts, say, D_{h,v} as in Figure 3.
The first subscript h stands for the numerical value that is h steps out from the left
on the horizontal decimal expansion of the rational number which in turn is v steps
down the vertical list of rational numbers. The diagonal sequence, consequently, con-
tains those variables (or numerical values) when the two subscripts h and v are identi-
cal. This implies that the diagonal construction, unlike either the horizontal or the ver-
tical sequences, is an integration or melding of the previously isolated situation of the
horizontal and vertical sequences (for similar perspectives, see also Hofstadter, 1979,
1985; Webb, 1980). This might seem a bit confusing, but I think a large measure of this confusion stems from the diagonal sequence being a confounding of the previously separate rows and columns, “confounding” in the sense of a “pouring together” of what was previously separate, which in turn implies a transformation of the substrates (Goldstein, 2002).
This is all well and good but what are all the appurtenances of the arrays supposed
to accomplish? To answer that we have to recognize the specific reductio ad absurdum
proof format that Cantor used since it will show what was motivating him to the next
crucial steps. In a reductio ad absurdum type of proof, in order to prove a proposition one provisionally assumes its negation and then shows that this assumption leads to a contradiction, thereby establishing the original proposition.
With the diagonal construction, Cantor now had the substrate at the right meso-
scopic level since if we consider the digits on H or V to be at the microscopic level,
then a shift from H or V to the diagonal represents a movement upward to the meso-
scopic level. It was at this juncture that Cantor was now in a position to construct his
counterexample to the falsified proposition which began his reductio style of proof.
This counterexample is the radically novel number that is to be constructed out of the
rational numbers but which violates their cardinality of countability and whose con-
struction is made possible now that the diagonal sequence is at hand.
However, there is one other crucial element of the level or scale at which Can-
tor’s construction of the diagonal (and eventual real number) “dwelt” that needs to be
pointed out since it is a crucial factor in the composition of an emergent whole in con-
trast to a mere aggregate. First, let’s say “l” is the level of the diagonal series, in other
words, the level of the diagonal’s representation of the entire set of rational numbers,
which also entails that “l” will be the level of the radically novel number soon to be
constructed.
If l is the level of the entire set of rational numbers, then we should interpret
each rational number functioning as a component of the entire set a level one step
downwards, l-1 (see this kind of level classification in Salthe, 1993). Thus, each rational
number member on the vertical side as well as the diagonal slash of the array is at the
level l-1. But next consider horizontal rows as the decimal expansion of each rational
number which is at l-1. The decimal expansion is constituted by the numerical digits
found through some algorithm like long division as mentioned above. This means
that each of these numerical digits, as components making-up the horizontal row,
must be at a still lower level, (l-1) - 1, or l-2. Indeed, these numerical digits comprising
the horizontal rows of the decimal expansion are a kind of strange mixed substrate
since they are neither individual rational number members making-up the set of the
rationals numbers such as the individual rational numbers listed in the vertical column
which are at the level l-1 nor are they the same as what they become as one of the
ordered pairs indexing the diagonal numbers according to Jacquette’s scheme. Rather
their nature is a function of being ingredients somewhat arbitrarily designated by the
context of the operation of decimal expansion. I use the term “arbitrarily” here since
other numerical digits could also have been used for a similar purpose in indexing each of
the placements on the diagonal, e.g., instead of a decimal expansion, the horizontal
row might have been constituted as binary digits coming from an alternative binary
expansion of each rational number on the vertical list. The resulting diagonal number
then would have been a different sequence but could have served the same purpose.
There is also another facet of Cantor’s proof that must all be kept in mind, the
statement of which may seem unnecessary to assert, namely, that each action of
construction leading to some new feature of the array was, of course, the intentional
stroke on the part of the deviser or constructor of the proof, that is, that numbers ob-
viously don’t have tendencies to list themselves, or to combine with other numbers. I
bring this up since when it comes to applying the stc formalism to natural emergence,
we must be able to find fitting cognates happening naturally to what in the proof is
constructed to be the way it is. This point was touched upon above in terms of the
little thought experiment concerning how difficult it is to distinguish the act or effect of a
higher level organizing constraint and a natural process.
With the diagonal construction, Cantor’s stc had accomplished the establishment of a representation of the set of all rational numbers that exhaustively ensured all the rationals would be included. Furthermore, this diagonal sequence had a countable cardinality since Cantor had previously shown the cardinality
of the set of rational numbers was itself a countable infinity, a surprising finding since
the set of the counting numbers {1, 2, 3, 4, 5, 6, …} only consisted of each succeeding
integer whereas the rational numbers included in addition the immensely larger infi-
nite magnitude of the set of fractions between each integer (e.g., between the integers
2 and 3 there is the set {2 1/2, 2 1/3, 2 1/4, 2 2/3, 2 5/7, 2 16/19, 2 3987/6789215…}).
The stage was set but as of yet, the preceding acts were only preparations for the next
scene.
It was the mathematical construction which Cantor next devised that was both
the masterly tour de force of his proof and the principal reason justifying our turn to
Cantor’s stc as a formalism to be applied to the processes of emergence. The heart
of this potent construction hinged on a particular type of negation, a broader opera-
tion than negating a truth into a falsehood, or some other means for producing an
antithesis. The negation of Cantor’s stc amounted to any kind of change as long as
this change is applied consistently across the board of what is being negated. What
the negation operator of Cantor’s stc changed was the numerical values constituting
the diagonal sequence. For example, consider the diagonal sequence marked by the
underlined numbers going from upper left to bottom right in Figure 2: 536000002.
Negating this number can be achieved by consistently adding 1 to each number in
the sequence yielding the anti-diagonal sequence 647111113. Anti-diagonal then
refers to the applying of negation directly to each of the numbers on the diagonal
and anti-diagonalization refers to the process of negating the entire infinite series of
the diagonal numbers. These “anti-”-prefixed modifications are to be preferred, in my
opinion, since although the establishment of the diagonal sequence is a key move of
Cantor’s stc, by itself it cannot lead to the radical novelty generation that Cantor was
after—for that negation in this sense was mandatory.
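The negation operation just described can be sketched in a few lines. The mod-10 wrap is an added assumption (so that a digit 9 still has somewhere to go); as the text says, any consistent change that never maps a digit to itself would serve equally well as the negation:

```python
def anti_diagonal(digits, shift=1):
    """Negate a diagonal sequence by adding `shift` to every digit.
    Taken mod 10 so every entry stays a single digit; any nonzero
    shift (mod 10) guarantees each digit is changed."""
    return [(d + shift) % 10 for d in digits]

diag = [5, 3, 6, 0, 0, 0, 0, 0, 2]   # the underlined diagonal in Figure 2
print(anti_diagonal(diag))           # → [6, 4, 7, 1, 1, 1, 1, 1, 3]
```

Running this on the article's diagonal 536000002 reproduces the anti-diagonal 647111113 given in the text.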
Now what exactly is the big deal about the new anti-diagonal number, whether
it is the outcome of adding 1, or 3, or 7 or subtracting 2, or 5, or whatever? Isn’t the
outcome just another rational number and therefore in that sense at least not radically
novel? Indeed, with the formalism of the diagonal array, Cantor had already shown
how to present the rational numbers exhaustively by way of a diagonal construction
that demonstrated a way to integrate them via a self-referential mapping.
Consider the first digit of the anti-diagonal sequence. Following the action of negation and Jacquette’s interpretation of each diagonal number as a variable with an ordered pair for subscripts (see Figure 3), one subscript taken from the vertical and one from the horizontal, this first digit cannot be the same as the first digit of the first rational number going down the list, since it has been constructed to be different from that; moreover, it cannot be found in the first place of the decimal expansion going left to right on the horizontal, because it has been constructed not to be equivalent. Next, consider the second digit of the anti-diagonal
sequence: it is constructed by negation to be different from the second digit of the second rational number on the vertical list as well as not being equivalent to the second digit of that rational number’s decimal expansion. This same pattern of disallowing the numerical value of the anti-diagonal to match any digit placement of the rational numbers is continued descending down the diagonal direction: the anti-diagonal’s digits are all constructed, via negation, to be different from each of the rational numbers corresponding to the placement of the digits on the diagonal. In general, then, the anti-diagonal sequence’s nth element has been constructed to differ from the nth digit of the nth member of the countable set it was constructed from.
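The defining property just argued, that the anti-diagonal differs from every listed rational at that rational's own digit position, can be checked mechanically. A sketch, assuming the same illustrative listing of the rationals used earlier:

```python
from fractions import Fraction
from math import gcd

def rationals():
    # a countable listing of the rationals in (0, 1)
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield Fraction(p, q)
        q += 1

def nth_digit(frac, n):
    # nth decimal digit of frac, by long division
    rem = frac.numerator
    for _ in range(n):
        d, rem = divmod(rem * 10, frac.denominator)
    return d

# Build the first k rationals, the diagonal, and its negation, then check
# that the anti-diagonal's nth digit differs from the nth digit of the
# nth listed rational -- so it can appear nowhere on the list.
k = 50
gen = rationals()
listing = [next(gen) for _ in range(k)]
diag = [nth_digit(listing[n], n + 1) for n in range(k)]
anti = [(d + 1) % 10 for d in diag]

for n in range(k):
    assert anti[n] != nth_digit(listing[n], n + 1)
print("anti-diagonal escapes every listed number at its own position")
```

The finite bound k is of course only a stand-in for the infinite construction; the point is that the check succeeds position by position no matter how far the list is extended.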
But this implies that the anti-diagonal sequence cannot be included on any total
list of a complete set of rational numbers even though it was constructed out of a list
of rational numbers! To repeat, even though the diagonal sequence before the operation of negation could be just another rational number, and thus capable of being included among the list of all possible rational numbers, after the action of the negation operator the anti-diagonal cannot be included, because it has been constructed not to be! One cannot cry “unfair” here since this is just an example of how mathematical artifice typically works: it constructs novel mathematical objects in order both to
resolve mathematical problems and to probe and create new mathematical “spaces”
along the way as long as such moves are logically consistent.
If the new anti-diagonal number cannot be included in a complete list of the ra-
tionals then, even though it was generated out of those same rational numbers via
Cantor’s anti-diagonalization stc, it must be radically novel in relation to them. This
also implies that any new set fashioned by appending the new anti-diagonal number
to the set of the rationals must contain more than a countable cardinality, this un-
countability then being a radically novel property. For instance, if what was added to
the list of rationals was just any succeeding diagonal number from the sequence of
diagonal numbers (i.e., before the action of negation in creating an anti-diagonalized
new number) the result would remain a countably infinite set since any succeeding
diagonal number is just one more rational number. But the anti-diagonal has been
constructed to not be just any succeeding rational number, instead it is constructed
to be, via Cantor’s stc, a radically new number with the capacity of transforming the
cardinality property from countable to transfinite. Moreover, the operation of anti-
diagonalization can be performed on any newly appended set with a similar outcome,
namely, a radically new number not able to be included in the newly appended set.
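This reapplication claim can likewise be sketched. Below, each listed “number” is modeled as a function from digit position to digit, so that a freshly constructed anti-diagonal can itself be appended and the operation repeated; the finite toy listing and the convention of padding unseen positions with 0 are assumptions for illustration only:

```python
def anti_diagonal_of(listing):
    """Given a list of digit-functions, return the anti-diagonal as a
    digit-function: negate (add 1 mod 10) each member's digit at its
    own position; positions beyond the sample are padded with 0."""
    diag = [listing[n](n + 1) for n in range(len(listing))]
    anti = [(d + 1) % 10 for d in diag]
    return lambda n, a=anti: a[n - 1] if n <= len(a) else 0

# a toy finite listing of constant-digit sequences (stand-ins for the rationals)
listing = [lambda n, d=d: d for d in (5, 3, 6, 0, 2)]

for round_ in range(3):
    new = anti_diagonal_of(listing)
    # the new number differs from every member at that member's own position
    assert all(new(n + 1) != listing[n](n + 1) for n in range(len(listing)))
    listing.append(new)   # append it and reapply: the next round escapes again
print("each reapplication yields a number absent from the enlarged list")
```

Each pass through the loop enacts one cycle of the stc: construct, verify the escape, append, and reapply to the newly enlarged set.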
Moreover, we need not get caught up in the claim of the uncountability of the transfinite sets Cantor believed his proof demonstrated, since such mathematical luminaries as Henri Poincaré, Luitzen Brouwer, Ludwig Wittgenstein, and others, who
were skeptical of the supposed transfinite implications of the proof, did accede it was
a powerful “recipe” for generating a radically new number (for Poincare’s reaction, see
Moore, p. 136; for Brouwer’s, see Dauben, 1979, and for Wittgenstein’s see Shankar,
1987). Kaufmann himself acknowledged that the anti-diagonal procedure could de-
termine a sequence of numbers other than those contained in the original sequence.
Ormell (2006) has an interesting fix on a transfinite-free interpretation of Cantor’s
novel anti-diagonalized number that fits our appropriation: he calls such numbers lawless.
As Berto (2009) points out in his masterfully clear study of Gödel’s incompleteness proofs, the Cantorian stc is quite general: “The procedure holds for whichever
way one tries to construct the real numbers in a list: we can always produce an ele-
ment (whose identity will certainly vary according to the way the list is constructed)
that cannot appear as an item in the list” (p. 34) and agrees with Priest’s (1995) re-
marks, that the anti-diagonal method’s action on the diagonal consists in its “system-
atically destroying the possibility of its identity with each object on the list” (p. 119).
Priest’s comment can be said to confirm Humphries’s claim that during the emergence
process of “fusion” the substrates disappear into the new emergents. This is what hap-
pens in the case of transformation: substrates are so changed as to not be reducible
to what they were before emergence.
We can also see how this activity of the negation operation (diagonalizing out, “jootsing” out, or the action of a switch function) generates an explanatory gap, according to the following viewpoint. Envision the process of constructing the diagonal sequence as a building-up from building blocks at each numerical value at the
intersection of the horizontal and vertical sides of the array. From Jacquette’s perspec-
tive this is equivalent to the consecutive construction of each ordered pair of dual subscripts. Correspondingly, for the anti-diagonal construction we simultaneously negate
each of the numerical values of the diagonal as we go along. Thus we first build up the
diagonal as representative of the countable rational set and then the anti-diagonal as
the representative of a new set with its radically novel cardinality property. From this
perspective, we can see the stc anti-diagonal construction as similarly a bottom-up process (or, more literally in accordance with Figure 2, since the diagonal descends toward the lower right, a top-down building-up process!)
Yet, the negation operator subverts this sense of some new construction being built bottom-up and accordingly followed in a piecemeal fashion, since the negation operator negates each expected value of the diagonal. What this implies is
that although we can follow along deductively as the diagonal “progression” is “un-
folded” (Cantor’s terminology) or, in other words, as the sequence is constructed, the
same following along is not possible for the anti-diagonal construction. Sure it seems
like we are following it the same way we follow the diagonalization, but the negation
ensures that instead each step of following is negated. This entails, in turn, that even
though the diagonal sequence can be reversed or traced backwards to its origin in
the sense that any particular diagonal number achieved along the way can in fact be
included back on the list of the rational numbers, such is not the case for the anti-diagonal number, which has been constructed so that this is not possible. Hence, the radically
novel anti-diagonal number which is intentionally constructed to be that way, stops
the building up process from being reversed or traced back or reduced to what was
already “unrolled”.
What I am trying to get at here is that the anti-diagonal construction, the heart of Cantor’s stc, operates like a ratchet, as illustrated in Figure 4. The building-up of the
diagonal sequence is analogous to the “forward” motion of the saw-toothed wheel in a clockwise direction to the right permitted by the ratchet mechanism, since the flange “a” will not get caught by the prong “b” but instead will only slide along the
upper side of “b”. But the anti-diagonalization construction, on the contrary, functions
corresponding to the way a ratchet blocks the reverse direction. That is, each time
the negation operator operates on each succeeding digit of the diagonal sequence,
it acts like the over-hanging hooked “tooth-like” spur of “b” that stops the saw-toothed wheel from going in the reverse or counter-clockwise direction. Each “hook” of
“b” illustrates each negation of the digit along the diagonal. And it is this blockage
to the reverse direction which keeps the newly constructed anti-diagonal from being included on the list of the rationals, or in other words, from being susceptible to
reduction to the diagonal sequence it is constructed out of. The smooth shape of “a” is analogous to following the diagonal in building up the rest of the sequence, while the hooked shape of “b” is what ensures an explanatory gap is generated at each act of the negation operator in anti-diagonalization. It is an explanatory
gap generation since the anti-diagonal sequence cannot be contained in the list of
the substrate from which it is generated. Here it is the irreducibility aspect of the
explanatory gap which is receiving the emphasis, that is, the untraceability of the anti-
diagonalization sequence backwards to the substrates. Shortly we’ll look more at the
undeducibility side of the explanatory gap.
The “break” or stoppage effectuated by the negation operator can also be seen in other formal constructional approaches, such as the twist and gluing involved in the making of a Mobius strip mentioned in Part 1. The “mechanism” of the stc is
cognate to the “fold”, “twist”, and “glue” operations involved in the construction of a
Mobius strip from a flat piece of rectangular paper. In Part I, I suggested that only if we
consider the actual construction of a Mobius strip (Ryan 2006) with its unique global-
level properties (Weisstein, no date), would it count as an example of emergence. First
consider the given substrate: a two dimensional bounded surface such as a rectangu-
lar strip of paper. This surface has the crucial property of orientability (or directionality
if you prefer) which means that any figure “written” on the surface in a certain direc-
tion, e.g., an arrow, cannot be moved around the space in a continuous fashion to
eventually wind back to its original position as the mirror image of itself, i.e., as an arrow now pointing in the opposite direction.
Yet, a seemingly quite simple procedure can result in a radically different property,
the procedure consisting of folding, gluing or taping one side of the rectangle to its
opposite side (by curling it, e.g.) but while doing so twisting the surface around be-
fore the gluing. This transformative process generates a surface with a radically novel
topological property, non-orientability. This new surface has the characteristic that a
figure written on it and moved around can come back to its same original position but
now as a mirror image of what it was, the arrow pointing in the opposite direction.
Furthermore, whereas before the operation of twisting and gluing the surface had two sides, front and back, after the operation the surface has only one side! This radically novel property of one-sidedness can be seen if one draws a line down the
middle of the paper and continues this line along the middle. Eventually the line returns to its starting point, having covered what were formerly both sides, so that a single line runs down the middle of the surface no matter how you turn it. In this other example of a two-into-one operation, two sides have been combined to create one side, at least one side in the sense of moving continuously along the strip, although this continuing is actually happening in a dimension one up from the local experience; the global or macro property is not evident from the perspective of going along the strip alone. Moreover, the original two sides have been subsumed into the one remaining side. The original two sides just don’t exist as they did before (again, like Humphries, 1997, and his “fusion” take on emergence).
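The twist-and-glue construction has a simple coordinate sketch: take the strip to be the square [0, 1) × [-1, 1] with the gluing (1, v) ~ (0, -v), and slide a transverse arrow around it. This model, and the function `advance`, are illustrative assumptions rather than a standard parametrization:

```python
def advance(x, v, sign, dx):
    """Move distance dx along the strip [0, 1) x [-1, 1]; crossing the
    glued edge (1, v) ~ (0, -v) flips the transverse coordinate and the
    arrow's orientation -- this flip is the 'twist' of the gluing step."""
    x += dx
    while x >= 1.0:
        x -= 1.0
        v, sign = -v, -sign   # the seam reverses orientation
    return x, v, sign

x, v, sign = 0.0, 0.5, +1
x, v, sign = advance(x, v, sign, 1.0)   # one full circuit
print(x, v, sign)                       # the arrow returns mirror-reversed: sign == -1
x, v, sign = advance(x, v, sign, 1.0)   # a second circuit restores it
print(x, v, sign)                       # sign == +1 again
```

One circuit brings the arrow back to its starting longitude with its orientation reversed, the non-orientability the text describes; a second circuit restores it, which is also why the mid-line drawn "on both sides" closes up into a single line.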
The twist step in the creation of a Mobius strip is analogous to a narrative plot
twist in a mystery or detective novel or film. Drawing the reader further and further in because of its beguiling uncomputability from the anterior plot line, the twist represents a transgression of what took place before, i.e., a digression away from “an
unfolding of this and no other law.” The negation operator, the Mobius twist, makes it
evident that for a process to lead to an unexpected outcome, this process must itself
include unexpected shifts, turns, digressions, divergences. In other words, the pro-
cesses responsible for radically novel outcomes must themselves partake of radically
novel shifts.
While it is indeed true that the specific arrangement depicted in Figure 2 is one way to guarantee that all the rationals are included, this specific arrangement is merely contingent, so that others that accomplish the same guarantee are also acceptable. Instead of the order arranged in Figure 2 (1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, …), it could just as easily have been ordered as: 3/5, 2/5, 1/5, 3/4, 1/4, 2/3, 1/3, 1/2, …; or 1/9, 2/9, 4/9, 5/9, 7/9, 8/9, …; or 156/157, 155/157, 154/157, 153/157, …
This conclusion, of course, follows from the nature of it being a deliberate con-
struction. However, regarding the point made above concerning how constructions
can be translated into natural constraints, we would expect that in naturally occurring
emergence, there would be a corresponding independence, i.e., explanatory gap, be-
tween emergent phenomena and their micro-level substrates even though the former is transformed out of, or constrained by, the latter. In fact, we can see this kind of independence or explanatory gap between macro- and micro-levels in the emergent “quantum protectorates” such as superconductivity in solid state physics (see the remarks of Laughlin & Pines, 2000; and the insightful discussion of them by Morrison, 2012). Moreover, this is why Laughlin and Pines point out that many crucial features of
these emergent quantum protectorates cannot be deduced or predicted from funda-
mental equations but rather must be measured in each experimental context which
the emergent phenomena occurs within. They stress that this is not due to some spe-
cial mathematical intractability but rather to the independence of the emergent state from its anterior substrates. The formal structure supplied through the idea of an stc will always need to be supplemented by empirical research into each particular case of emergence.
Emergent phenomena may form into prototypes, but they will, if they are really emergents, elude prefabricated algorithms or explanatory deductions from anterior fundamentals, and this is why empirical research is always necessary. According to Laughlin and Pines, things can be clearly shown that “cannot be deduced by direct calculation” from supposedly fundamental micro-scale theoretical properties expressed mathematically, since such calculations can only yield approximations, and “exact results cannot be predicted by approximate calculations”. Instead, these results are found contingently by experimental measurement of the actual prototypes, non-deducible features of these emergent phenomena. Laughlin and Pines offer some examples of this “explanatory gap” manifested as an inability to deduce emergent outcomes from lower-level substrates, demonstrating instead a need to measure the system in context: “simple electrical measurements performed on superconducting rings determine to high accuracy the quantity of the quantum of magnetic flux hc/2e; the magnetic field …”
There is a direct analog to this in chemistry according to Hendry (2010a and b): using appropriate micro-level Coulomb Schrödinger wave equations, even modified by the method of the Born-Oppenheimer approximation, will not yield molecular structure (remember that molecules are at a higher level while the Schrödinger equations operate at the level of assemblages of electrons and nuclei). For Hendry the problem is not one of mathematical intractability but that the molecular structure is not there to begin with as data for the Coulomb Schrödinger equations.
Finally, it is worthwhile to point out that there are actually two phases in which
Cantor’s construction can effectuate self-transcendence. The first is the initial opera-
tion of anti-diagonalization that enables the construction of the anti-diagonal real
number transcending the countable cardinality property of the rational numbers from
which it is constructed. But there is also the possibility of a continual reapplication
of the anti-diagonalization operation to any new set comprised of the original list
appended with the newly generated number. In discussing this second aspect of self-
transcendence Kaufmann cited the work of the distinguished American mathemati-
cian Oswald Veblen who had used a version of Cantor’s stc in his own work on con-
structing ordinal numbers which, in contrast to cardinal numbers, express the order
and not magnitude of sets. According to Kaufmann, “What is characteristic of this new
type [of Veblen’s take on Cantor’s construction] consists in its being in principle undis-
closed: a determinate constructional principle, however widely conceived, will never
lead to the goal, but it is only in the course of constructional activity itself that new
instructions for continuing the procedure emerge” (p. 135). But if the stc is to retain
its radical novelty producing potency, these new “instructions” arise at each repeated
reapplication of anti-diagonalization to each new set constructed out of the original
substrate list appended with the radically novel number. This procedure of the stc is
“undisclosed” since the new set to which it is reapplied is unknown before the actual
anti-diagonal construction operation is accomplished with each new reapplication.
We have gone over the inner workings of Cantor’s self-transcending construction, whose potency in generating radically novel outcomes by way of comparatively simple processes is well captured by a quip from the mathematical logician Nathaniel Hellerstein (1997): “Has ever so much been gotten from so little?” Going deeply into this particular mathematical construction was motivated
by two factors. The first was the set of allusions to it, e.g., the relation between emergence and transfinite sets offered presciently by the proto-emergentist Oliver Reiser, an intimation of the critical role of a set’s cardinality proposed by one of the chief architects of today’s understanding of emergence, John Holland, and so forth. The second motivation was indirect, namely, the conceptual linchpin role of Cantor’s stc in Darley’s definition of emergent phenomena according to uncomputability, an idea with roots in various strands of mathematics, logic, and computer science, and most importantly in the vastly influential limitative theorems of mathematical logic devised in the nineteen thirties first by Kurt Gödel and shortly after by Alan Turing, with side trips along the way, before and after, in the works of other mathematical logicians. Most pertinent here was Turing’s demonstration of the existence of uncomputable numbers, work building on Gödel’s conclusions about undecidability in his incompleteness theorems.
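Turing's transposition of the construction can be sketched in the same spirit: given any enumeration of computable digit sequences, diagonalizing out of it yields a sequence that is computable from the enumeration yet absent from it. The tiny sample of sequences below is, of course, only an illustrative stand-in for a genuine enumeration:

```python
# A finite sample standing in for an enumeration f_1, f_2, ... of
# computable digit sequences (each maps a 1-indexed position to a digit).
fs = [
    lambda n: 0,             # the all-zero sequence
    lambda n: n % 10,        # 1, 2, 3, ...
    lambda n: (n * n) % 10,  # squares mod 10
    lambda n: 9,             # the all-nine sequence
]

def g(n):
    # diagonalize out: g's nth digit is f_n's nth digit, negated mod 10
    return (fs[n - 1](n) + 1) % 10

for i in range(1, len(fs) + 1):
    assert g(i) != fs[i - 1](i)   # g escapes each f_i at its own index
print("g differs from every enumerated sequence")
```

Since g is computed directly from the enumeration, any claimed complete listing of the computable sequences defeats itself, which is the Cantorian core of Turing's uncomputability result as described here.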
The first thing to note is just how far the conceptual context of Cantor’s stc had
come during its passage from its original set theoretical framework. For Cantor, the
formalism’s power lay in its ability to produce an entirely new kind of quantity, the
quantitative measure of numerosity or cardinality. When it had come to mathematical
logic, though, the context had shifted to non-quantitative issues of the decidability or
solvability of mathematical problems in general (although quantity as such could be
seen in them to a much lesser extent such as in Gödel suggesting the same countable
cardinality for listing propositions according to their match with the natural numbers,
a countably cardinal set—see Gödel, 1992: 39, fn. 7). These theorems were formulated
against the backdrop of David Hilbert’s famous Entscheidungsproblem or “Decision Problem,” which asked whether all well-posed mathematical problems were solvable or decidable by some fixed method, an idea held by many mathematicians at the time. According to Hilbert, if a mathematical proposition is true then there must be
some way to decide or prove or demonstrate it is true. To attack this problem, Hil-
bert argued that all mathematical problems could be put into an appropriate formal
construction. The central issue of the decidability problem was whether it can always
be decided if a given proposition of the formal system is a theorem, that is, can it be
proven true (Peregrin, n.d.) by following the steps of deduction according to the rules
of the formal system in question.
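The flavor of such decision questions can be made concrete with a toy formal system. The sketch below uses Hofstadter's well-known MIU system (my illustration, not Hilbert's own example): theoremhood within a bounded search is mechanically checkable by enumerating derivations, but a bounded failure to find a derivation decides nothing on its own.

```python
def miu_successors(s):
    """Apply the MIU rewrite rules to a string: xI -> xIU, Mx -> Mxx,
    III -> U (anywhere), UU -> '' (anywhere)."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")
    if s.startswith("M"):
        out.add(s + s[1:])
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def derivable(target, max_len=10):
    """Breadth-first enumeration of theorems from the axiom 'MI',
    bounded by string length so the search terminates."""
    seen = {"MI"}
    frontier = {"MI"}
    while frontier:
        frontier = {t for s in frontier for t in miu_successors(s)
                    if len(t) <= max_len} - seen
        seen |= frontier
    return target in seen
```

Here `derivable("MIU")` is True (one application of the first rule), while `derivable("MU")` comes back False within the bound; famously "MU" is not a theorem at all, but the bounded search itself cannot tell us that, which is exactly the gap the Entscheidungsproblem asked about.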
Second, and related to the first, the combinatorial strategy in constructing the di-
agonal also had to shift to what would make sense in the new context of deductive/
decisional progressions, and not mere progression by increase of counting numbers.
Here the genius of Gödel was revealed in full measure with his self-referential “coding”
scheme (later usages refer to a “diagonal lemma” thus drawing an association be-
tween what Gödel did and Cantor's diagonal). From this Gödel devised a proposition
that was evidently true because of the way it was presented but was not provably
true; that is, there was no definite method to decide whether it was true or
false. In other words, its evident truth was undecidable. Cantor's stc's ability to tran-
scend substrate cardinality had thus been transposed into a transcendence of de-
cidable methods confined to a substrate level composed of logical and mathematical
resources.
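The mechanics of Cantor's anti-diagonal move that all of this builds on can be sketched in a few lines of Python (an illustrative toy, of course, since a program can only handle finite prefixes of the infinite arrays):

```python
def anti_diagonal(rows):
    """Given n digit-sequences each of length >= n, build a sequence that
    differs from the i-th row at position i (here by adding 1 mod 10),
    so it cannot appear anywhere in the list."""
    n = len(rows)
    return [(rows[i][i] + 1) % 10 for i in range(n)]

rows = [
    [1, 4, 1, 5],
    [2, 7, 1, 8],
    [3, 3, 3, 3],
    [0, 5, 7, 7],
]
d = anti_diagonal(rows)   # differs from every row at the diagonal position
```

However long the enumerated list, the constructed sequence disagrees with each entry somewhere, which is the "levering out" of the substrate enumeration discussed throughout.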
I point out these analogies to emphasize the role of self-reference, which originally
goes back to Cantor's stc, where the diagonal, as a representation of the self-referential
mapping, is both a sequence in itself making up a particular diagonal number and one whose
cardinality should be the same as that of the horizontal and vertical arrays from which it is com-
bined. The relevance of this will become clear later when I compare the "logic" of what is at
work in stcs to the self-referential closure of other complexity models that overem-
phasize self-reference.
But this meant that the sentence entailed a true but unprovable proposition!
resources from formalisms not so restricted, that is, sets of formalisms that could con-
tinually be transcended by appended radically new numbers constructed via a Cantor-
like stc, or as Gödel put it himself (1992: 62, footnote 48a):
The true source of the incompleteness (undecidability) attaching to all formal systems
of mathematics...is the fact that higher types can be continued into the transfinite...
whereas in every formal system at most denumerably many types occur. It can be
shown, that is, that the undecidable propositions here presented always become
decidable by the adjunction of suitable higher types...
This is indeed what I have been meaning by the term “self-transcendence” in regard to
emergence. Accordingly, emergence can be thought of as precisely that which evades
formalization with any particular model and thus must be formalized by transcending
that very model. That this is not just a trick of words will be shown below.
Turing devised his own self-referential type of code (analogous to Gödel's) for rep-
resenting any possible algorithm or decisional procedure, a machine version of what
could be accomplished by an actual "human computer." He described his machines as
primitive input-output devices doing a calculation in the manner of a "human computer";
later commentators termed them "Turing machines." Each possible algorithm could be
represented via the Turing machine construction, the code expressing the decisional
steps or methods needed for computing or solving a number, no matter how long
this chain of deductions or logical inferences might be, as long as only a finite "alphabet" for
representing a particular computation was available (Petzold, 2009). In an algorithm
like the long division above in Figure 1, each step in the chain of reasoning
arose from the preceding one; hence his machines were "deterministic," just as it is
the operation of logic gates in computer software, embedded in hardware, that is
responsible for the deductive flow of a program at work. As will become relevant be-
low, we can see the deciding or deducing operations making up the inner workings
of an algorithm as an analogy to a series of deductions going from some substrate and
computing some solution or outcome from it, again from an origin to a terminus.
Since the set of numbers that are computable are the ones that are comput-
able by algorithms, and since, using his machines, Turing had defined algorithms as definite
instructions that could be encoded in a finite "alphabet," this meant the set of com-
putable numbers must be countable (to relate all of this back to Cantor's focus on
cardinality). But this wasn't enough for a direct form of Cantor's stc operating on
countable numbers, since it is also true that most real numbers are not computable.
Some real numbers are nonetheless computable; e.g., even the transcendental
number pi is calculable, since there is a definite decisional procedure to calculate it
(Florian, 2011).
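That pi is computable in this sense just means there is a definite procedure that churns out its digits one after another. A compact illustration (using Machin's 1706 formula, my choice of procedure rather than anything from Turing's paper):

```python
def pi_digits(d):
    """Compute the first d decimal digits of pi with exact integer
    arithmetic, via Machin's formula pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    scale = 10 ** (d + 10)                 # 10 guard digits for accuracy
    def arctan_inv(x):
        # alternating series for arctan(1/x), scaled to integers
        total, n, sign = 0, 1, 1
        power = scale // x
        while power:
            total += sign * (power // n)
            power //= x * x
            n += 2
            sign = -sign
        return total
    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi // 10 ** 10)             # drop the guard digits

pi_digits(10)   # '31415926535'
```

Every digit is reached by a finite, fully specified chain of steps, which is what qualifies pi as computable despite its being transcendental.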
This meant for Turing that he first had to prove he was only capturing computable-
number-producing algorithms for his "arrays," which is why he then introduced what was later
called the Halting Problem. This demonstrated that it was not possible in general, by some
algorithm, to decide ahead of time which numbers would turn out to be computable.
That is, there had to be uncomputable numbers. The term “Halting” was later used to
refer to the impossibility of knowing if a particular algorithm would or would not halt
at a correct solution ahead of time (some attribute the term to the computer scientist
Martin Davis). Various implications followed from the proof of the Halting Problem,
e.g., there is no general algorithm that decides whether a given statement about natu-
ral numbers is true or not since a proposition having it that a certain program will
halt can be converted into an equivalent proposition about natural numbers (“Halting
Problem,” Wikipedia). If we had an algorithm that could solve every statement about
natural numbers, it could certainly solve this one; but that would determine whether
the original program halts, which is impossible, since the halting problem is undecid-
able.
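The diagonal move at the heart of this argument can be mimicked in Python (a sketch of the argument's shape, not a proof; `halts` here stands for any candidate decider we care to hand in):

```python
def spite(halts):
    """Given a purported halting-decider `halts(program)`, construct the
    diagonal program g that does the opposite of whatever `halts`
    predicts about g itself."""
    def g():
        if halts(g):          # decider says g halts? then loop forever
            while True:
                pass
        return "halted"       # decider says g loops? then halt at once
    return g

# Whatever the candidate decider answers, it is wrong about its own
# diagonal program:
g = spite(lambda p: False)    # predicts "never halts" ...
g()                           # ... yet g halts immediately
```

Handing in `lambda p: True` instead yields a program the decider claims halts but that in fact loops forever; either way the decider is refuted, which is the content of the undecidability of halting.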
Yet despite the need to demonstrate the uncomputability of the Halting Problem
and thereby the existence of uncomputable numbers, the basic outline Turing fol-
lowed was Cantor’s stc and its “moving parts”. As described by the British mathema-
tician Andrew Hodges (2002), author of a masterful and provocative biography of
Turing,
Turing’s proof can be recast in many ways, but the core idea depends on the self-
reference involved in a machine operating on symbols, which is itself described by
symbols and so can operate on its own description. …However, the “diagonal” method
has the advantage of bringing out the following: that a real number may be defined
unequivocally, yet be uncomputable. It is a non-trivial discovery that whereas some
infinite decimals (e.g., π) may be encapsulated in a finite table, other infinite decimals
(in fact, almost all), cannot. (p. 4)
Thus, even though there is a formal structure for proving the fact about the halt-
ing problem, this same formal structure cannot be used to derive a priori which spe-
cific algorithms will prove germane to solving particular problems. To be sure, there
are always rules of thumb that can be appealed to in order to devise appropriate al-
gorithms, the means of arriving at the useful ones making up a large part of computer
science inquiry. Applied to emergence this implies that although a formal structure
like the stc developed here can provide needed insight, at the same time it mandates
the presence of a non-eliminable explanatory gap. We will return to how this gap
can be understood and handled contextually below.
There is a close connection, as Chaitin has repeatedly emphasized, between al-
gorithmic complexity and the theorems of Gödel and Turing (and indirectly to
Cantor's stc). In Darley's view, the property of uncomputability is equivalent to
the algorithmic complexity being so high that emergent outcomes can only be known
ostensively, i.e., letting the program run.
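This ostensive character is easy to feel with any sufficiently unruly rule system. An elementary cellular automaton such as Wolfram's rule 30 (my example, not Darley's) offers a miniature taste: nothing short of running it reveals the evolving pattern.

```python
def step(cells, rule=30):
    """One step of an elementary cellular automaton on a ring: the rule
    number's bits encode the new state for each 3-cell neighborhood."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 15 + [1] + [0] * 15     # a single live cell
history = [row]
for _ in range(8):
    row = step(row)                  # the pattern must simply be watched
    history.append(row)
```

Each generation is fully determined by the rule, yet the characteristic irregular triangle that unfolds is, for practical purposes, known only by letting the program run.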
We can also see a close connection between Turing’s theorem and Cohen and
Stewart’s (1994) Existence Theorem for Emergence, which hinges on a kind of uncom-
putable intractability involved in deriving emergent features from lower level laws,
one of the clues that led me to consider Cantor's stc's role in emergence to begin with.
Cohen and Stewart justified emergence through their Turing-like contention that “in
any sufficiently rich rule-based system there exist simple true statements [by which
they are referring to emergent features] whose deduction from the rules is necessarily
enormously complicated.”
More specifically, they point to how the computational emergents found in the
Game of Life have been proven capable of generating a programmable computer
which can be shown to possess the crucial feature of undecidability associated with
the halting problem. That is, one could set up the Game of Life in such a way that one
of its “creatures”, a glider, say, is annihilated if it halts. However, and this is the key
point they want to emphasize, Turing proved there was no way to tell, ahead of time,
if the program would indeed halt and therefore if the computational emergent would
be annihilated. As they write, “This kind of uncomputability occurs because the chain
of logic that leads from a given initial configuration to its future state becomes longer
and longer the further you look into the future, and there are no short cuts" (their
emphasis). They concluded, “...[emergent phenomena] are not outside the low-level
laws of nature; they follow from them in such a complicated manner that we can’t
see how” (p. 438) and, consequently, emergence “transcends its internal details, and
there’s a kind of scale transcendence” (pp. 440, 441). The transcendence of the details
shows up in the explanatory gap of the quantum protectorates, which we’ll be saying
more about below.
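The Game of Life set-up that Cohen and Stewart appeal to is easy to reproduce. Here is a minimal sketch (the step function and glider coordinates are the standard ones, not taken from their text) whose long-run fate, per the halting-problem result, can in general only be learned by running it:

```python
from collections import Counter

def life_step(cells):
    """One Game of Life generation; `cells` is a set of live (x, y) pairs."""
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbors; survival on 2 or 3
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
# after four generations the glider reappears shifted by (1, 1)
```

Whether such a "creature" survives an arbitrary configuration indefinitely is precisely the kind of question the text says has no general shortcut answer.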
The phrases “such a complicated manner”, “whose deduction from the rules is
necessarily enormously complicated” and “refractorily long” can be interpreted as
nonspecific complexity measures that speak to a kind of uncomputability of emer-
gence along the lines of Darley’s algorithmic complexity metric. To be sure, the aim of
Cohen and Stewart was not to devise such a complexity measurement but rather to lay
out informally what a formal "existence proof" for emergence would look like. Indeed,
relying on specific complexity metrics to emphasize the uncomputability property of
emergence comes with disadvantages if it is presumed one size fits all. For example,
even algorithmic complexity, as useful as it might be in certain cases, displays an un-
wanted inverse relation between the degree of complexity and the order in a system. Thus,
algorithmic complexity is at its highest in random systems, where it is clearly the case
that no program describing the data can be shorter than the actual size of the
data stream the metric is measuring, i.e., the algorithmic complexity of a random series
is said to be incompressible. This implies that a random series is uncomputable, but
emergence is not produced by just stochastic processes. It may indeed be true that
one facet of the processes involved in generating emergent phenomena may incor-
porate some kind of stochasticity and this will be displayed in the eventual emergent
patterns in some form or another, but randomness generation cannot be a leading
player in the production of emergents since it is pattern and order which are signs of
emergent phenomena and not lack of pattern and disorder. That is, emergent phe-
nomena are supposed to be all about novel order, structure and pattern, not random
bit strings. So how is uncomputability, if restricted to what algorithmic complexity
measures, to be of help in characterizing emergence?
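The incompressibility contrast is easy to witness with an off-the-shelf compressor standing in, very crudely, for algorithmic complexity (my illustration; compression ratio is only a practical proxy for the theoretical measure):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over raw size: a rough, practical proxy for the
    algorithmic complexity of a data stream."""
    return len(zlib.compress(data, 9)) / len(data)

ordered = b"ab" * 5000            # highly patterned, hence highly compressible
random_like = os.urandom(10000)   # no pattern for the compressor to exploit
# compression_ratio(ordered) is tiny; compression_ratio(random_like) is near 1
```

The patterned stream collapses to a short description while the random stream does not, which is just the point at issue: a metric maximized by randomness cannot by itself pick out the ordered patterns characteristic of emergence.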
Adding the Cantorian anti-diagonal-like step to the calculation of the metric en-
ables it to bring out how the data from the emergent structures "contain information
about their own depth" (a hint that this is another instance of a kind of self-reference),
such as obvious redundancies in emergent order (redundancy being one crucial sign
of order as opposed to disorder), unequal digit frequencies, and other aspects of the outcome.
Since Cantor first devised his anti-diagonalization construction, an association
with paradox has prompted much commentary. Earlier on, for example, were
with paradox has prompted much commentary. Earlier on, for example, were
Cantor’s Paradox of the Set of All Sets, and Russell’s infamous Barber Paradox.
This relationship to paradox became re-emphasized with the central role of Cantor’s
stc in the limitative theorems of Gödel, Turing, and others. Indeed, Gödel admitted
that his theorems were patterned in part on the notorious Liar and Richard para-
doxes (Grim, Mar, and St. Denis, 1998). As Douglas Hofstadter (1985) has trenchantly
commented, “To some people, [anti-] diagonalization seems a bizarre exercise in arti-
ficiality, a construction of a sort that would never arise in any realistic context. To oth-
ers, its flirtation with paradox is tantalizing and provocative, suggesting links to many
deep aspects of the universe” (p. 335; my emphasis; Hofstadter’s magnum opus Gödel
Escher Bach examines this theme in his inimitable and enlightening fashion).
Taking a closer look at this “flirting with paradox” in the Cantorian stc can aid us
in two ways: first we can better appreciate the potency of stc’s in producing radically
novel outcomes; second, we can see why it will not be necessary to turn to paracon-
sistent or paradoxical logics for a cogent and naturalistic reimagining of emergence.
in the consistent changing of each numerical value on the diagonal, which enabled the
resulting number to "diagonalize out," "lever out," "joost out of," or "switch" from the
diagonal sequence. It was the action of this negation operator which generated the
radically novel real number not capable of being included in the list of rationals and
thus displaying the radically novel cardinality property of an uncountable set.
Although they are obviously closely related, paradoxes and contradictions are dif-
ferent logical forms. Thus, when Felix Kaufmann rejected the very possibility of a self-
transcending construction because “...no construction can ever lead beyond the do-
main determined by the principle underlying it”, he was in effect claiming that the idea
of stc was absurd for implying this contradictory syllogism:
1. Any kind of construction must follow and stay within the conceptual arena
defined by the principle underlying the construction;
2. The "self-transcending" part of the phrase "self-transcending construction"
entails that the construction in question is not staying within the conceptual
arena defined by the principle underlying the construction, i.e., it transcends or
diverges from this normative conceptual arena;
3. Hence, the expression “self-transcending construction” is a contradiction;
4. Therefore, a self-transcending construction is impossible (which meant for
Kaufmann that what the stc was used to prove, i.e., the existence of sets with
transfinite cardinality, was also impossible).
Negation obviously plays a critical role in a contradiction, so that the presence
of a contradiction emanating from a sentence presumed to be true negates
the truth of the statement containing it. Moreover, if a system of related statements
includes even one contradiction, then the system is considered inconsistent, or is said
to "explode," what the Scholastics called ex contradictione quodlibet or "from a contra-
diction, anything follows," since allowing one contradiction leads to the ability to entail
whatever one wants (Sainsbury, 2008). Since the truth valuation of a sentence
containing a contradiction is false, and the presence of contradictions can thus
wreak havoc on a logical system, various "paraconsistent," "paradoxical," or "dialetheic"
logics have recently been developed to incorporate certain contradictions into acceptable
logical statements for certain purposes, a topic we will discuss below.
A paradox has a different kind of logical structure, the most prominent feature of
which is the inclusion of a negation operator within a self-referential structure. For in-
stance, in the famous self-referential paradox of the Liar, the negation operates inside
the circle created by the self-reference. Here is one version of the Liar:

(1) This sentence is false.
Sentence 1. has a self-referential structure due to the indexical “this” which wraps what
is being referenced in the sentence, that is, the truth valuation of the sentence, around
back to itself. The negation is applied inside the self-referential structure so that the
negation of “false” negates the sentence’s own truth valuation. If we consider the sen-
tence’s truth valuation through a series of deliberations about it, its truth valuation,
which is presumably what the sentence is about, oscillates from true to false to true
to false…, this loop going on indefinitely. Thus, if we assume that sentence 1 is a true
statement then what it says about itself being false must be a true statement. But if
we deliberate that it is true that it is false, then what the sentence is saying about itself
must be false. This means that it is false that the sentence is false. This then leads to a
deliberation that it must be true, and the round robin goes around again. When rep-
resented by a dynamical system (see Grim, Mar, St. Denis, 1998; and Goldstein, 2001)
with appropriate adjustments made to fit with the nature of a dynamical system, the
truth valuation is first an attractor, then a repeller, then an attractor, then a repeller,…
There can be no final settled truth valuation since the self-referential structure has cre-
ated a complete closure to the system so that nothing else but this unstable wobbling
can take place.
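In the simplest dynamical-system rendering (in the spirit of Grim, Mar, and St. Denis's treatment, though this exact update rule is my own minimal choice), the revision rule is just negation, and the orbit never settles:

```python
def liar_step(v: float) -> float:
    """One revision of the Liar's truth value: the sentence asserts its
    own falsehood, so its next value is the negation 1 - v."""
    return 1.0 - v

orbit, v = [], 1.0
for _ in range(6):
    orbit.append(v)
    v = liar_step(v)
# orbit: [1.0, 0.0, 1.0, 0.0, 1.0, 0.0] -- an endless wobble; no fixed
# point is ever reached from a classical (0 or 1) starting value
```

The closure of the self-reference leaves the valuation nothing to do but flip back and forth, which is the "unstable wobbling" described above.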
(1) Jeff is reading "Reimagining Emergence, Part 3".

Now it might be thought that sentence (1) is already self-referential since it states that
I (Jeff) am reading what I am writing. But it is not really self-referential since it is
referring only to a paper on the computer screen, not to me. Thus we need to proceed
by substituting the whole sentence (1) for x to yield:

(2) Jeff is reading "Jeff is reading 'Reimagining Emergence, Part 3'".
Yet, strictly speaking, sentence (2) is not yet self-referential for it merely asserts that
Jeff is reading sentence (1) not sentence (2). This means that to generate complete
self-reference requires wrapping what is being referred completely around back to the
“self”, a feat necessitating a kind of bending around of the referring completely back
to itself. As Smullyan points out, applied to sentence (1), strict self-reference would
not be instantiated until we reach this rather strange beast of a sentence:
(3) Jeff is reading the diagonalization of "Jeff is reading the diagonalization of 'Jeff is
reading "Reimagining Emergence, Part 3"'".
“...yields a sentence with property P when appended to its own quotation,” yields a
sentence with property P when appended to its own quotation.
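The construction just quoted, appending an expression to its own quotation, is exactly the trick behind a program "quine." A minimal Python rendering (my illustration, not from the text):

```python
import io
import contextlib

q = 'q = %r; print(q %% q)'
src = q % q            # the fragment applied to its own quotation

# Executing the diagonalized source reproduces that very source:
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(src)
assert buf.getvalue().strip() == src
```

The quotation plays the role of the diagonal: the expression talks about, and thereby regenerates, exactly itself.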
It is also important to realize that cross-reference, e.g., two or more sentences refer-
ring to each other, can be interpreted as just a more indirect form of self-reference, re-
taining the special features of the latter. Consider Tweedledee
and Tweedledum sitting next to each other in the cafeteria, engaged in a conversation
with their friends at the table (adapted and revised from Grim, Mar, and St. Denis,
1998):
Tweedledee: You, Tweedledum, are speaking truthfully right now because I can see that
you are wearing your glasses today, which you usually forget.
Tweedledum: I can see you, Tweedledee, sitting in the chair next to mine and you are
wearing a blue shirt.
What they are saying to each other refers to things relevant to each, and there is
clearly no issue about the truth valuation of what they are saying (from a purely logi-
cal point of view—empirically, it might be the case that Tweedledum is color-blind and
often gets it wrong as to whether something is green or blue).
But consider instead this exchange:

Tweedledee: What Tweedledum is about to say is true.
Tweedledum: What Tweedledee just said is false.

Here we see an unstable truth valuation parallel to that shown to take place in the
Liar. Ultimately, though, both instances of the cross-referential conversation can be col-
lapsed to self-reference by using something like proxy variables for each speaker's ut-
terances.
With these ideas on self-reference in mind, let’s turn to the logician Graham Priest’s
(1994, 2002) proposal for a kind of universal blueprint describing self-referential para-
doxes composed of two crucial structural components termed transcendence and clo-
sure (for a most interesting analysis of such paradoxes in relation to Cantor’s stc, see
Keith Simmons, 1990; unfortunately we don't have space enough here to do Simmons's
analysis justice). Transcendence can be thought of as analogous to the transcendence in
a self-transcending construction, i.e., it is the operator that drives the sentence’s valu-
ation to transcend what it originally is (e.g., the condition of the substrates). Closure
refers to the way the self-referential structure wraps the sentence around itself so tight
that what the sentence is saying is not about anything else but some defining feature
of itself, like its own truth or falsity. For example, sentence (1) could have said instead
"This sentence is printed in black letters on a white screen," but that is not self-referen-
tial in the same tightly enclosed way, for it refers only to incidental qualities of the
sentence and not to something at its core like its falsity.
What Gödel had done can be connected with Priest’s point about how transcen-
dence gets blocked in a self-referential paradox by the closure of the self-referential
structure. What would happen instead to the potency of the driving force of transcen-
dence if the enclosing impediment of the self-referential structure is somehow trans-
muted or loosened? Indeed, this is what happens in the case of Cantor’s stc which
contains both self-reference and negation, but without the former impeding the lat-
ter. A closer inspection of the Cantorian stc shows it is not identical to a self-ref-
erential paradox in spite of both containing self-reference and negation. Remember,
Hofstadter asserted that anti-diagonalization only flirts with, not embraces, paradox.
But before we get to the way out possessed by stc’s, I would like to comment on
a certain similarity between what Priest calls closure and the exact same word used by
Francisco Varela (1974, 1979) as a description of what his theory of autopoiesis claims
about the central fact of a living organism, namely, its self-referential core function
to generate the kind of closure required for what he called biological autonomy. For
Varela, organisms are self-referential in their very essence by consisting of a network
of production processes of components which, through their interactions, regener-
ate and realize the network that produces them. This self-referential, circular causality
operates to create an invariant self-contained identity, a boundary-circumscribed state
of closure. What I say about Varela's work can also be said to apply to Robert Rosen's
theory of M/R (Metabolism/Repair) systems (indeed the comparison of Rosen’s and
Varela’s theoretical self-referential schemes has generated an almost cottage industry;
see, e.g., Letelier, Marín, and Mpodozis, 2003). To support his conceptualization
of life as a primary, primordial, foundational self-referential structure, Varela
first developed a “calculus” for self-reference built around the mathematical/logical
apparatus of G. Spencer Brown and later through a category theoretical interpreta-
tion of self-reference based on William Lawvere’s famous theorem on closed Cartesian
categories.
This approach no doubt supplies important insights into the nature of organisms,
but in my opinion, becomes especially problematic when taken to an extreme which is
what has been done in the respective theoretical biological standpoints found in the
work of Robert Rosen and Francisco Varela. In previous publications, I have critiqued
Rosen’s and Varela’s doctrines when they have veered too close to the kind of closure
that Priest refers to as the tightly wrapped self-referential structure at the heart of
paradoxes. As systems theories in biology, the great merit of Rosen’s, Varela’s, and
others similar approaches to theoretical biology, has been to erect fortifications pro-
tecting the province of biology in studying life against the encroachment of simplistic
input-output reductionist models. However, in so doing their extreme self-referential
scheme has had the effect of robbing “life itself” (the title of one of Rosen’s books) of
exactly what makes “life itself” life and not the parody of death one takes away from
their models. Strangely, although Varela’s and Rosen’s systems biology was supposed
to deal with life, their respective schemes leave out nothing less than: procreation
through self-reproduction, sexual ecstasy, evolutionary change, deep social encounter
and cooperation, ideals lifting us up so as to be able to hear our better angels, cre-
ative and religious experience, wonder at nature, our embeddedness in family, friends,
social networks. There is a kind of strange prudery in Rosen's and Varela's desiccated
view of life where ecstasy is just too transcendent for their closure (the most withering
criticism of autopoiesis is no doubt that offered by Rod Swenson, 1992).
One of the reasons for bringing this up is to point out how the formalism of self-
transcending constructions contrasts strongly with this closure scheme, this overly
tight self-referential wrapping which works against closure's transcendence-producing
associations. Even though the representation of the self-referential mapping of the
rationals to themselves, i.e., the diagonal sequence, is negated through the negation
operator construction, this negation is not then trapped; rather, it is the lower, micro-
level origin of the radically new number that is negated. In the stc, accordingly,
negation is the way out of self-referential closure because it is not bound by self-
reference. Indeed, to the extent that the diagonal sequence is the coding par excellence
of self-reference in Cantor's formalism, anti-diagonalization negates even this
self-reference.
It is for this reason, namely, that the stc formalism allows self-transcendence to
drive the emergence of the radically novel, that the conception of stc does not require
any kind of paraconsistent or dialetheic type of logic, which allows contradictions
or paradoxes under certain circumstances (see Priest, 2002). Thus dialetheic logic is
Priest’s way out of closure. Yet the price to be paid is high, namely the inclusion of
paradox and contradiction into otherwise rational systems of thought and the con-
comitant frequent contortions of argument that seem required by that allowance. The
formalism of self-transcending constructions, though, doesn’t need such special logi-
cal apparatus since it brings about transcendence by its very constructional method.
Anytime it starts getting too close to paradox, beyond the flirting phase, it has re-
sources for avoiding a full embrace of the illogical.
Processes of emergence, just like the operations of the Cantorian stc, need to in-
clude this flirting with paradox as one of the sources of the stc's potency in gener-
ating radical novelty outcomes by way of a series of transformations of substrates
into emergent phenomena. The reason comes down to something very elementary
but typically overlooked concerning the nature of the processes responsible for emer-
gence: if radically unique outcomes are what is wanted, then there must be some-
thing radically unique about the processes leading to those outcomes. Morgan had
brought this down to what I think is the most elementary fact about these processes
of emergence: they display the need for some kind of a new start. This new start is like
a second chance for if the destination is a radically new place, then trying to get there
by following all the deductive rules dominating the anterior substrates simply won’t
work. In the face of all the current calls for renewed hylozoism, panpsychism, and the
actualization of pre-existing propensities, processes of emergence must possess qualities
that transcend the satiric explanation, offered by one of Molière's characters, that
the power of a sleeping draught is due to its "dormitive" properties.
The property of being integrations, collectives, wholes on a higher macro-level in
relation to the component aspect of micro-level substrates is one of the defining
characteristics of emergent phenomena. This higher level wholeness can be
seen in all of the examples of emergence, from the macro-"quantum wave" of resistance-
free flow of electric current in superconductivity, to the cooperative social networks
that can emerge in concerted human effort to accomplish a task, to the evolution of new
higher-level organisms from the transformation of parts and functions. The proto-
emergentist C. L. Morgan regarded this feature as a "new relatedness" of the parts within
a whole. However, as the always insightful, emergence-friendly, complexity-oriented
This is an obvious point but I want to push further as to why this tangram-type of
game cannot, by the very rules (again obvious) which define the pieces and the op-
eration, lead to a novel emergent, macro-level collective. The first rule in playing the
tangram type of game, usually unstated, is that one cannot break the pieces apart, or
add any of these resultant broken pieces back together to form a new shape. Indeed,
if such a rule did not exist the point of the game would be lost -- why start with 30
intact pieces and not just a bunch of ½ inch hard plastic sheets and powerful shears
to cut them, as well as some kind of glue to join them in innovative ways and thereby
produce new pieces. If we consider the whole to be constructed as at level l, then
the variously shaped pieces as found in the box are at l-1. Hence, the rule
against cutting and gluing can be restated as: only combinations of pieces at level
l-1 are permitted. (By the way, another rule might disallow gluing, which otherwise
might have led to the additional possibility of creating three-dimensional figures.)
These simple rules, though, provide two hints at what a shift beyond aggregation
to emergent integration must include. First, they indicate the combinatory strategies
must include a transformation of the substrates or components, and not just a re-
combination of what already exists. That is, the integration of the emergent whole is
composed of a new congruity made up of transformed substrates, not just the new
relatedness of parts. In fact, in an important sense, the novelty of the new emergent
whole is just the congruity of the substrates effectuated by their being transformed.
The second hint is that processes of bringing about a radically novel whole that
transcends a mere aggregate must take place at a level beneath the substrate level.
Thus, if the whole is at level l and the substrates at level l-1, then the recombinatory
strategies must be at levels l-2 or lower. This means that the substrates themselves
must be decomposable, and it is the resulting decompositions which allow for their transformation so as to form a new congruity. This also means that the new whole is not something over and above the substrates, manifesting merely as their new relatedness. That is, the radically novel emergent whole just is the transformed substrates in their novel congruity.
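As a toy illustration of this level scheme (the model, names, and unit-cell representation are my own, not drawn from the text), the difference between combination at level l-1 and decomposition to level l-2 can be sketched as:

```python
# Toy model: wholes at level l, pieces at l-1, unit cells at l-2.
# Mere aggregation combines intact l-1 pieces; transformation first
# decomposes them to l-2 cells and re-forms pieces that were in no box.

piece_a = frozenset({(0, 0), (0, 1)})   # a domino-shaped piece (level l-1)
piece_b = frozenset({(1, 0), (1, 1)})

def aggregate(pieces):
    """Level l-1 combination: pieces stay intact; the 'whole' is just
    their collection -- a resultant, not an emergent."""
    return frozenset(pieces)

def transform(pieces):
    """Decompose to level l-2 (unit cells), then recombine into new
    shapes: the substrates themselves are changed, not just related."""
    cells = {c for p in pieces for c in p}           # break the pieces apart
    return frozenset(frozenset({c}) for c in cells)  # re-form new pieces

whole_by_aggregation = aggregate([piece_a, piece_b])
new_pieces = transform([piece_a, piece_b])
assert piece_a in whole_by_aggregation   # the old parts subsist unchanged
assert piece_a not in new_pieces         # after transformation they do not
```

The design choice is the point: only the second route, which reaches below the substrate level, yields components that did not exist before.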
We saw this same level scheme at work in the Cantorian stc in the section on how the diagonal is formed from the component level comprising the substrates. That is, a substrate rational number is at level l-1 because it is a component of the diagonal sequence, which is at l. But it is not just the whole substrate at level l-1 which forms the diagonal; it is the decomposed digit from its decimal expansion which is mixed into the diagonal as an ordered-pair-indexed numeral. Again, it is this mixture of levels
that plays a key role in the transformation and thus the nature of the novel emergent
integration.
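The anti-diagonal construction just described can be sketched in a few lines; the listing below is a toy stand-in (digit strings of my own choosing, not from the text) for an enumerated sequence of decimal expansions:

```python
# A minimal sketch of how the Cantorian anti-diagonal mixes levels:
# whole expansions sit at level l-1, and the construction reaches down
# to level l-2 by decomposing each one into its indexed digits,
# transforming the i-th digit of the i-th entry.

def anti_diagonal(expansions):
    """Build a new expansion that differs from the i-th listed
    expansion at its i-th digit, so it appears nowhere in the list."""
    new_digits = []
    for i, expansion in enumerate(expansions):
        digit = int(expansion[i])                 # decompose: level l-2
        new_digits.append(str((digit + 1) % 10))  # transform the digit
    return "".join(new_digits)                    # new congruity: level l

# Six listed expansions (digit strings standing in for expansions):
listing = ["141592", "718281", "414213", "577215", "302585", "693147"]
diagonal = anti_diagonal(listing)
assert all(diagonal[i] != listing[i][i] for i in range(len(listing)))
```

The new sequence is not assembled from intact substrates but from their transformed digit-level decompositions, which is exactly the mixture of levels at issue.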
The need for decomposition into lower levels, and the transformation it subsequently makes possible, is highlighted in the mathematical theory of categories, which Ehresmann and Vanbremeersch (2007) have incisively applied to emergence. In their theory of complexification, which describes a category-theoretical coming into being of a hierarchy of emergent levels, each higher level is made possible because the substrates are multifolded, which refers to the way lower levels are so constituted as to be decomposable into multiple functions and multiple structures. Without multifoldedness the emergence of radical novelty is not imaginable.
The key is to keep in mind that the novel congruity is made novel precisely due
to the transformation that has occurred in the parts/substrates. Unless these substrates have been substantively transformed, there can never be any genuinely novel wholeness, not even Morgan’s “new relatedness”. New relatedness connotes that the same old parts subsist but are now related to each other in a new way; yet how much novelty can this new relatedness amount to if the parts standing in the new relation have themselves not changed?
In his critique of certain of the claims made in traditional mereology, the eminent
philosopher D. H. Mellor (2006) questions the entire strategy of basing conclusions, or even intuitions to be followed up, on a priori assumptions about part-whole relations. For Mellor, trying to conclude something about parts and wholes before investigating particular instances and their contexts is like trying to decide a priori whether waves are longitudinal or transverse in their oscillations rather than looking at what the specific waves we are interested in are actually doing. In my opinion, most hard-core physicalist reductionists like Kim are steeped in a priori views on parts and wholes that interfere with a clear view of what takes place in the “whole-making” facet of emergence.
In contrast, I am suggesting that we pay careful heed to what the great Galen had
suggested in the following quotation (see Part 2 where I give credit to Ganeri’s bril-
liant work which brings Galen’s points to the central conceptual place they deserve to
be in relation to emergence, substrates, and transformation): “For anything constitut-
ed out of many things will be the same sort of thing the constituents happen to be…
it will not acquire any novel characteristic… But if the constituents were altered, trans-
formed, and changed in manifold ways, something of a different type could belong to
the composite that did not belong to the elements…something heterogeneous can-
not come from elements that do not change their qualities. But it is possible from
ones that do….” I would add here that not only do the “elements” (substrates) need to change their qualities; our customary views of how the processes of combining and recombining them work must change as well.
one very effectively described by the Goethean scientist and philosopher Henri Bortoft
(1996: 12), “The whole is nowhere to be encountered by stepping back to take an
overview, for it is not over and above the parts, as if it were some superior, all-encompassing entity. The whole is to be encountered by stepping right into the parts.” The emergent whole can be so encountered in the parts (substrates) without the parts, as Bortoft further emphasizes, being dominated by the whole, precisely because these substrates have been radically transformed: the transformed parts comprising the novel emergent whole have been subsumed into its novel congruity, i.e., they no longer exist as they did before in the micro-level context. This is what I think
Humphreys (1997) was getting at with his notion of emergence as “fusion”, whereby the individual parts are lost in the process. However, the prototype Humphreys offers for this fusion emergence is quantum mechanical entanglement, and that is truly a case of what the Scholastics decried: explaining what is obscure by appealing to something even more obscure.
Wittgenstein once remarked, “The dangerous, deceptive thing about the idea:
‘…the set … is not denumerable’ is that it makes the determination of a
concept—concept formation—look like a fact of nature.” The point of this
paper has certainly not been to claim that the Cantorian stc is a fact of nature. But I do
claim that it is a fact that there are natural processes utilizing natural capacities that
can produce emergent outcomes and that the “logic” of such processes can be expli-
cated according to the idea of self-transcending constructions.
the ones which have the more profound implications and the greater influence in shaping where the field is going.
Of particular relevance to the use of the stc formalism to reimagine the trans-
formative processes of emergence are the themes of bound versus free, following
versus negating, and generative versus constrained. Thus, when considering above what might be gained from applying a purely formal construction to the natural phenomena of emergence, I offered the analogy of construction by a constructor on one side and natural processes proceeding by means of constraints on the other. This, in fact, is what an stc can supply when employed to account for processes of emergence: radical novelty generation which, from the perspective of a natural process of emergence, looks like the channeling potency of constraints.
The logician Judson Webb (1980) has drawn attention to this claim about the Can-
torian stc on the part of the American mathematical logician Stephen Kleene (who
employed the Cantorian construction in his own highly significant work) who put it
this way, “…Gödel’s essential discovery was… [he] mechanized [formalized] the …[anti-]diagonal argument for incompleteness… the incompleteness theorem shows that as soon as we have finished any specification of a formalism… we can, by reflecting on that formalism… discover a new truth… which not only could not have been discovered working in that formalism, but – and this is the point usually overlooked – which presumably could not have been discovered independently of working with that formalism”. According to Webb, these remarks suggest that anti-diagonalization itself is a
formalizable construction of how Gödel’s unprovability and Turing’s uncomputability
can be shown to come about.
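Kleene’s point can be sketched with a toy enumeration (the functions and names below are illustrative assumptions of mine, not Kleene’s own formalism):

```python
# A minimal sketch of the anti-diagonal move Kleene describes:
# reflecting on any completed enumeration of total functions yields a
# new function that provably appears nowhere in that enumeration.

def enumeration(i):
    """A toy enumeration f_0, f_1, f_2, ... of total integer functions."""
    return lambda n: (i * n + i) % 7

def anti_diagonal(enum):
    """Reflect on the enumeration: g differs from enum(i) at input i,
    so g is produced from -- yet cannot be listed in -- the enumeration."""
    return lambda n: enum(n)(n) + 1

g = anti_diagonal(enumeration)
# g differs from the i-th listed function at argument i, for every i:
assert all(g(i) != enumeration(i)(i) for i in range(100))
```

The new function could not have been defined without the finished enumeration, which mirrors Kleene’s remark that the new truth could not have been discovered independently of the formalism.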
Finally, in Part 2, in the context of making a point about the contemporary enriching feedback taking place between the idea of emergence and those scientific endeavors employing the notion, I suggested replacing the term “causality” in quote “a” below (from Schlegel, 1974: 14) with the term “emergence” in order to generate quote “b”.
Here at the end of this three-part paper on reimagining emergence, I suggest another replacement, but this time a double one: replacing “emergence” with “self-transcending constructions” and “science” with “natural processes and capacities”:
References
Becker, O. (1927). “Mathematische existenz,” Untersuchungen zur Logik und Ontologie
Mathematischer Phänomene (Jahrbuch für Philosophie und phänomenologische
Forschung), VIII: 440-809
Bedau, M. (1997). “Weak Emergence,” Philosophical Perspectives, ISSN 1520-8583, 11: 375-399.
Bennett, C.H. (1986). “On the nature and origin of complexity in discrete, homogeneous,
locally-interacting systems,” Foundations of Physics, ISSN 0015-9018, 16(6): 585-592.
Bennett, C.H. (1988). “Logical depth and physical complexity,” in R. Herken (ed.), The Universal
Turing Machine: A Half-Century Survey, ISBN 9783211826379, pp. 227-257.
Bernard-Weil, E. (1995). “Self-organization and emergence are some irrelevant concepts
without their association with the concepts of hetero-organization and immergence,”
Acta Biotheoretica, ISSN 0001-5342, 43(4): 351-362.
Berto, F. (2009). There’s Something about Gödel: The Complete Guide to the Incompleteness
Theorem, ISBN 9781405197670.
Bortoft, H. (1996). The Wholeness of Nature: Goethe’s Way toward a Science of Conscious
Participation in Nature, ISBN 9780940262799.
Borges, J.L. (2000). Selected Nonfictions, ISBN 9780140290110.
Boschetti, F. and Gray, R. (2007). “Emergence and computability,” Emergence: Complexity &
Organization, ISSN 1521-3250, 9(1-2): 120-130.
Broad, C.D. (1925, 2013). The Mind and its Place in Nature, ISBN 9780415488259.
Cai, J.-Y. (2003). Lectures in Computational Complexity, http://pages.cs.wisc.
edu/~jyc/810notes/book.pdf.
Cantor, G. (1891). “On an elementary question of set theory,” in S. Lavine, Understanding the
Infinite, ISBN 9780674921177, pp. 99-102.
Chaitin, G. (2011). “How real are real numbers?” Manuscrito, ISSN 0100-6045, 34(1): 115-141.
Chapline, G., Hohlfeld, E., Laughlin, R.B., and Santiago, D.I. (n.d.). “Quantum phase transitions
and the breakdown of classical general relativity,” http://arxiv.org/abs/gr-qc/0012094.