[ROUGH DRAFT]
Technology and Our Epistemic Situation: Two Problems of Ignorance
In this paper, I argue that there are two distinct problems of ignorance: a problem
of size and a problem of type. Both are more pressing today than ever before, given
the extraordinary expansion of collective human knowledge, and both pertain to
epistemic limitations intrinsic to our evolved cognitive systems. After delineating
these problems in detail, I examine one possible way of overcoming our “relative”
and “absolute” ignorance about the universe: enhancement technologies. I then
argue that, given our epistemic situation, resources currently being spent on normal
research would be far better spent on developing cognition-enhancing technologies
– technologies that promise to help solve the size and type problems previously
sketched.
1. Distinguishing Between Size- and Type-Ignorance
Knowledge is like a sphere; the greater its volume, the larger its contact with the
unknown. – Blaise Pascal
While the concept of knowledge has held center stage in the theater of
Western Philosophy for some twenty-five hundred years, ever since Plato
articulated his tripartite analysis of it, considerably less has been said about its
epistemological antagonist: ignorance. The issue of ignorance is, I believe, more
germane today than ever, not just because of well-known studies concerning
scientific illiteracy and uninformed voters (Mooney and Kirshenbaum 2009;
Shenkman 2008), but because of the extraordinary expansion of collective human
knowledge. But note a difference between these two phenomena: the former is
entirely contingent, since the sort of nescience about “elementary” facts of the
universe and all it envelops is clearly rectifiable, for example, by ameliorating the
educational system, instilling in students a passion for learning, and so on. In
contrast, the second is a matter of necessity, since it is no longer possible, given
limits intrinsic to the human mind,i for even the most erudite of individuals to fully
comprehend even a single domain of scientific or humanistic inquiry. There is
simply too much to know.
In addition to (principled) reasons why a single individual can no longer
internalize more than a relatively tiny sliver of the knowledge spectrum, there is a
further issue complicating our epistemic predicament. Consider the case of
cognitive neuroscience: tremendous advancements have led to a fairly
sophisticated understanding of how the brain works. But while there are promising
solutions to what David Chalmers has labeled
the “easy” problems of consciousness, e.g., the problems pertaining to how our
nervous systems “discriminate, categorize, and react to environmental stimuli”
(Chalmers 1995), there remains the so-called “hard” problem: subjective
experience. The crucial point here is that, given the peculiarity of consciousness –
a phenomenon apparently unlike anything else in the universe – several notable
philosophers have argued that understanding conscious experience may indeed be
possible but not by us humans. That is to say, limitations intrinsic to our evolved
primate brains may forever preclude us from grasping the various concepts
necessary to make sense of conscious experience. We are thus said to be
“cognitively closed” to a solution to the problem of consciousness.
The notion of cognitive closure also applies to the first problem of ignorance
described above, although the issue there is not an inability to grasp the relevant
concepts (if the concepts involved were not graspable by humans, then the theories
containing them would not be constitutive of collective knowledge). Rather, the
issue pertains to the human ability to remember, synthesize, make proper
connections between, and so on, bits of information from a rapidly growing
multiplicity of sub-sub-sub-disciplines (recursively insert ‘sub-’ here as necessary),
the totality of which comprises the human intellectual enterprise. Thus, this issue is
not conceptual, but it is nonetheless cognitive. To be explicit, then, the above
discussion suggests a distinction between two (non-mutually exclusive) reasons
why a given problem might be abstruse: first, it might be conceptually easy to
grasp but involve too many component parts for the human mind to keep in order;
and second, it might be componentially simple but involve concepts too difficult
for the human mind to grasp. Call the former the problem of size and the latter the
problem of type.ii
In this paper, I argue that each source of abstruseness could potentially be
overcome through the creation of cognitive enhancement technologies. Such
technologies seem to offer the (only reasonable) possibility of not just
quantitatively augmenting the mind – enhancing our capacity to remember
informational items, increasing the speed of cerebration,iii and so on – but of
qualitatively changing it as well – making cognitively accessible concepts that are
currently beyond our epistemic reach. Given this possibility in combination with
our epistemic situation (as here depicted), I argue that present resources would be
far better spent on projects to develop cognition-enhancing technologies rather
than on further expanding the already vast territory of human knowledge.iv Indeed,
if there are problems that involve concepts with respect to which our species is
cognitively closed, then such technologies would need to be developed at some
point anyway. In terms of a spatial-geographical metaphor used throughout this
paper, further horizontal growth will likely require us to expand our conceptual
capacities (the problem of type); but we also desperately need vertical growth as
well, that is, we need to acquire the ability to peer back down at the epistemic
landscape, to better understand where we were, where we are now, and where we
are going (the problem of size).v Let us consider these issues in turn, and then
consider the possibility of a technological solution.
2. The Problem of Size


When one examines the brain, one finds a vast manifold of hierarchically
organized processes, the sum total of which forms a complex causal network
(Torres 2009). Considered individually, each of these processes is accessible to the
human mind with relative ease. The phenomenon of long-term potentiation (LTP),
for example, involves just a few different entities that engage in only a few
different activities (Craver 2007). It doesn’t take a rocket scientist to understand
LTP, one might say. The same goes for many other mechanistic phenomena,
including synaptic transmission, synaptogenesis, and so on. But when one attempts
to understand a whole ensemble of neurons, acting and interacting through waves
of temporary electrochemical disequilibria, the details become immediately
overwhelming. Indeed, the brain is said to be the most complex object in the
known universe, with around 100 billion neurons (and a roughly comparable number of
glial cells), each of which makes at least 10,000 connections to neighboring neurons. The
problem here is, therefore, one of extraordinary complexity – one of size.vi
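To give these figures a rough quantitative shape, here is a back-of-the-envelope calculation using the estimates quoted above (100 billion neurons, at least 10,000 connections each); both inputs are the commonly cited approximations, not precise measurements:

```python
# Back-of-the-envelope arithmetic for the complexity figures cited above.
# Both inputs are commonly quoted approximations, not exact counts.

neurons = 100e9               # ~10^11 neurons in the human brain
connections_per_neuron = 1e4  # at least ~10^4 connections per neuron

total_connections = neurons * connections_per_neuron
print(f"{total_connections:.0e}")  # on the order of 10^15 connections
```

A quadrillion or so connections, each participating in dynamic electrochemical activity, is precisely the kind of magnitude that outruns any individual mind's capacity to track.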
A precisely analogous point could be made with respect to the vast mosaic of
increasingly specialized micro-disciplines in academia today (both the sciences and
the humanities). As one author recently put it, summarizing Woolfolk and Lehrer,
“it was possible as recently as three hundred years ago for one highly learned
individual to know everything worth knowing. By the 1940s, it was possible for an
individual to know an entire field, such as psychology. Today the knowledge
explosion makes it impossible for one person to master even a significant fraction
of one small area of one discipline” (Jacobs 2003, 22). Or, in a more sententious
form: everyone today knows almost nothing about most things.vii The reason for
this pertains to two ostensible facts: first, human knowledge understood as a
collective phenomenon appears to be growing at something like an exponential
rate. (Whether or not the growth is actually exponential is immaterial for present
purposes.) And second, despite this rapid expansion of collective knowledge, the
capacities of the individual remain in some crucial sense fixed and finite.viii
Thus, if one defines individual ignorance in relative terms as the difference
between what the collective whole of humanity knows (contained in textbooks,
academic journals, individual minds, internet websites, and so on) and what the
individual person knows, it is irrefragable that ignorance is growing at an
accelerating rate.ix And again, this is not just because of contingencies like laziness,
poor education or rampant anti-intellectualism in the U.S., but because of
limitations intrinsic to human cognition: even if everyone were perfectly studious
all of the time, individual ignorance would still be rapidly expanding.
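The claim that this gap grows at an accelerating rate can be illustrated with a deliberately simple toy model; the exponential form, the growth rate, and the fixed-capacity constant are all illustrative assumptions, not empirical estimates:

```python
import math

def collective_knowledge(t, k0=1.0, r=0.05):
    """Toy model: collective knowledge grows exponentially over time t."""
    return k0 * math.exp(r * t)

def relative_ignorance(t, capacity=1.0):
    """Gap between collective knowledge and a fixed individual capacity."""
    return collective_knowledge(t) - capacity

# The gap widens, and each successive increment is larger than the last:
gaps = [relative_ignorance(t) for t in (0, 50, 100)]
increments = [later - earlier for earlier, later in zip(gaps, gaps[1:])]
print(gaps)        # e.g., [0.0, 11.18..., 147.41...]
print(increments)  # the second increment far exceeds the first
```

Nothing hinges on the exact functional form: any model in which the collective curve outpaces a flat individual line exhibits the same accelerating gap.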
There is probably not much need to argue for why ignorance of this sort is
undesirable.x Most academics today would probably concur that, for example,
interdisciplinary work is a good thing: just as polyglotism is correlated with
enhanced creativity and intelligence, since individuals who speak two or more
languages can linguistically (re-)formulate problems in different ways, so too can
the individual familiar with multiple disciplines approach a given problem from
different angles or perspectives.
Unfortunately, though, interdisciplinary work is often restricted to domains
of inquiry that are more-or-less contiguous. When one ventures out beyond the
small cluster of disciplines surrounding one’s field of expertise, one quickly
encounters the obstacle of what we might call – borrowing from Kuhn (1996) –
disciplinary incommensurability.xi This incommensurability may be not just
methodological but observational and semantic in nature as well: people trained in
different intellectual traditions often see the world in radically different ways, and
indeed the further one gets from “home base” the more radical differences in the
argot used by distant disciplines become.xii This makes communication between
individuals from different knowledge-areas extremely arduous, if not impossible.
On a personal note, I have often been frustrated – as a neuroscientist and
philosopher trained in the analytic tradition – by discussions with individuals of a
postmodernist bent. This is not in the least because of any prior bias against their
preferred tradition: I am genuinely interested in understanding the postmodern
approach. The problem is that my Po-Mo interlocutors and I each use terms, such
as ‘epistemology’ and ‘ontology’, in different ways, we each employ significantly
different methods in our research (Analytic Philosophy is mostly centered on
conceptual analysis, not deconstruction), and indeed our most basic orientations
toward reality often fail to align well enough to permit an intelligible exchange of
ideas.xiii
Interdisciplinarity thus ends up being for the most part limited to
(sub-)disciplines not too distant on the globe of intellectual inquiry (with each of
C.P. Snow’s “two cultures” located at the poles). And as the many specialized
fields of academia continue to rapidly ramify, the very possibility of making
significant connections across large areas of knowledge becomes less feasible in
proportion: there is simply too much to know for any one individual, or even any
small team of individuals exhibiting some division of cognitive labor (Weisberg
and Muldoon forthcoming), to master the relevant domains of human knowledge.
There are several specific reasons for this.xiv Consider, on the one hand, the
issue of time: no individual lives long enough to master more than one
knowledge-domain (if even that), however much she might want to. We are
perennially trapped in, to borrow
a term from Christopher Cherniak, our “finitary predicament” (Cherniak 1990).
This predicament yields what I will call the breadth-depth tradeoff: the more one
knows about any single topic, the fewer topics one knows about; and the more
topics one knows about, the less one knows about any single topic. Fortunately, this
vexatious problem is more practical than principled, and indeed the development
of life-extension technologies (see de Grey et al. 2002) may significantly mitigate
it as a source of epistemic boundedness. What is not superable, though, is the
cosmic issue, studied by physical eschatologists, of a dying universe: the second
law of thermodynamics necessitates that the universe will eventually die an
“entropy death,” at which point matter and energy will be evenly distributed
throughout an eternally cold, dark and lifeless cosmos. This puts a nonnegotiable
temporal constraint on how much we humans could know, and indeed on how long
our progeny might be alive and kicking.
Putting cosmology aside, though, there is another notable source of the
breadth-depth tradeoff already alluded to above, a cognitive source: the size and
complexity of human knowledge today far transcends the individual’s ability to
remember, synthesize, make proper connections between, and so on, bits of
information from the quickly growing multiplicity of micro-disciplines out there.
That is to say, even if de Grey and company succeed in cracking the immortality
code, our cognitive limits would preclude us from making the sort of
interdisciplinary connections we would ideally like to make. This leaves each one
of us, so to speak, epistemically blind to the terrain of collectively acquired
knowledge surrounding each one of our tiny fields of expertise. Thus, not only can
the contemporary thinker not master even a single sub-domain of knowledge, as
Jacobs notes above, but one cannot even properly locate or orient oneself on the
broader map of human inquiry.
What the human intellectual enterprise needs now more than ever is a way to
rise above the topographical micro-features peculiar to one’s own locale and see in
sufficient detail the surrounding environs, with all their epistemic contours. But this
requires thinkers and theorists with greater cognitive capacities to remember bits of
information, synthesize them together in meaningful ways, learn new facts with
greater efficiency – ultimately, to acquire something like a “God’s eye” view of the
epistemic landscape so far mapped out. At present, though, all we have are busily
working cartographers marking down every minute detail of their increasingly
small areas of research: here is a pebble, and there is a blade of grass. But how
much different the world looks when peering down from an airplane or skyscraper,
or from outer space!
3. The Problem of Type
The issue discussed in the above section concerns the vertical growth of
human knowledge – that is, the ability of individual humans to look down on and
make sense of the intricate topography of collective human knowledge. But there is
another issue that concerns the very possibility of further horizontal growth.
Consider the task of science, crudely couched in a Rumsfeldian terminology:
scientific advancement proceeds via the conversion of “unknown unknowns” to
“known unknowns” to “known knowns.” There are, of course, many things we
don’t know about; our Homo ancestors from the Pleistocene were ignorant of dark
matter, just like we are. The difference between us and them is, of course, that we
have positive knowledge about our negative ignorance – following the apophatic
theologians (such as Nicholas of Cusa), one might call this a kind of learned
ignorance. (Or, in keeping with the “dark matter” metaphor, call it a kind of
enlightened benightedness). The next step then is to convert this state of semi-
knowledge to one of genuine knowledge by constructing an explanatory theory that
adequately accounts for this mysterious and ubiquitous substance.
The point is that the tripartite distinction made above falls entirely within the
supercategory of knowables. By definition, this supercategory contains truths that
we human beings can come to know in principle. In contrast, there almost certainly
exists another (species-relative) supercategory of unknowables.xv Also by
definition, this supercategory contains truths that we cannot come to know in
principle. Using a more philosophically respectable (and non-Rumsfeldian)
phraseology, token puzzles falling within the former category may be called
“problems” and those falling within the latter “mysteries.” Consider, for example,
the case of the bat.xvi While the bat has been evolutionarily optimized to navigate
caves via its sonic sense of echolocation, no matter how hard it might try it could
never learn to do basic arithmetic, nor could it ever form the concept of (e.g.) a
black hole. This is a modal claim: it concerns what is and what is not in principle
possible for the bat, given its particular cognitive apparatus.
As finite beings with an evolutionary history of our own, it stands to reason
that there (may) exist entire constellations of facts, phenomena, theories, or
whatever, that we Homo sapiens could never come to know no matter how hard we
might try, since we lack the right mental machinery to form the requisite concepts
(despite our flattering binomial self-description as “wise men”).xvii In the
transcendental naturalism, or “New Mysterianism,” of Colin McGinn, we are said
to be cognitively closed to such facts, phenomena and theories as the result of
limits intrinsic to our minds.xviii As alluded to in section one, McGinn argues that
the venerable mind-body problem (i.e., What exactly is the connection between
minds and bodies, given that each seems so profoundly different from the other?)
falls within the class of mysteries, and other cognitive scientists, such as Noam
Chomsky, have tentatively suggested that the “causation of behavior” might also
be permanently insoluble (Chomsky 1975, 157), as well as the origin of the
language organ (see Dennett 1996, 389). Whether or not these particular claims
end up being veridical, though, is completely independent of the more
general claim that cognitive closure is a real feature of our biological situation.
Thus, there appear to be areas of knowledge that are accessible to some
possible minds but inaccessible to the actual human mind.xix And from this it
follows that, even if we had unlimited time to conduct research in the sciences and
humanities, the horizontal expansion of human knowledge would eventually have
to stop: it would encounter regions – maybe vast regions – of the epistemic
landscape that are forever unexplorable, given the limits of our mental software
and neural hardware. To put this point in a more philosophical way, consider the
proposition that if physicalism (the metaphysical view that everything in the
universe is physical) is true, then there exists a complete explanatory description of
the cosmos, or final theory. Many philosophers have held this to be true. But the
consequent of the above if-then construction contains an important ambiguity in
the term ‘theory’: if a theory is something “finitely stateable in a language we can
understand,” then it clearly follows that physicalism does not entail the existence
of a final theory (Stoljar 2009). There may indeed be concepts that one must grasp
to make sense of the complete theory that are beyond our ken – maybe concepts
relating to the ten dimensions posited by superstring theory, or concepts relating to
the link between qualitative experience (“qualia”) and the electrochemical activity
of neurons.
If one defines ‘theory’ more loosely, though, as something stateable in some
possible language, then there may indeed exist a final theory of the universe –
though it might be epistemically off-limits to us humans. It is in this sense that
regions of the knowledge-terrain may be permanently inaccessible to our species,
but not to all possible cognitive agents. With the right conceptual resources and
mental make-up, a sufficiently intelligent being (organismic or machinicxx) may be
able to explore these regions and thus discover truths about the universe that we
could never understand, no matter how assiduous and organized our efforts. We
are to such truths as the bat is to basic arithmetic, or the chimpanzee is to natural
language. This is the second principled problem of ignorance, a problem that
concerns the types of questions that could be asked rather than just their size.xxi
4. Cognitive Enhancement as Solution
Before continuing, we should note an important difference in the meaning of
‘ignorance’ as used in the second and third sections above: while ignorance is
explicitly understood in section two as the difference between what the collective
whole and the individual knows, the third section takes ignorance to be the
difference between what would be known by an omniscient being and what is
actually known by us humans, as a collective whole, at any given point in time.
That is, the latter sense of ignorance pertains to the difference between the theories
we have thus far devised of the universe and the so-called final theory. If
classificatory terms help, the first sense of ignorance is relative and – according to
our spatial metaphor – corresponds to the vertical axis, while the second is
absolute and corresponds to the horizontal. Or, at the risk of belaboring the point,
the first leads one to ask the question “How much can any one of us make sense of
what we, the collective whole, already know about the cosmos?,” while the second
leads one to ask the grander question “How much can any one of us, or the
collective whole of humanity, ever know about the cosmos?” In both cases we have
identified principled constraints on our individual and collective capacities to know
arising from our evolved cognitive apparatuses – constraints that preclude us from
making certain progress along either of the two aforementioned axes.xxii
I would now like to look at possible solutions to this epistemological
conundrum. Although I am rather pessimistic about the technological future of
humanity, given the explosion of types (not to mention tokens) of existential risks
anticipated in the twenty-first century ([author citation]), when one focuses
parochially on the knowledge-problem at hand there seems to be a single salient
and promising solution: technology. Before exploring this possible fix, though,
consider first the only other solution available: good old-fashioned Darwinian
evolution. On the assumption that evolutionary change is gradualistic, it follows
that humans acquired (the ability to grasp) highly abstract concepts like electron
and social justice – concepts that are, apparently, only available to us – through a
naturalistic process of piecemeal cognitive development.xxiii Thus, it stands to
reason that further cognitive development of this sort may make available to our
phylogenetic descendants knowledge of our world that is, at present, permanently
beyond our ken. If “encephalization” were to continue, in other words, our
descendants might be able to explore at least some regions of the epistemic
landscape that we Homo sapiens cannot traverse (nor, in some cases, even see to be
there: we can’t recognize what we can’t even cognizexxiv).
The problem with this possibility is that there is – or at least there appears to
be – no significant pressure in our highly artificialized selective environment for
the development of a “more advanced” neocortex (although this was not always
the case, as I discuss below).xxv That is to say, the more intelligent individuals
among us are not any fitter than those who are less intelligent.xxvi Thus, it follows
that such evolution would have to occur through the analogous intentional process
that Darwin termed “artificial selection” (as observed in the case of many
domesticated animals). But, when applied to human beings, artificial selection is
nothing other than eugenics, and eugenics has long been rightly relegated to the
trash bin of ethical opprobrium.xxvii So, we can cross the Darwinian option of
cognitive enhancement off the list of acceptable possibilities.
The only remaining option, aside from humbly accepting our circumscribed
epistemic condition, is technological in nature. To begin, then, let us underline that
the idea and practice of cognitive enhancement is not as radical and revolutionary
as it may initially seem. As Bostrom and Sandberg write:
Most efforts to enhance cognition are of a rather mundane nature, and some
may have been practiced for thousands of years. The prime example is
education and training, where the goal is often not only to impart specific
skills or information, but also to improve general mental faculties such as
concentration, memory, and critical thinking. Other forms of mental training,
such as yoga, martial arts, meditation, and creativity courses are also in
common use. Caffeine is widely used to improve alertness. Herbal extracts
reputed to improve memory are popular, with sales of Ginkgo biloba alone in
the order of several hundred million dollars annually in the U.S. (Bostrom
and Sandberg forthcoming)
The authors contrast such “conventional” approaches to enhancement with more
experimental methods “such as ones involving deliberately creating nootropic
drugs, gene therapy, or neural implants” (Bostrom and Sandberg forthcoming). The
take-home point is that, despite what one might think prima facie, enhancement
strategies have been around for a long time – if anything, the means might be
changing, but the end is nothing new and out-of-the-ordinary. (If the reader would
like, he or she may devise a mnemonic device to help remember this point.)
It is also worth making explicit two additional points: first, technology has,
since the Homo genus first made its appearance on the evolutionary scene, played a
wholly integral role in the development of our cognitive systems. For example, the
earliest technologies – stone tools or “lithics” – had a significant enhancing effect:
as our ancestors became increasingly dependent upon such lithics for survival, the
invisible hand of natural selection began to “pick out” those individual organisms
who exhibited a higher aptitude, relative to their conspecifics in the population, for
fashioning such tools. As a result, nature established a powerful positive feedback
loop in which increases in intelligence begat further increases in intelligence, with each
positive gain in intellectual ability resulting in an even greater subsequent gain.
This brought about the “Great Encephalization,” as some have called it.
And second, there is an increasingly influential movement within cognitive
science that rejects the traditional notion that minds are physically contained within
the intuitive boundaries of “skin and skull.” According to Andy Clark and David
Chalmers, who inaugurated the extended cognition tradition with their 1998 co-
written paper, the physical “vehicle” of the mind (Baker forthcoming) is not
confined to any region in space, but is actually extendible beyond the cranium and
integument. On this model, cognitive extension occurs when an entity initially
located outside the organism comes to reliably (even if transiently) instantiate a
functional role such that, were that entity located inside the head, it would
normally be considered internal to the individual’s cognitive system. (This is the
so-called “parity principle.”) For example, Clark and Chalmers argue that the
notepad of an individual with memory problems might come to have the exact
same function as the corresponding neural structures in a healthy individual’s brain.
And because of this isomorphism – that is, because the notepad can store
dispositional beliefs about the world just like (say) the hippocampus normally does
– the notepad should count as literally part of the individual’s “coupled” cognitive
system.xxviii Furthermore, Clark and Chalmers argue that such couplings, or
biotechnological hybrids, are far more common than one might think. In fact,
extensions of cognition are traceable back to the very first humans, over two
million years ago. This leads Clark, in a separate publication, to assert that Homo faber
(“man the maker”) has always been to some degree technologically-constituted: we
are, as he puts it, “natural-born cyborgs” (Clark 2004).xxix
Thus, with this attempt to normalize both the technological and
enhancement aspects of technological enhancements,xxx let us return to the two
problems delineated above: size and type. As an issue of size, the former problem
is quantitative in nature: what we need to overcome it are cognitive capacities
different from what we already have only in degree. In contrast, as an issue of type,
the latter problem is qualitative in nature: what we need to overcome it are
cognitive capacities different in kind. This means, essentially, that a solution to the
latter problem would entail a redefinition of the venerable boundary between
mysteries and problems, as understood by the New Mysterians. Thus, while
science is busy converting “unknown unknowns” to “known unknowns” and then
to “known knowns,” the aim of enhancing the quality of the human mind would
entail gentrifying, so to speak, the supercategory of “unknowables” (with all of its
“unknowns”) such that truths once domiciled within it would subsequently find
their home in the alternate supercategory of “knowables.” With the right sort of
techno-change to the brain, such truths would be catapulted within our epistemic
reach.
Now, I need not, for the present purposes, provide any strong argument to
support the prognostication that, if pursued, future technology will effectuate a
change in the boundary separating mysteries and problems (the qualitative problem
of type), or for that matter will significantly increase our capacity for
memorization, or the speed of cerebration, and so on (the quantitative problem of
size). All one needs to accept is that the artifactual products of the inchoate
genetics, nanotechnology and robotics revolutionxxxi might very well bring about
changes in our cognitive systems as extraordinary and profound as those brought
about by Darwinian evolution – say, in the past 2.6 million years, since Homo
habilis first began manufacturing tools around Olduvai Gorge.xxxii This relatively
“weak” assertion about technological possibility is perfectly adequate as a first
premise in an argument for “enhancement” technologies.xxxiii More formally put,
this argument goes as follows:
Premise 1: Future technologies offer the serious possibility of unlimited, or
at least less limited, knowledge-growth along both the vertical and
horizontal axes;
Premise 2: Our epistemic situation is such that (a) individual ignorance is
growing rapidly as the territory of collective human knowledge expands
exponentially, and (b) the human mind itself imposes more-or-less
significant conditions of permanent ignorance with respect to particular
domains of possible knowledge (some of which we can glimpse no more
than a bat can glimpse the operations of basic arithmetic);
Conclusion: Resources – including time, money and intellectual energy –
would be far better spent on developing safe and effective cognition-
enhancing technologies than on further mapping out the topographical
minutiae of those regions of the epistemic landscape that are already
accessible to us: not only are we bound to encounter vicinities of knowledge
that are forever off-limits to us, but we are rapidly losing – indeed, have
already lost – the ability to meaningfully locate ourselves on the vast map of
collective human knowledge.
The telos here is, of course, to solve the two formidable problems discussed in
sections two and three. Directing resources away from the quotidian, business-as-
usual sort of research that typically occurs in the sciences (and humanities) and
instead towards the realization of effective cognitive enhancement technologies
promises not only to change the truth-value of the claim that ‘everyone today
knows almost nothing about most things’, but to place within our epistemic reach
the ideas and concepts necessary for us, collectively or individually, to grasp a
(more) complete theory of the cosmos. In the terminology developed here, the
former effort would (aim to) mitigate or eliminate what we have called relative
ignorance, and the latter would (aim to) eliminate absolute
ignorance. As Bostrom notes, “a ‘superficial’ contribution that facilitates work
across a wide range of domains can be worth much more than a relatively
‘profound’ contribution limited to one narrow field, just as a lake can contain a lot
more water than a well, even if the well is deeper. No contribution would be more
generally applicable than one that improves the performance of the human brain”
(Bostrom 2008).
Without committing to any utopian illusion in which technology constitutes
a catholicon, or universal remedy, for all our epistemic woes, the argument put
forth here grounds itself in reasonable expectations, empirical observations and
pragmatic considerations.xxxiv It is an argument for a change in practice based on
perceived limitations in principle. Unless we as a species are willing to humbly
accept our state of multiple cognitive closure (and one need not be a rabid
anti-intellectual or technophobic Luddite to respect this option), cognitive
enhancement technologies appear to be the only viable alternative.

Works Referenced:

[author citation]

Allhoff, F., L. Patrick and J. Steinberg. 2009. Ethics of Human Enhancement: An
Executive Summary. Science and Engineering Ethics. Online First.

Baker, Lynne Rudder. Persons and the Extended-Mind Thesis. Forthcoming in
Zygon: Journal of Religion and Science.

Bostrom, Nick. 2008. Three Ways to Advance Science. Nature. Podcast, URL =
<http://www.nickbostrom.com/views/science.pdf>.

Bostrom, Nick and Anders Sandberg. Cognitive Enhancement: Methods, Ethics,
and Regulatory Challenges. Forthcoming in Science and Engineering Ethics.

Bostrom, Nick and Rebecca Roache. Smart Policy: Cognitive Enhancement and
the Public Interest. Forthcoming in Enhancing Human Capabilities.

Caplan, Bryan. 2001. Rational Ignorance vs. Rational Irrationality. URL =
<http://economics.gmu.edu/bcaplan/ratirnew.doc>.

Chalmers, David. 1995. Facing Up to the Problem of Consciousness. Journal of
Consciousness Studies 2(3): 200-219.

Cherniak, Christopher. 1990. Minimal Rationality. Cambridge: MIT Press.

Clark, Andy. 2004. Natural-Born Cyborgs: Minds, Technologies, and the Future
of Human Intelligence. Oxford: Oxford University Press.

Clark, Andy and David Chalmers. 1998. The Extended Mind. Analysis. 58(1): 7-
19.

Craver, Carl. 2007. Explaining the Brain. Oxford: Oxford University Press.

Dawkins, Richard. 1992. Progress. In Keywords in Evolutionary Biology. Evelyn
Fox Keller and Elisabeth Lloyd (eds). 263-272. Cambridge: Harvard University
Press.

Dawkins, Richard. 2006. The God Delusion. New York: Mariner Books.

de Grey, A., B. Ames, J. Andersen, A. Bartke, J. Campisi, C. Heward, R.
McCarter, and G. Stock. 2002. Time to Talk SENS: Critiquing the Immutability of
Human Aging. Annals of the New York Academy of Sciences 959: 452-62.

Dennett, Daniel. 1991. Review of McGinn, The Problem of Consciousness. The
Times Literary Supplement, May 10, 1991.

Dennett, Daniel. 1996. Darwin’s Dangerous Idea: Evolution and the Meanings of
Life. New York: Simon & Schuster.

Fodor, Jerry. 1983. The Modularity of Mind. Cambridge: MIT Press.

Godfrey-Smith, Peter. 2009. Abstractions, Idealizations, and Evolutionary Biology.
In Mapping the Future of Biology. Anouk Barberousse, Michel Morange and
Thomas Pradeu (eds). 47-56. Dordrecht: Springer Netherlands.

Ihde, Don. 1990. Technology and the Lifeworld: From Garden to Earth. Indiana:
Indiana University Press.

Jacobs, Gregg. 2003. The Ancestral Mind: Reclaim the Power. London: Penguin.

Kelly, Kevin. 2008. The Expansion of Ignorance. The Technium. URL =
<http://www.kk.org/thetechnium/archives/2008/10/the_expansion_o.php>.

Kevles, Daniel. 1992. Eugenics. In Keywords in Evolutionary Biology. Evelyn Fox
Keller and Elisabeth Lloyd (eds). 92-94. Cambridge: Harvard University Press.

Kuhn, Thomas. 1996. The Structure of Scientific Revolutions. 3rd edition.
Chicago: University of Chicago Press.

Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology.
New York: Viking.

McGinn, Colin. 2000. The Mysterious Flame: Conscious Minds in a Material
World. New York: Basic Books.

McGinn, Colin. 2006. Can We Solve the Mind-Body Problem? In The Philosophy
of Mind: Classical Problems/Contemporary Issues. Brian Beakley and Peter
Ludlow (eds). 321-337. Cambridge: MIT Press.

Migliore, Daniel. 2004. Faith Seeking Understanding: An Introduction to
Christian Theology. 2nd edition. Grand Rapids, MI: Wm. B. Eerdmans Publishing
Company.

Mooney, Chris and Sheril Kirshenbaum. 2009. Unscientific America: How
Scientific Illiteracy Threatens Our Future. New York: Basic Books.

Nagel, Thomas. 1974. What Is It Like to Be a Bat? The Philosophical Review
83(4): 435-450.

Retherford, Robert and William Sewell. 1988. Intelligence and Family Size
Reconsidered. Social Biology 35(1-2): 1-40.

Ruse, Michael. 1996. Monad to Man: The Concept of Progress in Evolutionary
Biology. Cambridge: Harvard University Press.

Schneider, Susan. 2008. Future Minds: Transhumanism, Cognitive Enhancement
and the Nature of Persons. URL =
<http://repository.upenn.edu/cgi/viewcontent.cgi?article=1037&context=neuroethics_pubs>.

Sellars, Wilfrid. 1956. Empiricism and the Philosophy of Mind. In The Foundations
of Science and the Concepts of Psychoanalysis, Minnesota Studies in the
Philosophy of Science, Vol. I. H. Feigl and M. Scriven (eds). 127-196.
Minneapolis, MN: University of Minnesota Press.

Skenkman, Rick. 2008. Just How Stupid Are We?: Facing the Truth About the
American Voter. New York: Basic Books.

Stoljar, Daniel. 2009. Physicalism. The Stanford Encyclopedia of Philosophy.
Edward N. Zalta (ed). URL =
<http://plato.stanford.edu/archives/fall2009/entries/physicalism/>.

Theobald, Robert. 1996. Who said we wanted an information superhighway?
Internet Research 6(2/3): 90-92.

Torres, Phillip. 2009. A Modified Conception of Mechanisms. Erkenntnis 71(2):
233-251.

Weisberg, Michael and Ryan Muldoon. Epistemic Landscapes and the Division of
Cognitive Labor. Forthcoming in Philosophy of Science.

Winner, Langdon. 1977. Autonomous Technology: Technics-out-of-Control as a
Theme in Political Thought. Cambridge: MIT Press.

Wright, Ronald. 2004. A Short History of Progress. New York: Da Capo Press.
i. This pertains to the “breadth-depth tradeoff” that I discuss below.
ii. I borrow this distinction, made in a slightly different context, from McGinn 2006, 331.
iii. Thus, allowing one to learn more per increment of time.
iv. Note: I will bracket a number of extremely important issues concerning the ethicality of cognitive
enhancement (Bostrom and Sandberg forthcoming; Allhoff et al. 2009), cognitive enhancement and personal
identity (Schneider 2008), etc. What concerns me at present is only the possibility of solving the size and type
problems of ignorance via cognition-enhancing methods. Further discussion would indeed examine the ethical
and philosophical implications of cognitively enhancing the human mind.
v. Note that the term ‘vertical knowledge’ has been used in a number of different disciplinary contexts, such as
contemporary digital humanities. The definition in this field, though, is almost exactly opposite the way I use the
metaphor of verticality in this paper: someone who has “vertical knowledge” has tremendous knowledge about a
single topic – he or she is an expert. My sense refers more to wide knowledge, to interdisciplinarity or, at the
ideal extreme, the ability for one “to understand how things in the broadest possible sense of the term hang
together in the broadest possible sense of the term” (Sellars 1956, 37). That is verticality.
vi. This problem is typically approached in the sciences through abstraction and idealization. See Godfrey-Smith
2009 for discussion.
vii. This has led to a theory in economics and decision theory called “rational ignorance theory.” See, e.g., Caplan
2001.
viii. Ronald Wright quotes an unidentified person who “once defined specialists as ‘people who know more and
more about less and less, until they know all about nothing’” (Wright 2004, 29). Another memorable witticism
comes from Robert Theobald, who contends that “when information doubles, knowledge halves and wisdom
quarters” (Theobald 1996). According to a calculation made by Kevin Kelly and the Google economist Hal
Varian, in fact, “world-wide information has been increasing at the rate of 66% per year for many decades”
(Kelly 2008). Yet another datum to add to the heap.
ix. As the critic of technology Langdon Winner writes: “If ignorance is measured by the amount of available
knowledge that an individual or collective ‘knower’ does not comprehend, one must admit that ignorance, that is
relative ignorance, is growing” (Winner 1977, 283).
x. There are, of course, the obvious political and social costs of ignorance – costs that are in no way trivial.
xi. Kuhn’s notion of incommensurability applies diachronically to single disciplines. In contrast, the sense used
here applies synchronically to multiple disciplines. Of course, all that is needed to overcome this kind of
incommensurability is greater familiarity with the methodological, observational and semantic peculiarities of
those disciplines foreign to some individual – but therein lies the problem!
xii. Consider the case of ‘faith knowledge’, a term I came across in Daniel Migliore’s book Faith Seeking
Understanding. Migliore specifies as a principle of Christology that “Knowledge of Jesus Christ is not simply
‘academic’ or historical knowledge; it is faith knowledge” (Migliore 2004, 167; emphasis in original).
Similarly, the Vatican Council, III, iv, states that “the Catholic Church has always held that there is a twofold
order of knowledge, and that these two orders are distinguished from one another not only in their principle but
in their object; in one we know by natural reason, in the other by Divine faith; the object of the one is truth
attainable by natural reason, the object of the other is mysteries hidden in God, but which we have to believe and
which can only be known to us by Divine revelation.” From the perspective of analytic philosophy, though, the
collocation of ‘faith’ and ‘knowledge’ is utterly oxymoronic – a nonsensical locution, since faith and knowledge
are exact epistemological opposites.
xiii. One almost feels as if she were in the Quinean predicament of radical translation, with her interlocutor
gesturing at rabbits, or undetached rabbit parts, or time-slices of rabbits, and so on, shouting “Gavagai!” But
which of these disjuncts is the “real” referent seems, at times, inscrutable.
xiv. That is, there are several senses in which the human situation is fixed and finite.
xv. Truths within this supercategory are, of course, unknown to us humans because they are unknowable.
xvi. I allude here to Thomas Nagel’s (1974) famous paper on the phenomenology of bat echolocation – a “what it
is like” that seems to permanently lie outside the realm of purely objective science.
xvii. As Dawkins insightfully writes: “[I want to pursue the point] that the way we see the world, and the reason
why we find some things intuitively easy to grasp and others hard, is that our brains are themselves evolved
organs: on-board computers, evolved to help us survive in a world – I shall use the name Middle World – where
the objects that mattered to our survival were neither very large nor very small; a world where things either stood
still or moved slowly compared with the speed of light; and where the very improbable could safely be treated as
impossible. Our mental burka window is narrow because it didn’t need to be any wider in order to assist our
ancestors to survive” (Dawkins 2006, 367-368).
xviii. In a slightly different terminology, Jerry Fodor (1983) labels this same idea “epistemic boundedness.” Thus,
we are epistemically bounded from exploring certain regions of possible knowledge about the cosmos.
xix. Interestingly, McGinn distinguishes between relative and absolute cognitive closure. He writes: “A problem is
absolutely cognitively closed if no possible mind could resolve it; a problem is relatively closed if minds of
some sorts can in principle solve it while minds of other sorts cannot” (McGinn 2006, 329). For the purposes of
this paper, I want to avoid getting entangled in the net of abstruse issues concerning the possibility of absolute
cognitive closure.
xx. Note the semantic origin of ‘organism’: it comes from ‘organ’, which derives from the Greek etymon organon.
In Greek, this word meant “tool, instrument, engine of war, …” (OED). Thus, from the etymological
perspective, the metaphor “organisms are artifacts” is analytically true.
xxi. In fact, McGinn conjectures that the mind-body problem might be rather simple, even though completely
opaque to us humans.
xxii. See [author citation] for a thorough critique of techno-progressionism, especially as it manifests itself in the
contemporary transhumanist movement. Obviously, if there is a final theory, then any movement towards this
end would indeed count as scientific progress in the strongest sense. But one should always be wary of the
millennialist accretions that build up around notions of absolute progress. See also Ruse 1996 for a thorough
discussion of progressionism.
xxiii. See also the “Baldwin effect.”
xxiv. Any allusion here to Donald Davidson’s “Swampman” is merely incidental.
xxv. See Dawkins’ 1992 article on evolutionary progress.
xxvi. In fact, there appears to be a negative correlation between measured intelligence and fertility rate. See, e.g.,
Retherford and Sewell 1988. The well-known Flynn effect appears to be the result of environment rather than
genes.
xxvii. Today, a “softer” kind of eugenics is finding expression in the growing field of reprogenetics – a
portmanteau of ‘reproductive’ and ‘genetics’. See Kevles 1992 for an informative overview of eugenics.
xxviii. Clark and Chalmers also argue that their thesis entails that the self is itself extendible beyond traditional
organismic boundaries – that is, if one takes the self to be constituted not just by occurrent beliefs (those beliefs
one has right now) but by dispositional beliefs too.
xxix. Don Ihde (1990) has much to say about the phenomenological relations between human users and the
technologies used; there are connections to be made between Ihde and Clark/Chalmers, no doubt, though no one
has yet made them.
xxx. Of course, to say that technology has, as a matter of fact, played a part in our evolution is not to say that it
ought to have played such a role, or that it ought to play such a role in the future. The transhumanist project must
be justified.
xxxi. See Kurzweil 2005 and [author citation] for more on the “GNR” revolution.
xxxii. As Dennett writes in a hostile review of McGinn: “His thesis about the likely limitations of our brains would
be uncontroversially true if it weren’t for our clever trick of expanding the powers of our naked brains by
off-loading much of the work to artifacts we have designed and built just for this purpose. The brains we were
born with are no doubt quite incapable of grasping long division – let alone calculus or photosynthesis – without
the aid of pencil and paper or chalk and blackboard. We have to work to acquire some of our concepts, but we
don’t have to do all the work in our heads, thank goodness. One might think, then, that in order to defend a thesis
about the outer limits of our powers, one should at least take a peek at the concepts made available to those who
have armed themselves with the new technology” (Dennett 1991). Note that Dennett’s target is not cognitive
closure per se, but the claim that the mind-body problem is off-limits for us humans. The present thesis is
actually defended in the spirit of Dennett’s critique.
xxxiii. That is, we can’t know for sure until we’ve tried. And the argument here is that resources would be far
better spent trying than continuing to work on the profusion of micro-problems that currently occupy our minds.
xxxiv. These roughly correspond, of course, to the three premises above.
