Digimodernism
How New Technologies Dismantle the Postmodern
and Reconfigure Our Culture

Alan Kirby
2009

The Continuum International Publishing Group Inc
80 Maiden Lane, New York, NY 10038

The Continuum International Publishing Group Ltd
The Tower Building, 11 York Road, London SE1 7NX

www.continuumbooks.com

Copyright © 2009 by Alan Kirby

All rights reserved. No part of this book may be reproduced, stored in a retrieval
system, or transmitted, in any form or by any means, electronic, mechanical,
photocopying, recording, or otherwise, without the written permission of the
publishers.

ISBN: 978-0-8264-2951-3 (hardcover)
ISBN: 978-1-4411-7528-1 (paperback)

Printed in the United States of America


Contents

Introduction

1. The Arguable Death of Postmodernism
    Children’s Postmodernism: Pixar, Aardman, DreamWorks
    Killing Postmodernism: Dogme 95, New Puritans, Stuckists
    Burying Postmodernism: Post-Theory
    Succeeding Postmodernism: Performatism, Hypermodernity, and so on
    Cock and Bull

2. The Digimodernist Text
    Reader Response
    The Antilexicon of Early Digimodernism

3. A Prehistory of Digimodernism
    Industrial Pornography
    Ceefax
    Whose Line Is It Anyway?
    House
    B. S. Johnson’s The Unfortunates
    Pantomime

4. Digimodernism and Web 2.0
    Chat Rooms (Identity)
    Message Boards (Authorship)
    Blogs (Onwardness)
    Wikipedia (Competence)
    YouTube (Haphazardness)
    Facebook (Electronic)

5. Digimodernist Aesthetics
    From Popular Culture to Children’s Entertainment
    The Rise of the Apparently Real
    From Irony to Earnestness
    The Birth of the Endless Narrative

6. Digimodernist Culture
    Videogames
    Film
    Television
    Radio
    Music
    Literature

7. Toward a Digimodernist Society?
    The Invention of Autism
    The Return of the Poisonous Grand Narrative
    The Death of Competence

Conclusion: Endless

Notes
Works Cited
Index
Introduction

Now . . . bring me that horizon.

Pirates of the Caribbean: The Curse of the Black Pearl, 2003¹

Since its first appearance in the second half of the 1990s under the impetus
of new technologies, digimodernism has decisively displaced postmodern-
ism to establish itself as the twenty-first century’s new cultural paradigm.
It owes its emergence and preeminence to the computerization of text,
which yields a new form of textuality characterized in its purest instances
by onwardness, haphazardness, evanescence, and anonymous, social and
multiple authorship. These in turn become the hallmarks of a group of
texts in new and established modes that also manifest the digimodernist
traits of infantilism, earnestness, endlessness, and apparent reality. Digi-
modernist texts are found across contemporary culture, ranging from
“reality TV” to Hollywood fantasy blockbusters, from Web 2.0 platforms
to the most sophisticated videogames, and from certain kinds of radio
show to crossover fiction. In its pure form the digimodernist text permits
the reader or viewer to intervene textually, physically to make text, to add
visible content or tangibly shape narrative development. Hence “digimod-
ernism,” properly understood as a contraction of “digital modernism,” is a
pun: it’s where digital technology meets textuality and text is (re)formulated
by the fingers and thumbs (the digits) clicking and keying and pressing in
the positive act of partial or obscurely collective textual elaboration.
Of all the definitions of postmodernism, the form of digimodernism
recalls the one given by Fredric Jameson. It too is “a dominant cultural
logic or hegemonic norm”; not a blanket description of all contemporary
cultural production but “the force field in which very different kinds of
cultural impulses . . . [including] ‘residual’ and ‘emergent’ forms of cultural
production . . . must make their way.”2 Like Jameson, I feel that if “we do
not achieve some general sense of a cultural dominant, then we fall back
into a view of present history as sheer heterogeneity, random difference . . .
[The aim is] to project some conception of a new systematic cultural
norm.”3 Twenty years later, however, the horizon has changed; the domi-
nant cultural force field and systematic norm is different: what was post-
modernist is now digimodernist.
The relationships between digimodernism and postmodernism are
various. First, digimodernism is the successor to postmodernism: emerg-
ing in the mid-late 1990s, it gradually eclipsed it as the dominant cultural,
technological, social, and political expression of our times. Second, in its
early years a burgeoning digimodernism coexisted with a weakened,
retreating postmodernism; it’s the era of the hybrid or borderline text (The
Blair Witch Project, The Office, the Harry Potter novels). Third, it can be
argued that many of the flaws of early digimodernism derive from its con-
tamination by the worst features of a decomposing postmodernism; one of
the tasks of a new digimodernist criticism will therefore be to cleanse its
subject of its toxic inheritance. Fourth, digimodernism is a reaction against
postmodernism: certain of its traits (earnestness, the apparently real)
resemble a repudiation of typical postmodern characteristics. Fifth, histor-
ically adjacent and expressed in part through the same cultural forms,
digimodernism appears socially and politically as the logical effect of post-
modernism, suggesting a modulated continuity more than a rupture. These
versions of the relationship between the two are not incompatible but
reflect their highly complex, multiple identities.
On the whole I don’t believe there is such a thing as “digimodernity.”
This book is not going to argue that we have entered into a totally new
phase of history. My sense is that, whatever its current relevance in other
fields, postmodernism’s insistence on locating an absolute break in all
human experience between the disappeared past and the stranded present
has lost all plausibility. The last third of the twentieth century was marked
by a discourse of endings, of the “post-” prefix and the “no longer” structure,
an aftershock of 1960s’ radicalism and a sort of intellectual millenarianism
that seems to have had its day. Like Habermas, my feeling is that, ever
more crisis ridden, modernity continued throughout this period as an
“unfinished project.” Although the imponderable evils of the 1930s and 40s
could only trigger a breakdown of faith in inherited cultural and historical
worldviews such as the Enlightenment, the nature and scale of this reac-
tion were overstated by some writers. In so far as it exists, “digimodernity”
is, then, another stage within modernity, a shift from one phase of its his-
tory into another.
Certain other kinds of discourse are also not to be found here. I won’t be
looking at how digitization actually works technically; and I won’t do more
than touch on the industrial consequences, the (re)organization of TV
channels, film studios, Web start-ups, and so on, which it’s occasioned. I’m
a cultural critic, and my interest here is in the new cultural climate thrown
up by digitization. My focus is textual: what are these new movies, new TV
programs, these videogames, and Web 2.0 applications like to read, watch,
and use? What do they signify, and how? Digimodernism, as well as a break
in textuality, brings a new textual form, content, and value, new kinds of
cultural meaning, structure, and use, and they will be the object of this
book.
Equally, while digimodernism has far-reaching philosophical implica-
tions with regard to such matters as selfhood, truth, meaning, representation,
and time, they are not directly explored here. It’s true that these arguments
first saw the light of day in an article I wrote for Philosophy Now in 2006,
but the cultural landscape was even then my primary interest.4 In that arti-
cle I called what I now label digimodernism “pseudo-modernism,” a name
that on reflection seemed to overemphasize the importance of certain
concomitant social shifts (discussed here in Chapter 7). The notion of
pseudomodernity is finally a dimension of one aspect of digimodernism.
The article was written largely in the spirit of intellectual provocation;
uploaded to the Web, it drew a response that eventually persuaded me the
subject deserved more detailed and scrupulous attention. I’ve tried to
address here a hybrid audience, and for an important reason: on one side,
it seemed hardly worth discussing such a near-universal issue without
trying to reach out to the general reader; on the other, it seemed equally
pointless to analyze such a complex, multifaceted, and shifting phenome-
non without a level of scholarly precision. Whatever the result may be, this
approach is justified, even necessitated, by the status and nature of the
theme. Finally, considerations of space precluded extensive discussion of
postmodernism, and the text therefore assumes that we all know well
enough what it is/was. Anyone wishing for a fuller account is advised to
read one of the many introductions available such as Simon Malpas’s
The Postmodern (2005), Steven Connor’s Postmodernist Culture (1997, 2nd
edition), or Hans Bertens’s The Idea of the Postmodern (1995).
I begin by assessing the case for the decline and fall since the mid-late
1990s of postmodernism, in part as a way of outlining the context within
which its successor appeared. In Chapter 2 I discuss digimodernism’s most
recognizable feature, its new textuality; I then sketch a prehistory of its traits
in the period before the emergence of its technological basis. Chapter 4
examines instances of digimodernism on the Internet, while Chapter 6
considers its impact on preexisting cultural and textual forms. They are
separated by a study of aesthetic characteristics common to all digimod-
ernist textual modes, electronic or not. I finish with some remarks on the
possibility of a digimodernist society.
I would like to thank David Barker and everyone at Continuum for their
support and assistance, and the staff at Oxford’s Bodleian and (public)
Central Libraries for their indefatigable help on what must sometimes have
seemed a strange project. My thanks also go to Kyra Smith for guidance
through the world of videogames, and to Keith Allen, Wendy Brown, and
Mark Tyrrell. I wish to express my lasting gratitude to Peter Noll, whose
energy and brilliance awoke me from my slumbers, and to Brian Martin.
Above all, I wish to thank my parents and Angeles, without whose support
and love this book would never have appeared. It’s dedicated to Julian,
whose horizon, in some form, this will be.
1. The Arguable Death of Postmodernism

[T]wenty years ago, the concept “postmodern” was a breath of fresh air, it suggested
something new, a major change of direction. It now seems vaguely old-fashioned.
Gilles Lipovetsky, 2005¹

How might you know that postmodernism was dead? To call anything
“obsolete,” “finished,” or “over” is clearly to fuse an allegation of previous
existence with one of contemporary absence, and let us leave to one side for
now any objections to the first proposition. Assuming that postmodernism
was once alive, what would it mean to say it was dead? This is partly a
request for evidence, but there is a more fundamental problem here, to do
with the fixing of criteria for the claim, which doesn’t apply to calling a
sentient being deceased or an event concluded. We don’t really know what
the criteria for such a claim are. Yet cultural or historical periods do end:
nobody seriously believes that terms such as the Stone Age or the Dark
Ages, the Renaissance or Romanticism are appropriate or useful in the def-
inition of social or artistic trends at the start of the twenty-first century.
Despite this, it can still be felt that some of the traits of expired eras linger
on, possibly in subsumed or mutated form, and this can be asserted com-
pellingly; it can also be asserted, though, and in slightly different ways, for
sentient beings and events. As a result, it can be argued with absolute assur-
ance that a day will come when postmodernism is over as an appropriate
or useful category to define the contemporary, even if some of its traits were
to survive. It will only be a question of working out when this happened.
Discussing the possible eclipse of postmodernism with some students, I saw
one or two of them bridle as though irritated by or contemptuous of the
idea. But this can only be historical ignorance or fear of the unknown: of
course, one day it must be gone. And the problem of knowing when that
day is, is also the problem of deciding on the criteria for such a death.

If it is so hard either to be certain that postmodernism is dead or to set
the criteria for such a claim, you might wonder why I should bother with
the point. However, I am arguing here precisely that digimodernism has
succeeded it as the contemporary cultural-dominant, and to study the hab-
its of the current monarch presupposes the passing of his or her predeces-
sor. It will be clear that I am not advancing the absolutist view that no trace
of postmodernism can any longer be found in our culture; indeed, facets
of postmodernism have now found a place, often embedded as “common
sense,” as that which “goes without saying,” within the digimodernist
landscape. The culture of the first decade of this century can be seen as,
strictly speaking, “early digimodernist,” and one of the characteristics of
this period is its inflection by assumptions derived from postmodernism:
the particular nature of Web 2.0 is strongly influenced by postmodernism’s
love of “more voices,” for instance.
But if postmodernist traits linger on today, often damagingly as I will
argue later, it is no longer the cultural-dominant. You can analyze Big
Brother, Web 2.0, or the Harry Potter franchise in its terms if you insist,
but you’ll miss what’s most interesting about them; forcing them into a
model framed a quarter of a century earlier can only shortchange and dis-
tort them. If this book, then, is about the emergence of a new cultural-
dominant, it is also in a minor sense about the passing of another, and in
this chapter I am going to assess what the argument that postmodernism is
dead would look like were it to be made at the end of the first decade of the
new millennium.
In June 2008 I put the term “postmodernism” and its cognates into the
database of Oxford University’s Bodleian Library, whose holdings include,
by law, a copy of every book published in the United Kingdom. I found
scores of texts published in the previous five years or so with the term in
their title or subtitle; but on closer inspection almost all fell into a narrow
range of categories. There were books of literary criticism dealing with
accepted postmodernist writers like Auster, Barthelme, Borges, DeLillo,
Gray, and Pynchon. There were also books introducing postmodernism
to a new generation of students by authors like Richard Appignanesi,
Christopher Butler, Ian Gregson, Simon Malpas, and Glenn Ward, along
with series titles by publishers such as Cambridge University Press and
Routledge. However, and surprisingly perhaps, a huge proportion of these
texts considered postmodernism in relation to theology and Christian
doctrine in general. Among countless examples were: Above All Earthly
Pow’rs: Christ in a Postmodern World, The Experience of God: A Postmodern
Response, The Next Reformation: Why Evangelicals Must Embrace Postmo-
dernity, Opting for the Margins: Postmodernity and Liberation in Christian
Theology, Postmodernism and the Ethics of Theological Knowledge, and
Who’s Afraid of Postmodernism? Taking Derrida, Lyotard, and Foucault to
Church.
This number of books is far less impressive than it may seem. Literary
criticism and student guides were equally being published for Romanticism
and its accepted artists and authors; were postmodernism extinct, such
titles would nonetheless have appeared. As for the third grouping, I am
reminded of Francis Schaeffer, an evangelical pastor-cum-theologian
who argued in the 1960s that post-Enlightenment despair had gradually
spread across the West from the realm of philosophy through art into
music, on into general culture, finally reaching theology.2 He might then
have regarded the current popularity of postmodernism among the theo-
logians as a sign of its arrival at its intellectual and historical terminus. Be
this as it may, none of these books can be seen to extend or refurbish post-
modernism; instead, they take the term as a fixed quantity, they assume its
meaning rather than contesting or renewing it. For them, as for the student
guides, it’s a settled and known matter, not a dynamic and growing entity.
En bloc, they tend to confirm Ernst Breisach’s observation, made in 2003,
that “postmodernism has been, for some time now, in the aftermath of its
creative period.”3
There is, indeed, a fourth category of recent book, with titles like Drama
and/after Postmodernism, Guy Davenport: Postmodern and After, Martin
Amis: Postmodernism and Beyond, Philosophy after Postmodernism, Pho-
tography after Postmodernism, and Reclaiming Female Agency: Feminist Art
History after Postmodernism. None of these is very enlightening about the
period succeeding the eclipse of postmodernism, or even very definite
about whether we are in such a period. That publishers have recently
become fond of such titles is, though, a sign of the times: fifty years is a
decent lifespan for an aesthetic movement (compare realism or modern-
ism), and postmodernism was born in the 1950s; it is also a long enough
period for philosophical concerns to shift and for seismic sociohistorical
changes to seem to demand new ways of thinking and experiencing. This
of course proves nothing in itself, but then again this is going to be a chap-
ter in which “proof” is very hard to come by. Think of it, perhaps, as a first
draft of the final chapter of a history of postmodernism written by some-
one unknown a century hence.
I am going to look at reflections of the status of postmodernism since
the mid- to late-1990s in cinema, literature, art, cultural theory, and philos-
ophy. In places it can be seen to have reached (or passed) saturation point
in its penetration of the arts, becoming mainstream and conventional; else-
where it has seemed played out; or it has been rejected; or its possible
successor has been announced (as by the present author). In each case I
shall ask whether this constitutes evidence of the death of postmodernism
and, if so, how much. Alert readers may notice that the cases are, as the
chapter wears on, increasingly self-conscious about the fatal implications
for postmodernism of their agenda; how this relates to the persuasiveness
of that agenda is another matter entirely, though.

Children’s Postmodernism: Pixar, Aardman, DreamWorks

Pixar’s release of Toy Story in 1995 was a digimodernist landmark: the first
entirely computer-generated film. Technically it looked stunningly new,
and was immediately acclaimed as much a turning point in the history of
cinema as when The Jazz Singer had introduced sound. Yet, as the first
postmodern children’s movie, Toy Story’s interest lies more in its content
than in its quickly superseded technological innovations. Its postmodern-
ism derives partly from its hybridity, its fusing of children’s and grown-ups’
fictive modes: it blends traditions in children’s cinema (animation, the
child’s perspective, magic [toys that come alive], themes of loss and resto-
ration) with jokes for adults about Picasso, allusions to horror movies like
Freaks or The Exorcist, and a speed of dialogue and cutting not dissimilar
to that of MTV. It generates an apparent “cleverness” which is more like
street-smartness; it’s sharp and knowing, but in a largely negative and
uninformed manner (seeing through bogusness), which had a lasting
influence on the next decade’s cartoons; it led, for instance, to the faultfind-
ing reaction of the cubs to a story they are told in Blue Sky’s Ice Age: The
Meltdown (2006)—a destructive rather than an enabling “cleverness”
because it has been stripped of actual knowledge. Ironic, knowing, skepti-
cal, aware of and ambivalent about narrative conventions and codes, the
tone and mood of Toy Story are pervasively postmodern.
Hitherto, the heroes and heroines of animation had tended to be legend-
ary or mythopoeic characters drawn from traditional fairy-tale or adven-
ture sources. Those of Toy Story, a children’s fiction about children’s fictions,
however, are merchandising, action figures bought by parents in the wake
of visits to the cinema or purchases of videos; each one therefore com-
memorates, and brings to the film, his batch of preceding texts. Woody is a
cowboy toy who imports into the film the world of the heroic Western; he
is a hero, a commanding and resourceful leader, and calls up a raft of cul-
tural and cinematic memories and references. The first part of the film
focuses on his apparent supersession in his owner’s affections by the space-
man toy Buzz Lightyear, which, on one level, suggests the cultural shifts in
the United States from the kind of “pioneer” heroes erected by the 1950s
(Roy Rogers, etc.) to those of the 1970s (Neil Armstrong, Buzz Aldrin,
etc.). This story strand places the narrative closer to the experience of par-
ents (probably born about 1954–64) than to that of their children. Buzz
Lightyear, as his name suggests, is part-Aldrin, part-Luke Skywalker, and
invests the movie with his own raft of cultural and cinematic memories
and references around the heroic sci-fi/fantasy movie, especially Star Wars.
Many of the toys occupy such worlds, their interaction thereby becoming a
chaotic and comic jostling and intermingling of different textual sources.
While Woody is a simple citation, some of the other characters are paro-
dies of their original: the angst-ridden dinosaur looks back to the monsters
of Jurassic Park whom he ought to resemble but doesn’t, while simultane-
ously evoking memories of the cowardly lion of The Wizard of Oz. The film,
then, is largely composed of a quantity of cultural quotations crossing,
bumping, overlapping, and mingling with each other to highly postmod-
ern effect.
Yet its postmodernism goes even beyond that. Focusing on two small
boys, one loving, the other monstrous, it is striking for its absence of fathers;
the boys have sisters and mothers but there are no adult males, problema-
tizing the issue of who the boys are supposed to grow up into. “Adult” mas-
culinity—the fully developed male—is of course represented to the boys
through Woody and Buzz, and is therefore suffused with heroic assump-
tions; yet the thrust of the film, in accordance with postmodern theory,
demonstrates that such heroic masculinity is not so much “natural” as con-
structed by society, specifically as manufactured by corporate marketing
departments. There are powerful echoes here of Blade Runner. Buzz has to
learn the depressing news that, instead of the free, authentic individual he
believes himself to be (has been programmed to think he is), he is a com-
modity, advertised on television, sold in industrial quantities in shops,
identical to thousands of others. Toy Story 2 (1999) even contains a self-
referential gag, voiced as the characters tour a toy shop passing endless
replicas of themselves on sale for money, about the insufficient numbers of
Buzz Lightyear action figures stocked by retailers in 1995 in overly pessi-
mistic anticipation of the first Toy Story. This strand is powerfully anties-
sentialist; it reduces the self to something fabricated and sold by global
corporations (the toys discuss which companies made them—Mattel, Play-
skool—like children telling each other their parents’ names). This, then, is
what the boys will grow up into: elements of international capitalism,
employees and consumers, all identical to each other, saturated in advertis-
ing and shopping. Selfhood is participation in the marketplace as worker,
buyer, or product, all disposable. This ostensibly downbeat political mes-
sage, which seems to call for an antiglobalization, anticapitalist resistance,
is engulfed and overpowered in recognizably postmodern manner by the
film’s euphoric play of textual traces; its joyous pastiches, its depthless nos-
talgia, its ironic allusiveness. Knowing about children’s stories, Toy Story is
also very knowing and pointed about its status as a commodity in the inter-
national entertainment marketplace. A very postmodern film, then, and
hugely enjoyable; while all of this was made only more explicit in the
equally successful and enjoyable sequel released four years later. However,
in retrospect both films reflect a transitional cultural phase in their fusing
of postmodern content (in large part derived from The Simpsons, until then
a traditional drawn television cartoon) with its digimodernist means of
production.
This does not apply to the stop-motion methods of Aardman Anima-
tions’ Chicken Run (2000), methods almost as old as talking pictures them-
selves. The roots of the postmodernism of Chicken Run lie in the three
Wallace and Gromit shorts made between 1989 and 1995, which estab-
lished a highly recognizable aesthetic of allusion, pastiche, and parody
through which was generated a hyperreal northern England of the 1930s to
1950s. Saturated with irony, this locale is constructed on one side with
Jamesonian nostalgia as stodgy, prim dreariness redeemed by gentleness,
kindness, and simple optimism, and on the other as a cascade of references
to and parodies of texts associated with the time and place: 1930s’ Univer-
sal horror films such as Frankenstein and The Mummy, Heath Robinson’s
designs for surreally complicated devices to achieve simple ends, Oliver
Hardy’s pregnant looks to camera at his partner’s wearying eccentricity
(adopted by the dog Gromit), the classical Western (the sunset fade-out at
the end of The Wrong Trousers [1993]), Noël Coward and David Lean’s
Brief Encounter (1945; an image repertoire for English repression in this
period), film noir, pulp thrillers, and Battle of Britain movies like 633
Squadron with their valorous dogfights. That these last feature actual
canines as pilots (Gromit) reflects the pleasure that the films take in gags
seemingly constructed to fascinate semioticians and post-structuralist lin-
guists. Wallace removes a painting concealing a wall safe containing a
piggy-bank, but the painting depicts an identical piggy-bank. Gromit takes
refuge in a random cardboard box and cuts out a spy hole to look through
in the shape of a bone; from outside, we see that the box is covered by an
illustration of a dog whose drawn “eyes” Gromit has coincidentally replaced
with his own plasticine ones. The Wrong Trousers features possibly the best
postmodern joke ever, when Gromit glances up from reading a newspaper
on one of whose pages is the headline: “DOG READS PAPER.” In a delight-
fully self-conscious play of cartoon conventions, Gromit is permitted to be
as clever as any human but must remain locked within an exclusively bes-
tial ontology: so he reads “The Republic” by Pluto (another nod to the era’s
texts) and, in prison, “Crime and Punishment” by Fido Dogstoyevsky. A
tissue of détourné quotations, nostalgia and irony, these shorts display
above all a joyous and dazzling universality: they appeal to and satisfy both
small children and jaded adults (and overconceptual theorists) in a post-
modernism as natural and inevitable, as unstrained and as free as breath-
ing. This form of postmodernism as pure entertainment for all the family
bears witness on one level to the total permeation of society by a contem-
porary cultural movement, but on another level this mixture of sophisti-
cated self-awareness and generous fun suggests the movement has nowhere
left to go.
Chicken Run, Aardman’s first full-length feature, explores these tech-
niques and aesthetics at more than twice the length. It’s set on a farm whose
hens, seeing their approaching doom and yearning to be free, launch a suc-
cession of failed escape attempts, finally, with the help and inspiration of an
American rooster fortuitously arriving among them, building a rudimen-
tary airplane and flying out of their coop to liberty. The time and place are
the same as for the shorts, and they equip Chicken Run with several textual
models out of which it constructs itself. The first, and most commented
on, is the World War II (though written or filmed later) narrative about
daring British and American escapes from Nazi prisoner of war camps,
such as The Colditz Story, The Wooden Horse, and The Great Escape. The
second, unseen by critics, is George Orwell’s Animal Farm (1945), a fable
about oppressed farm animals rising up collectively to overthrow their
cruel human masters. The third is the girls’ boarding school narrative
such as Enid Blyton’s St. Clare’s and Malory Towers sequences of novels
(1941–45 and 1946–51 respectively). The fourth, but less pervasively, is the
narrative exploring the social and sexual impact on English society of the
sudden arrival in its countryside of glamorous, sexy, hip young American
airmen during World War II (e.g., The Way to the Stars [1945]): the rooster
(who belongs to a circus for whom he is unwillingly but profitably fired
from a cannon) falls into the camp one night and is taken by the dazzled,
smitten hens as a “flyer.”
The effect of building the narrative out of these textual materials is multi-
ple. First, the Colditz trace imbues Chicken Run with an heroic atmosphere:
it’s a tale of good overcoming evil (it contains one truly loathsome and ter-
rifying character); it foregrounds bravery, ingenuity, and resourcefulness.
Second, the echo of Orwell’s revolutionary animals sets all of this in
England, with English oppressors who are exiled by the revolt (impossible
in the Colditz narrative); his Communist allegory is reproduced as an
egalitarian determination among the hens that all should escape simulta-
neously (not found in the hierarchy-drenched Colditz genre). Third, the
boarding school source configures the interaction of the hens according to
the codes of 1940s upper-middle-class English girls: they live by an ethic of
unspoken decency, undying loyalty and pluck; their main enemy is a tow-
ering middle-aged Englishwoman, like a sadistic teacher. It is crucial that
Chicken Run does not parody the sources it adapts and manipulates so
cunningly, but remains within their values and tone: as the leader of the
hens climactically dispatches the evil farmer’s wife, the viewer’s heart leaps
for justice, liberty, and the pursuit of happiness. This is very clever pastiche,
in which the sources overlay each other in a rich and subtle polyphony;
and, like any postmodern text, the film suggests that time and place can
only be evoked via existing texts, which must then be quoted, fragmented,
détourned, played off. The wealth of textual voices is matched by gags that
twist ambiguity and polysemy out of language with a richness that often
goes over the characters’ heads: as a chicken slips down a slide into a mur-
derous machine, another cries: “Oh shoot/chute!”; as their plane prepares
to launch down the makeshift runway a shout of “chocks away!” leads to
the removal of the empty Toblerone boxes holding the wheels in place.
Chicken Run constructs itself out of the post-structuralist overdetermina-
tion of meaning and the postmodern tropes of quotation, allusion, irony,
and nostalgia. It’s a wonderful film, a postmodern classic, and my five-
year-old enjoyed it as much as I did.
The last of the four great postmodern cartoons, DreamWorks’ com-
puter-animated feature Shrek (2001) also had plenty to delight infants and
intellectuals alike. It begins with romantic music and an image of a child’s
book of fairy tales. The pages turn, revealing a story intoned by a male
voice-over: “Once upon a time there was a lovely princess. But she had an
enchantment upon her of a fearful sort which could only be broken by
love’s first kiss. She was locked away in a castle, guarded by a terrible, fire-
breathing dragon. Many brave knights had attempted to free her from this
dreadful prison, but none prevailed. She waited in the dragon’s keep, in the
highest room of the tallest tower, for her true love and true love’s first kiss.”
As the music swells, a sarcastic and ebullient laugh accompanies the sound
of a page ripping: “Like that’s ever gonna happen! What a load of-.” An ogre
emerges from a flushing toilet, his ass presumably wiped clean on the story.
A pop song starts up.

Whatever else it is, Shrek is a fairy tale: it recounts the journey of a brave
hero and his quadruped companion to a castle to save a beautiful princess
from a deadly dragon’s clutches, culminating in his marrying her and living
happily ever after. It is also a deconstruction of these conventions, as the
ripping of the page foreshadows: the “brave hero” is a stinking, ugly ogre,
who journeys for his own ends and not for love or honor; the “quadruped”
is a garrulous, cowardly donkey, irksome to the “hero”; the “beautiful prin-
cess” is turned by true love’s kiss (following a curse) into a fat, green, and
unattractive version of herself; the “deadly dragon” falls in love with and
marries the donkey; sequels would show that the married life of the “hero”
and “heroine” was anything but happy “ever after.” Early on a crowd of fic-
tional characters from an array of different narratives take very postmod-
ern refuge in the ogre’s swamp: there are the Three Little Pigs, Snow White
and the Seven Dwarfs, the Pied Piper of Hamelin (plus rats), the Three
Blind Mice, Pinocchio, and so on. Though referred to as “fairy-tale crea-
tures,” the net is actually cast slightly wider than that to encompass figures
from nursery rhymes and more recent children’s stories (e.g., Tinkerbell).
More precisely, they suggest the principal cast of Disney’s collective back
catalog, uprooted from their fictional settings (the notion here is that a
cruel ruler has evicted them, but clearly only symbolically), deracinated
and set floating away from their narrative homes.
However, and unlike Chicken Run, the postmodernism of Shrek does
not primarily lie in an ironic manipulation of a tissue of quotations. It lies
in its style, its registers and tones, which form a carefully orchestrated,
complex, brilliant, and hilarious clash of hybridities and anachronisms,
where all is deracinated and far from home. Heterogeneous and depthless,
knowing and self-conscious, allusive and affectless, Shrek interweaves and
plays off each other the medieval courtly past and the hip hyperreal present
like a French Lieutenant’s Woman for the twenty-first-century third-grader.
So it is a fairy tale, a romantic depiction of an exciting quest and the tri-
umph of true love with many of the ethnonarrative elements analyzed by
Propp’s Morphology of the Folk Tale. And it is also a cynical deconstruction
of all that. So it is packed with bits of contemporary pop culture, TV pro-
grams, cult movies, pop songs, consumerist cool; but the pop songs that
punctuate the action are of the kind that get nine-year-olds throwing them-
selves about on the floor of the school disco, fizzy, bright, and peppy, with
lyrics that refer to nothing at all (no “Venus in Furs” or “Holidays in the
Sun” here)—it’s children’s pop culture to match children’s fairy tales. And
the two fuse (the Monkees sing “I thought love was only true in fairy tales”
at the wedding party) or are made to (the Three Little Pigs break-dance).
So this becomes multiply a children’s entertainment, which is then inter-
woven with references for the grown-ups to real estate, pheromones, the
Heimlich maneuver, and Freudian compensation theory. Princess Fiona is
made to speak at first in mock-medieval courtly: “I am awaiting a knight so
bold as to rescue me . . . This be-ith our first meeting . . . I pray that you take
this favor as a token of my gratitude . . . thy deed is great, and thine heart is
pure . . . And where would a brave knight be without his noble steed?” This
mangled archaic discourse is played off against some up-to-the-minute
slang: 24/7, kiss ass, morons, “the chicks love that romantic crap!”
Shrek constructs, then, an infantilized adult and a cool kid as a pair of
viewers embraced by an all-encompassing postmodern aesthetic. In a
sense, it can be read as the postmodern synthesis of Chaucer’s Knight’s Tale
(all reverence, gallantry, and chivalric high-mindedness) with his Miller’s
Tale (all farting, shitting, and belching gags, choleric and irreverent) trans-
posed to the cusp of the media-saturated new millennium. In an emblem-
atic early scene the cruel ruler tortures the Gingerbread Man (by dipping
him in milk) to find out where the rest of the “fairy-tale trash” are hiding.
The interrogation turns briefly and apparently unwittingly into the words
to “Do you know the muffin man?” The mirror from Disney’s Snow White
and the Seven Dwarfs is brought in, questioned, and threatened with smash-
ing by one of the ruler’s henchmen. To mollify him, the mirror turns itself
into a sort of TV/VCR and, parodying the overwritten descriptions and
synthetic hype of TV’s The Dating Game, presents the ruler (who watches
eagerly) with three potential brides: Cinderella, Snow White, and Princess
Fiona. Fiona, as her name suggests, is a modern girl: she has absorbed and
recites reams of fairy-tale conventions (“You should sweep me off my feet,
out yonder window and down a rope on to your valiant steed . . . You could
recite an epic poem for me. A ballad? A sonnet!”), but later proves herself
able to burp, cook, fight, argue, and love as well as any other postfeminist.
In another telling scene, Robin Hood appears with his Merrie Men, who
sing and dance in a pastiche of classic Disney, though their steps momen-
tarily evoke the more contemporary Riverdance; Fiona attacks them with
martial arts moves, which successively invoke Bruce Lee, The Matrix, and
the film version of Charlie’s Angels (which, as Shrek’s makers know we
know, and we know they know, starred Cameron Diaz, Fiona’s voice artist).
Knocking them out, she sighs: “Man, that was annoying!” Disney is a target
throughout: their characters are liberated from those old stories, the ruler’s
dreary and sterile kingdom (entered through a turnstile) looks suspiciously
like Disneyland, and Fiona’s duet with a songbird makes the latter explode
on the high notes.

Disney is blown up. Or, to put it another way, Disney’s techniques and
content are belatedly brought forward and renewed into a postmodern age,
twenty years after postmodernism’s heyday. The hybridity and clashes that
Shrek adroitly manages are all reconciled within a prevailing and hugely
enjoyable postmodern aesthetic. Indeed, it suddenly made DreamWorks’
own traditionally animated films look terribly old-fashioned: The Prince of
Egypt (1998), The Road to El Dorado (2000), Joseph: King of Dreams (2000),
Spirit: Stallion of the Cimarron (2002), and Sinbad: Legend of the Seven
Seas (2003) were all drawn under a guiding aesthetic of heroic myth and
“inspiring” legend (down to the pompous colons) which the ferocious
and scintillating postmodernism of Shrek made obsolete overnight. In 2004
DreamWorks announced they would make no more 2D animation, dedi-
cating themselves solely to computer-generated fare and killing off a form
of cartoon narrative along with a tradition of cartoon-making; in 2006
Disney bought Pixar.
Yet these four movies were not the first appearance of postmodernism
in children’s stories. Shrek drew on Jon Scieszka and Lane Smith’s picture
book The Stinky Cheese Man and Other Fairly Stupid Tales, first published
almost a decade earlier and described as “a postmodern collection of fairy
tales for a postmodern time” by the literary critics Deborah Cogan Thacker
and Jean Webb,4 and as “Tristram Shandy for the primary school reader”
by the authors of a popular introduction to postmodernism.5 While this
claims too much of a series of amusing exercises in comic bathos, Cogan
Thacker and Webb are on more solid ground drawing parallels between
Philip Pullman’s metafictional children’s Gothic pastiche Clockwork: Or All
Wound Up, published in 1996, and Italo Calvino’s postmodern classic If on
a Winter’s Night a Traveler. Beyond Shrek and The Stinky Cheese Man lies
Angela Carter’s The Bloody Chamber and its postmodernization of fairy
tales for adults, a staple of university reading lists. However, what is more
interesting is not the (finally inevitable) appearance of postmodernism in
children’s literature, but rather the cultural and historical significance of
the arrival of children’s literature in postmodernism: the fact that what had
once denoted shifts in architectural theory now referred most vibrantly
to the entertainment of prepubescents. This surely suggested a new and
critical stage in the development of postmodernism, which by the turn of
the millennium had come to underpin a billion-dollar industry beloved by
preschoolers.
And soon another, even more damaging point in the history of postmod-
ernism was reached. If it had undermined postmodernism to be reduced
to a child’s plaything, it was even more humiliating to become that infant’s
discarded toy, grown out of and left behind. When postmodernism turned
into yesterday’s style in the eyes of children, it surely entered the absolute
past tense of contemporary culture. This change, which happened unevenly
over the next five or six years, was signaled by a succession of disastrous
films by DreamWorks: Shrek 2 (2004), Shark Tale (2004), Madagascar
(2005), and Flushed Away (2006, in partnership with Aardman). Mostly
pilloried by critics, they were increasingly unsuccessful at the box office
too. Essentially each took the ingredients of the first Shrek and, rather
than bake them into a postmodern cake, flung them pell-mell at the screen.
The films dissolve into a helter-skelter of scattershot allusions, parodies,
pastiches, fizzy pop songs, knowing irony, breakneck incidents, and adult-
oriented but unsummarizably dull story lines. Whereas Shrek had anchored
itself in a postmodern fairy tale it deconstructed as it went, the plots of
these movies—trying to get on with your in-laws, trying to save London’s
sewer rats, trying to protect yourself and your town from angry fish—are
a diffuse mess, a nothing, at which a disintegrated and nonaccumulating
tumult of stuff gets hurled. Overwrought and scarcely at all funny, they’re
unstructured and hyped-up fragments that, as well as breaking no new
ground as content or style, in fact transform the distinctive Shrek aesthetic
into a tiresome, convoluted and recycled postmodern blizzard. Paradoxi-
cally, they can be seen as even more postmodern than their illustrious
predecessor: it isn’t Disney’s characters that have been evicted from their
narrative home here, but Shrek’s postmodern assembly of elements. They
showcase the postmodernization of the postmodern, and it’s no fun at all.
The more critically and commercially successful films made by Pixar in
this period choose instead to marginalize and downplay the postmodern.
Finding Nemo (2003) can be read as a transtextual recasting of the tradi-
tional Disney narrative about a child separated from his or her parents,
reworked so that attention falls on the parent’s search rather than on the
subsequent informal fostering arrangements that emerge, where the
deeper subject becomes a radical interrogation of the social meaning of
masculinity. You can read it this way if you want, but the film certainly
doesn’t insist on your doing so. An alert postmodern viewer of the same
studio’s The Incredibles (2004) might note the pastiche of Superman and
Batman and the self-consciously ironic references to the narrative codes
of superhero comics (the deadliness of capes and of “monologuing”). But
these are episodic or early gestures only, contributing nothing to the
overall shape of the narrative or its controlling aesthetic. In Cars (2006),
the world of motor sports is depicted through a hyperreal media haze: we
see and listen to TV commentators on the races, see the rolling news TV
channels reporting on them, register a barrage of media gags (“The Jay
Limo Show,” “Braking News”), watch the stars at press conferences and
sponsorship appearances and in front of camera crews and their flashing
bulbs and TV interviewers and the whole media scrum. But the story works
to reject that “empty” world in favor of home, hearth, and reverence for
the older generation. Elsewhere, the postmodern is shoved literally into the
background: it can be glimpsed in the nostalgia that Cars shares with The
Incredibles and also Ratatouille (2007) for the vehicles, architecture, and
electronics of the 1950s and 60s. All of this is there, but it’s muted, side-
lined, unexceptional: these are not so much postmodern cartoons as car-
toons tinged with the vestiges of postmodernism.
Wallace and Gromit: The Curse of the Were-Rabbit (2005), Aardman’s
next stop-motion feature after Chicken Run, likewise largely sloughs off the
postmodern aesthetic (presumably now old-fashioned). The infrequent
nods to and parodies of 1930s’ Universal horror movies repeat material
from the studio’s shorts made a decade or more earlier. There is a general
move back toward more traditional modes of storytelling and sources of
humor: a proud yokel complacently invites the villain to “kiss my ar . . .
tichoke.” Sensing a trend, Shrek the Third (2007) improves dramatically on
its predecessor (without reaching the heights of its original) partly by half-
concealing the postmodern elements beneath a comedy of character with
added mild satire. The use of pop songs is made unobtrusive (no climactic
karaoke), the metafiction reined in: while the film starts and ends with dis-
rupted mises en abyme, and plays with the lovely idea that the characters
who lose out in children’s stories might gang up together in search of their
personal happy ending, these bits are unemphasized. The tone throughout
is warm rather than knowing or ironic. The tendency was noted by other
studios: Sony’s Open Season (2006), for instance, goes nowhere near any
postmodern pretensions and reverts instead to the ancient comic values of
slapstick and bodily functions.
A recurring element in this shift away from postmodernism in the chil-
dren’s cartoon is a new focus on environmental catastrophe, which in turn
places a premium on digimodernist aesthetic traits. Blue Sky’s Ice Age
(2002) had already touched on the rapacity of humans and their destruc-
tive relationship to their natural habitat, while its sequel deploys the end of
its eponymous epoch to make “global warming” the backcloth to its story.
DreamWorks’ Over the Hedge (2006) evokes, very ambiguously, the harm-
ful impact on the environment of relentless human consumerism and
exploitation of natural resources. The effect of these issues on such films is
seen most clearly in Animal Logic’s Happy Feet (2006), which moves from
long scenes where penguins formation-dance to rock songs (oddly remi-
niscent of Baz Luhrmann’s postmodern Moulin Rouge [2001]) to an earnest
Tolkien-style quest to repair the ecological damage done to their Antarctic
home. A watershed occurs when the hero renounces his knowingly ironic
buddies for his weighty and grave destiny. Fox’s The Simpsons Movie (2007)
struggles with the same subject. Back in 1998, in the TV episode “Trash
of the Titans,” Matt Groening’s team had treated it with breathtaking
black-comedic brilliance but, if the political problem had not evolved in
the interim, a cultural mood change was apparent. The film’s overhanging
seriousness is leavened by only the occasional gag; Homer’s oafishness,
once the essence of knowing irony, is by now just obnoxious; and the once-
lacerating manipulation of allusion and parody seems almost extinct.

Killing Postmodernism: Dogme 95, New Puritans, Stuckists

At a conference in Paris on March 20, 1995, celebrating the first centenary
and exploring the future of cinema, two Danish directors, Lars von Trier
and Thomas Vinterberg, announced the launch of a new avant-garde film-
making project called Dogme 95. Their manifesto, copies of which they
threw into the audience, called them:

a collective of film directors founded in Copenhagen in spring 1995.
Dogme 95 has the expressed goal of countering “certain tenden-
cies” in the cinema today.
DOGME 95 is a rescue action! . . .
Today a technological storm is raging, the result of which will be
the ultimate democratization of the cinema. For the first time, anyone
can make movies. But the more accessible the media becomes, the
more important the avant-garde. It is no accident that the phrase
“avant-garde” has military connotations. Discipline is the answer . . .
The “supreme” task of the decadent film-makers is to fool the audi-
ence. Is that what we are so proud of? Is that what the “100 years”
have brought us? Illusions via which emotions can be communicated?
[. . .] By the individual artist’s free choice of trickery?
Predictability (dramaturgy) has become the golden calf around
which we dance. Having the characters’ inner lives justify the plot is
too complicated, and not “high art.” As never before, the superficial
action and the superficial movie are receiving all the praise.
The result is barren. An illusion of pathos and an illusion of love.
To DOGME 95 the movie is not illusion!

Today a technological storm is raging of which the result is the
elevation of cosmetics to God. By using new technology anyone at
any time can wash the last grains of truth away in the deadly embrace
of sensation. The illusions are everything the movie can hide behind.
DOGME 95 counters the film of illusion by the presentation of an
indisputable set of rules known as the Vow of Chastity.6

This Vow consisted of ten propositions to which the signatory swore to
submit him or herself:

(1) Shooting must be done on location. Props and sets must not be
brought in . . .
(2) The sound must never be produced apart from the images or
vice versa . . .
(3) The camera must be handheld. Any movement or immobility
attainable in the hand is permitted . . .
(4) The film must be in color. Special lighting is not acceptable . . .
(5) Optical work and filters are forbidden.
(6) The film must not contain superficial action. (Murders, weapons,
etc. must not occur.)
(7) Temporal and geographical alienation are forbidden. (That is to
say that the film takes place here and now.)
(8) Genre movies are not acceptable.
(9) The film format must be Academy 35 mm.
(10) The director must not be credited.

The vow concludes: “My supreme goal is to force the truth out of my char-
acters and settings.”7 The project met with considerable initial success: its
first official film, Vinterberg’s Festen or The Celebration (or Dogme #1;
1998) won the Jury Prize at the Cannes Film Festival and both the New
York and the Los Angeles Film Critics’ awards for Best Foreign Film, while
Von Trier’s Idioterne or The Idiots (Dogme #2; 1998) also received much
critical acclaim. The project spread beyond Denmark, with American and
French directors making Dogme films at the turn of the millennium. Its
official Web site currently lists 254 such films, almost all virtually unseen.8
These statements never mention postmodernism and indeed, had they
done so, it would only have weakened their argument. In 1995 postmod-
ernism in cinema was marginal, restricted to a handful of cult films, though
it grew much more widespread in the second half of the 1990s as we have
seen. The project required a more pervasive and dominant enemy than the
aesthetics lying behind a Blade Runner or a Pulp Fiction. This was supplied
by the notion of artifice in cinema, and the anti-postmodernist implica-
tions of Dogme 95 are therefore unvoiced, contained in its wholehearted
embrace of values that postmodernism believed it had set in quotation
marks forever. Above all, Dogme 95 threw its arms around a supposedly
unproblematic and transcendent concept of Truth. With Truth came Reality,
also apparently uncomplicated and universal. Its films were to be created in
real places, with real props, using real sound, occurring in real time, con-
sisting of real events (uncorrupted by genre conventions), and avoiding
any sort of directorial or studio tampering with the footage either during
recording or in postproduction. In practice, the overall effect was to give
these fictions a documentary feel; the Vow of Chastity seemed (fraudu-
lently) to defictionalize invented narratives.
It is not important here whether the films stayed absolutely faithful to
the vow or not, or how sincere the signatories were (some accused them
of a public relations coup). What is striking to my mind about Dogme 95 is
how archaic it appears as a cultural event. On one hand, the suggestion that
the True and the Real can be accessed and evoked by a simple act of will is
disingenuous in the extreme, as if von Trier and Vinterberg had failed to
notice developments over the previous century or so in any and every
artistic and intellectual field (Jean-Luc Godard meets Rip Van Winkle). On
the other hand, the act of writing a cultural manifesto seems like an absurd
throwback; it suggests a pastiche of stories about cosmopolitan young
men congregating in early twentieth-century Parisian cafés to draft their
violently worded statements about what must be done with contemporary
art by a fetishized avant-garde. Von Trier and Vinterberg seem to yearn to
be Marinetti, or André Breton; they seem to long for it to be 1909, the year
“The Manifesto of Futurism” was published in a Paris newspaper, or 1924,
the year of the first Manifesto of Surrealism, again. Their rejection of post-
modernist strategies is revealed as a dreamy and impossible nostalgia for
modernism instead. Alternatively, they would like it to be the 1950s again:
point 9, enforcing use of the Academy 35mm film format, returns them
to a picture ratio made obsolete forty years earlier, while the opening words
of the manifesto sarcastically invoke the title of Truffaut’s nouvelle vague
manifesto Une certaine tendance du cinéma français, published in Cahiers
du Cinéma in 1954. The choice of Paris as the city in which to launch the
project was not accidental; and although von Trier and Vinterberg mock
the nouvelle vague, they ostentatiously overlook the fact that by 1995 France
had actually grown peripheral to the world of experimental filmmaking.

All this suggests that Dogme 95 sought to overthrow postmodernism
in cinema by reestablishing modernism in its values, strategies, and geog-
raphy: a clearly hopeless task. And it quite swiftly came to an end. In June
2002, a press release announced: “The manifesto of Dogme 95 has almost
grown into a genre formula, which was never the intention. As a conse-
quence we will stop our part of mediation and interpretation on how to
make dogmefilms and are therefore closing the Dogmesecretariat. The
original founders have moved on with new experimental film projects.”9
Von Trier’s Dancer in the Dark (2000) had already dispensed with much
of the Vow of Chastity, and Vinterberg’s It’s All about Love (2003), a com-
mercial and critical disaster that took five years to make, broke each of its
rules. Meta-cinematically and theoretically, Dogme 95 was a failure: the
successor to postmodernism, whatever it may turn out to be, will be many
things, but not its predecessor.
Aesthetically, however, Dogme 95 was, for a time, a success story. Festen
is a great film, a devastating and acute chamber piece exploring family
psychoses in bourgeois Scandinavia that owes much to Strindberg, Ibsen,
and Bergman. Indeed, it’s hard to see that its greatness owes much to the
principles of Dogme 95, most apparent in the unusual color tones that
result from renouncing “special lighting” and “optical work.” Festen’s struc-
ture is classically theatrical, a deft weaving together of exposition, develop-
ment, and resolution with revelations and reversals aplenty; it has frequently
and mostly successfully been adapted for the stage. This reflects a fault-line
in the Vow of Chastity, which prohibits visual and aural artifice but permits
professional actors and well-polished scripts. The Idiots is an original and
fascinating (and misunderstood) film, light, funny, strange, ambiguous,
sometimes frightening, finally heart rending. Yet it too betrays a cultural-
historical strain. On one level it appears, in its portrayal of a group of
cosmopolitan young artists and intellectuals who see themselves as a socio-
cultural avant-garde, to be a reflection on the Dogme 95 participants them-
selves; and it plays with ideas of truth and “illusions,” reality and “trickery”
as if dramatizing allegorically the arguments of their manifesto. But on
another level it is caught between a modernist throwback (il faut épater
la bourgeoisie by acts of aesthetic subversion, which will liberate the
repressed truths of the self) and a postmodernist contemporaneity (no
true self exists, only performances of identity; the film can be read in terms
of mise en abyme as depicting a company of actors who prepare, stage, and
critique a form of street theater). Von Trier cannot resolve this tension
by moving forward beyond either; unable to conceptualize a shift beyond
postmodernism, he silently proffers instead a raft of aesthetic techniques
that do so.
Indeed, as well as inspiring some terrific work, Dogme 95 has had an
immense influence, opening the door to films from The Blair Witch Project
to Borat in addition to nourishing new digimodernist forms such as the
docusoap and YouTube; its broader influence on the digimodernist aes-
thetic of the apparently real is incalculable. It also led to copycat cultural
movements like the British New Puritans, though with far more question-
able results. All Hail the New Puritans (2000) was a collection of short sto-
ries edited by Nicholas Blincoe and Matt Thorne, and written in accordance
with a ten-point manifesto clearly modeled on the Vow of Chastity and
published in the book’s introduction:

(1) Primarily story-tellers, we are dedicated to the narrative form.
(2) We are prose writers and recognize that prose is the dominant
form of expression. For this reason we shun poetry and poetic
license in all its forms.
(3) While acknowledging the value of genre fiction, whether classi-
cal or modern, we will always move towards new openings,
rupturing existing genre expectations.
(4) We believe in textual simplicity and vow to avoid all devices of
voice: rhetoric, authorial asides.
(5) In the name of clarity, we recognize the importance of temporal
linearity and eschew flashbacks, dual temporal narratives and
foreshadowing.
(6) We believe in grammatical purity and avoid any elaborate
punctuation.
(7) We recognize that published works are also historical documents.
As fragments of time, all our texts are dated and set in the present
day. All products, places, artists and objects named are real.
(8) As faithful representations of the present, our texts will avoid
all improbable or unknowable speculation about the past or the
future.
(9) We are moralists, so all texts feature a recognizable ethical
reality.
(10) Nevertheless, our aim is integrity of expression, above and
beyond any commitment to form.10

Much of this is ludicrous. The pretentious rejection of poetry in point 2 is
toe-curling, while the repudiation of flashbacks in point 5 farcically shows
the literary door to a cinematic technique; Blincoe and Thorne may as well
ban novelists from using dolly shots (it suggests too that something other
than prose is today’s “dominant form of expression”). The literary version
of the flashback in a Rebecca or a Beloved is such a subtle and sophisticated
tool that it is hard to see how narrative could be excised of it (or why),
while as a cinematic device it served Vertigo and Citizen Kane well enough.
Destructive too of actually existing great literature is the interdiction of the
historical novel (a rewording of Dogme’s point 7), the authorial aside and
the dual-time fiction, depriving us forever of War and Peace, Bleak House,
and Ulysses.
To replace them, the anthology provides fifteen tales, ranging from the
quite good through the not very good to the poor, written by a batch of near-
unknown and young-ish British authors together with Geoff Dyer, Alex
Garland, and Toby Litt. The stories are brisk, lightweight, and sometimes
reminiscent of people the manifesto surely wants to dislodge (Vonnegut,
Ballard). Unfortunately, they’re not successful or distinctive or unified
enough to add up to more than the quite modest sum of their parts. This
kind of undertaking, like the nouvelle vague, does require some degree of
aesthetic achievement and some appreciably shared traits, and the collec-
tion, to be blunt, offers neither. The whole affair seems like nothing better
than a PR stunt: it’s almost impossible today to publish such an anthology
in Britain and the scaffolding erected around the venture looks in retro-
spect like a gimmick designed to provide a publishing opportunity for a
bunch of ambitious friends. As their Wikipedia entry sourly but accurately
notes: “New Puritanism has not been espoused by any well-known writers
since the book’s publication, and the contributors have not collaborated
since.”11
Nevertheless, the New Puritans are not without their small historical
significance, enough to get them a brief mention in subsequent academic
guides to contemporary British fiction and their book translated into sev-
eral foreign languages. This significance derives from their semiexplicit
and would-be epochal repudiation of literary postmodernism. From the
start the venture declares itself, with paleontological awkwardness, “[a]
chance to blow the dinosaurs out of the water,” and the reptiles it seems to
have in mind are above all the postmodernists: “While I admire the formal
experiments of writers like B. S. Johnson, Italo Calvino or Georges Perec,
the stories in this collection prove that the most subtle and innovative form
available to the prose writers is always going to be a plot-line.”12 Martin
Amis’s Time’s Arrow and Salman Rushdie’s Midnight’s Children, central
works of British postmodernism, are condemned for their lack of “insight”
and “revelation.”13 In an interview touching on the New Puritans conducted
three years later, Thorne expanded on this literary-historical positioning.
Describing Rushdie as one “of the writers we didn’t like,” he claims that
“[w]e were trying to reach an audience who had maybe been put off by
some of the pretensions and stiffness of some of the Amis generation,” that
is, the writers born between about 1945 and 1954 like Julian Barnes, Ian
McEwan, Graham Swift, and Kazuo Ishiguro, as well as Amis and Rushdie,
the British postmodernists.14 However, while Thorne condemns certain
postmodernist texts and writers, he stops short of openly vilifying post-
modernism per se, preferring instead to criticize Amis and Rushdie’s “snob-
bery,” their “closeted, privileged” origins, and their “incredibly elitist” views
on writing.15 These charges are so vaguely worded and so rank with envy
and resentment they can be dismissed as uninteresting in themselves. But
the New Puritans were nonetheless a symptom of something in the air.
Although the Stuckists, a radical art group founded in London in January
1999, made no allusions to a specific debt to Dogme 95, their twenty-point
manifesto published in August of that year pushes many of the buttons
successfully manipulated by the Danes. Its opening repudiation of “the cult
of the ego-artist” recalls Dogme’s withholding of the director’s name from
their films’ credits.16 Charles Thomson, one of its two cowriters and, with
Billy Childish, a cofounder of Stuckism, describes Stuckist art as unified by
“the values that drive it, namely truth to self and experience in its content,
and clarity and directness in its expression and communication . . . Its
directness results from a meaningful and balanced insight into complexity,
and an unflinching acceptance of our humanity. Like all true art it brings
us closer to who we really are.”17 As with Vinterberg and von Trier, the
Stuckists’ belief in the easy, accessible, and universal concepts of truth, self,
meaning, humanity, and reality produces a discourse that seems to pretend
that twentieth-century art and thought had never happened.
Unlike Dogme 95 and the New Puritans, though, the Stuckists relent-
lessly foregrounded their absolute and explicit abhorrence, overthrow,
murder, and interment of postmodernism. (Beyond its unambiguous death
lies, they say, Stuckism.) Scattered through their pronouncements are
phrases like this one in their original manifesto: “Post Modernism, in its
adolescent attempt to ape the clever and witty in modern art, has shown
itself to be lost in a cul-de-sac of idiocy. What was once a searching and
provocative process (as Dadaism) has given way to trite cleverness for
commercial exploitation.”18 In an open letter to Sir Nicholas Serota, direc-
tor of the Tate Gallery, published in February 2000, they declare:
Post Modernism, our official “avant-garde” is a cool, slick marketing
machine where the cleverness and cynicism of an art which is about
nothing but itself, eviscerates emotion, content and belief . . . Since
the 1960’s there has been a paradigm shift towards decentralization,
spirituality and a new respect for the natural laws. Post Modernism’s
febrile introversion hasn’t even noticed this taking place and instead
continues to peddle glibness and irony in its vacuous attempt to
appear dangerous and fashionable . . . The idiocy of Post Modernism
is its claim to be the apex of art history—whilst simultaneously deny-
ing the values that make art worth having in the first place. It pur-
ports to address significant issues but actually has no meaning beyond
the convoluted dialogue it holds with itself . . . If there is any innova-
tion and vision in Post Modernism, it is in the field of art marketing
. . . Post Modernism is destined for the dustbin of history.19

The second Stuckist manifesto in March 2000 announced the birth of a
new overriding paradigm called Remodernism: “Through the course of
the twentieth century, Modernism has progressively lost its way, until
finally toppling into the bottomless pit of Post Modern balderdash. At this
appropriate time, The Stuckists the first Remodernist Art Group announce
the birth of Remodernism.”20 Point 1 summarizes what could be thought
of as distinctive here: “The Remodernist takes the original principles of
Modernism and reapplies them, highlighting vision as opposed to formal-
ism.”21 The by-now-familiar historico-cultural murder occurs in point 3:
“Remodernism discards and replaces Post Modernism because of its fail-
ures to answer or address any important issues of being a human being.”22
Indeed (point 4), “Remodernism embodies a spiritual depth and meaning
and brings to an end an age of scientific materialism, nihilism and spiritual
bankruptcy” whose global name scarcely needs repeating.23 There is even a
manifesto for Stuckist writing that states (as if more clarity were required):
“Stuckism’s objective is to bring about the death of Post Modernism.”24
The Stuckists issued so many manifestos and numbered pronouncements
they frequently appeared to be more in the business of putting out cultural-
historical press releases than in actually making some art themselves.
Indeed, while their own work seemed a marginal concern even to them-
selves (we’ll come to it momentarily), PR was what they most immediately
and effectively became known for. Beginning in 2000 they organized a
series of high-profile media stunts making the same art-critical points:
they launched annual demonstrations against the Turner Prize on the night
of its award, showing up “dressed as clowns, on the premise that the Tate
had been turned into a circus”; in June 2001 they disrupted the opening
of a conceptualist work of art in Trafalgar Square; in July 2002, again
dressed as clowns, they carried a coffin marked “The death of conceptual
art” through London’s streets; in spring 2003 one of their number cut and
removed the string wrapped by a conceptual artist around Rodin’s The
Kiss in Tate Britain.25
This destructiveness and tendency to shift away from art toward art
criticism are typical of the Stuckists; the home page of their Web site
currently calls for signatures to a petition demanding Serota’s sacking. The
main thrust of their existence actually isn’t the rejection of postmodern-
ism, but of Brit Art, especially those pieces made by Damien Hirst and
Tracey Emin or collected by Charles Saatchi. The attack on postmodern-
ism is, I suspect, an attempt to broaden their appeal beyond such a paro-
chial dispute; through their Web site they soon attracted international
attention, and by 2004 claimed ninety franchised Stuckist groups in twenty-
two countries (they don’t give membership figures). In a concise overview
titled “Stuckism in 20 Seconds” Thomson notes that “Stuckists are pro-
contemporary figurative painting with ideas and anti-conceptual art,
mainly because of its lack of concepts.”26 This is disingenuous, for it falsely
suggests they would embrace a new improved conceptualism, but the com-
mitment to contemporary figurative painting has been unswerving (there
have been regular, usually small exhibitions, tepidly received on the whole).
Thomson has also described as “futile” the “diversification of ‘artists’ into
other media, such as video, installation and performance.”27
It is easy, and mostly justified, to dismiss the Stuckists as too negligible
for attention. They are set up for ad hominem attacks: almost all of the
original twelve members were failed thirtysomething artists from the
corner of southeast England where suburban meets provincial, who can be
seen as hoping to build a career out of a noisy rejection of the dominant
artistic fashion; Billy Childish (who would leave amicably in 2001) was an
ex-lover of Emin’s—indeed, Emin had inadvertently named the group by
describing Childish’s work as “stuck” many years earlier—and there does
seem a disproportionate amount of personal animus in their attacks on
Hirst, Saatchi, Serota, and Emin herself. Moreover, the Stuckists’ addiction
to pompously worded, combative, and sometimes absurdly long manifestos
locks them into Dogme’s time warp whereby they still think it’s 1921 in
their Paris café (or Maidstone pub). The name “Remodernism” makes this
impossible and bankrupt nostalgia painfully clear—it’s that unlikely thing,
a name even worse than “New Puritans.” Moreover again, some of their
self-validating writing is sophomoric or worse; and again, their ceaseless
complaints that the Turner Prize should be awarded only to painters
because Turner was a painter are about as selflessly pertinent as men called
Oscar arguing that nobody with any other forename should receive an
Academy Award. You would forgive them all this if their own work was
powerful or original, but what I’ve seen of it (reproduced in two books by
and about them) is mediocre indeed.
Despite all this, I’m loath to write the Stuckists off entirely. As with the
New Puritans, their words and gestures are, like the odd behavior of cattle
before a storm, unwitting signals of wider, larger historico-cultural changes,
which they don’t comprehend. All three movements surveyed here make
the same error: they construe postmodernism as no more than an artistic
fashion, and so assume that, as one hemline is superseded on the catwalk
by another, it can be sent on its way and replaced by a newer thing by a
simple act of self-will. They clearly haven’t read Lyotard or, more damag-
ingly, Jameson. Postmodernist culture was rooted in all kinds of historical,
social, economic, and political developments; it was the aesthetic expres-
sion of epochal shifts engulfing millions of people. It would take something
wrenchingly huge to sweep this away; I believe digital technology, essen-
tially, is that something. Dogme 95, the New Puritans, and the Stuckists
were wrong to think postmodernism could be ended by a few writers,
artists, and filmmakers saying it was over, and yet, unlike previous bouts
of negationism, they chimed with the zeitgeist: the stranglehold of post-
modern trickery (in literature and film) and of conceptualism (in art) has
been broken since 2000, only not due to them, and not to their advantage
either. In the end, if Dogme 95 is the only one of the three able to show
both artistic achievement and a direct influence it is probably because of its
engagement with the new forces of change: their filming techniques owed
everything to the invention of new, lightweight, and tiny digital cameras.

Burying Postmodernism: Post-Theory

The death of postmodernism (if confirmed) would, you might think,
entrain all its constituent elements: in particular, it might be expected to
involve the demise of post-structuralism and post-1960s Franco-American
cultural theory. In October 2005 the Times Higher Education Supplement,
the trade paper of British academics, published an article by Mark Bauerlein,
a professor of English at Emory University in the United States, which con-
cluded snappily: “Theory is dead.”28 In response, the paper e-mailed
faculty members from top-rated English departments four questions
about the current state of theory; 163 replied, yielding results more com-
plex than Bauerlein’s article had suggested. True, 44 percent of respondents
did agree that theory was “a declining influence” in British universities
(40 percent felt its status was unchanged or were uncertain, 16 percent
thought it was still gaining ground).29 Although a significant result, this
was hardly overwhelming, and on the whole, as the THES put it, “[t]he
picture is patchy.”30 As well as the differences apparent from one institution
to another, a strong majority (79 percent) thought theory was “likely to
continue contributing new ideas,” while 78 percent felt it had “made a posi-
tive contribution to the humanities,”31 disputing the tenor of Bauerlein’s
article, which, as its subtitle puts it, “rejoices” in the death it announces.32
Indeed, the article is a traditional antitheory rant, which wildly concludes
that theory has destroyed higher learning, and which, shorn of the word
“dead,” could have been published twenty years earlier.
The supposed death of theory has been one of the defining debates of
the early digimodernist era. Among the most prominent of relevant texts
have been Post-Theory: Reconstructing Film Studies (1996), edited by David
Bordwell and Noël Carroll; Beyond Poststructuralism: The Speculations of
Theory and the Experience of Reading (1996), edited by Wendell V. Harris;
Post-Theory: New Directions in Criticism (1999), edited by Martin McQuillan,
Graeme Macdonald, Robin Purves, and Stephen Thomson; Reading after
Theory (2002) by Valentine Cunningham; After Theory (2003) by Terry
Eagleton; Life after Theory (2003), a collection of conversations with Jacques
Derrida, Frank Kermode, Christopher Norris, and Toril Moi edited by
Michael Payne and John Schad; and Post-Theory, Culture, Criticism (2004),
edited by Ivan Callus and Stefan Herbrechter. Firming up into a scholarly
question in its own right, the issue of post-theory and these texts have been
critically assessed by Slavoj Žižek in his The Fright of Real Tears: Krzysztof
Kieślowski between Theory and Post-Theory (2001) and by Colin Davis in
his After Poststructuralism (2004), as well as in forums such as the “Theory
after ‘Theory’” conference held at the University of York in October 2006.
More recently, Jonathan Culler’s The Literary in Theory (2007) summarized
the state of play: “Theory is dead, we are told. In recent years newspapers
and magazines seem to have delighted in announcing the death of theory,
and academic publications have joined the chorus.”33 In this climate, the
appearance in 2005 of Theory’s Empire, an anthology of assaults on theory
edited by Daphne Patai and Will H. Corral (and including Bauerlein), was
greeted by the Wall Street Journal as a “sign that things may be changing”
in the world of “American humanistic scholarship,” though actual scholars
were less convinced.34 There is clearly plenty here to interest a journalist
looking for a trend, and the titles of the books seem pretty unambiguously
to state what the trend is; and the status of some of the participants (though
not all) is very high; or so it all might seem.
However, as with the THES poll, things are not as simple as they might
appear. We shall look at one of these interventions, Eagleton’s, in more
detail shortly, but seen as a group they do manifest certain tendencies.
First, as already noted, some of them are merely antitheory arguments in
disguise, made by people who have long denounced the supposedly dam-
aging effects of theory (like Bordwell and Carroll) and consequently bring
nothing new to the table beyond the allegation of death. The publication of
Theory’s Empire is a significant contribution to cultural debates, but many
of its pieces are years old. Some of these writers have never shown signs of
great enthusiasm for theory, like Cunningham, whose monumental British
Writers of the Thirties, though published as late as 1988, is a theory-free
zone. There is no rupture visible in such texts; it is all continuity, not change.
Second, many of the titles of these books reflect the opportunism of their
publishers. Their actual texts interpret “after” in the sense of “now that we
have read theory,” not in the sense of “now that theory is dead and buried”;
they evoke a reader who has absorbed theory rather than a theory that has
gone stale. Others take the question to mean this: since the initial wave of
post-1960s theory is no longer crashing down on us, it is time to take stock
and consider what we wish to retain from its most turbulent days, what
jettison, and perhaps now we can reorient theory so as to relaunch it in an
improved form. Such texts reduce their “after” to “following the end of one
phase of theory and before the start of the next”; their writers call for tweaks
here and there to theory, not interment. Third, there is general agreement
both that theory was, on the whole, a good thing (which enriched cultural
studies), and that a return to pretheoretical days is impossible. Despite the
occasional Bauerlein, they hold that while theory’s status has irrevocably
altered, it is not mortally wounded; behind their exciting and seismic-
shifting titles, they are more nuanced than a quick reader might suppose.
Fourth, there is a prevailing uncertainty about what theory might look like
in the future. It does not help that few if any of these writers have previ-
ously contributed anything new to theory, or have the philosophical train-
ing that might enable them to do so. Fifth, the sense of an ending that
these books all recognize but qualify is very unevenly spread across the
faculties. There are demographic differences, and not necessarily the ones
you might imagine. Twentysomethings who have just discovered theory
for the first time still frequently thrill to it as acutely as their forefathers did
in the 1970s. The weariness comes from their forty- and fiftysomething
elders, who will privately admit that keeping up with theory (e.g., by
reading Žižek) is a tedious chore, who give conference papers denuded
of references to Parisian thinkers, and whose published writings increas-
ingly deploy a bricolage of theoretical concepts to analyze texts rather than
subjecting texts to the laser beam of a theory.
A consensus seems to have gathered around statements like these, made
by respondents to the THES poll: “The high watermark of theory has
passed”; “[n]o one would want to go back to the pre-theory times, and it is
important that students and scholars know the debates. But I am glad we
are much less doctrinaire about theory generally speaking, allowing it to
ask questions rather than setting the terms by which we read”; “[theory]
perhaps loses some of its power to shock or to pose a real challenge when
it becomes merely another tool for handling texts.”35 Such comments, and
also the arguments of the books mentioned above, suggest that theory
changes its identity when it ceases to appear radically and excitingly new.
It can be argued that one of the defining characteristics of postmodernism
was a rhetoric of disruption, of overthrow and resistance. But with almost
all its most dazzling figures now dead, and the age when ground-breaking
ideas arrived thick and fast now two or three decades behind us, the heroic
age of theory is decisively over. It always seemed a glamorous, outlaw pur-
suit; but that image is no longer available, except temporarily perhaps for
the young. Nevertheless, this does not equate to the end of theory; and the
problem remains that we still don’t know what the future of theory might
be. Cunningham calls for “tact” in reading, and writers such as Rónán
McDonald, whose The Death of the Critic (2007) also assesses the state of
post-theory, urge a return to the concept of aesthetic value. Indeed, John J.
Joughin and Simon Malpas have tried to found a “new aestheticism” which
draws strength from “a conjuncture that is often termed ‘post-theoretical,’”
though without noticeable success.36 Often such calls sound like nothing
more than the banal and unsatisfactory wish that people say valid things
about the books they read, or that they value, for reasons as yet undiscov-
ered, the texts they should value. Above all, the “crisis” which theory finds
itself in is integral to and inflected by the wider cultural changes involved
in the shift from postmodernism to digimodernism, a context that nobody
entering this discussion has so far considered.
Much of this can be illuminated by looking in some detail at one partic-
ular example of the post-theory debate. Terry Eagleton taught at various
Oxford colleges from 1969 to 2001, being appointed Thomas Warton
Professor of English Literature in 1991. In 1983 he wrote Literary Theory,
a brilliantly accessible introduction to its subject for students, which became
an academic best seller. At the outset of his After Theory, whose title might
suggest a sequel to his greatest hit, he asserts: “The golden age of cultural
theory is long past . . . we are living now in the aftermath of what one might
call high theory . . . Structuralism, Marxism, post-structuralism and the
like are no longer the sexy topics they were.”37 This does not mean a “return
to an age of pre-theoretical innocence . . . It is not as though the whole
project was a ghastly mistake on which some merciful soul has now
blown the whistle” (1–2). Eagleton positions himself immediately as a
post-theorist, certainly not an antitheorist, and much of his book is devoted
both to highlighting what he sees as theory’s achievements and to defend-
ing it against what he considers unfair criticism.
For Eagleton, cultural theory emerged in the 1960s and 70s out of the
perceived failure of Marxism either to address many of the pressing issues
of postwar life or to achieve, despite the period’s radicalism, any lasting
social and political transformations. Turning its back on collective action,
dissent moved away from politics to the study of culture; the right-wing
backlash of the 1980s therefore went hand in hand with intellectual depo-
liticization. Eagleton opposes this loss of leftist political will and its replace-
ment by “cultural studies”: “What started out in the 1960s and 70s as a
critique of Marxism had ended up in the 80s and 90s as a rejection of the
very idea of global politics” (50). In the new century this stance is increas-
ingly untenable: rejecting universalism, grand narratives, and foundations,
uneasy with truth and reality, contemporary theory (intertwined with
postmodernism) “believes in the local, the pragmatic, the particular . . . but
we live in a world where the political right acts globally and the postmod-
ern left thinks locally” (72). Nor is the rapacious might of capitalism the
only looming menace that theory and postmodernism are ill-equipped to
face: confronted with fundamentalist Islamism,

the West will no doubt be forced more and more to reflect on the
foundations of its own civilization . . .
[It] may need to come up with some persuasive-sounding legiti-
mations of its form of life, at exactly the point when laid-back cultural
thinkers are assuring it that such legitimations are neither possible
nor necessary . . .
The inescapable conclusion is that cultural theory must start think-
ing ambitiously once again . . . so that it can seek to make sense of the
grand narratives in which it is now embroiled. (72–73)
The world has changed and theory must change with it:

Cultural theory as we have it . . . has been shamefaced about morality
and metaphysics, embarrassed about love, biology, religion and revo-
lution, largely silent about evil, reticent about death and suffering,
dogmatic about essences, universals and foundations, and superficial
about truth, objectivity and disinterestedness. This, on any estimate,
is rather a large slice of human existence to fall down on. It is also, as
we have suggested before, rather an awkward moment in history to
find oneself with little or nothing to say about such fundamental
questions. (101–02)

Facing up to these historical challenges by finding something cogent and
significant to say about these issues entails, to Eagleton’s mind, two moves,
which he develops in the second half of the book. First of all, postmodern-
ism must be consigned to the intellectual dustbin of history. In this passage
he identifies it as a relic of a vanished world:

Postmodernism seems at times to behave as though the classical
bourgeoisie is alive and well, and thus finds itself living in the past.
It spends much of its time assailing absolute truth, objectivity, time-
less moral values, scientific inquiry and a belief in historical progress.
It calls into question the autonomy of the individual, inflexible social
and sexual norms, and the belief that there are firm foundations to
the world. Since all of these values belong to a bourgeois world on the
wane, this is rather like firing off irascible letters to the press about
the horse-riding Huns or marauding Carthaginians who have taken
over the Home Counties. (17)

Eagleton’s sarcasm about postmodernism’s supposed incoherences and
inadequacies is sustained throughout. Moreover, he sees it as indistinguish-
able from the exploitative and oppressive globalized capitalism that
emerged in the 1980s, commenting that its “radical assault on fixed hierar-
chies of value merged effortlessly with that revolutionary leveling of all
values known as the marketplace” (68). And yet there is nothing new for
Eagleton in these views. He had been condemning postmodernism as
politically either impotent (Lyotard) or consonant with global capitalism
(Foucault) at least since his essay “Capitalism, Modernism and Postmod-
ernism,” first published in 1985. It appeared in the same journal as had
Jameson’s “Postmodernism, or the Cultural Logic of Late Capitalism” a year
earlier, and is in many ways a response to it. Eagleton characterizes post-
modernism here, memorably, as a parody or a “sick joke at the expense” of
the revolutionary art of the twentieth-century avant-garde, which had
dreamed of breaking down the barriers between art and social life.38 Many
if not all of the barbs he directs in After Theory at postmodernism are also
recycled from his unambiguously titled The Illusions of Postmodernism
(1996), especially the long chapter called “Fallacies.” Equally, the misgiv-
ings he expresses here about theory can be traced back through his previ-
ous publications: David Alderson finds “a strong sense . . . of the
overvaluation of theory” relative to historical political struggle in his work
since the early 1980s.39
Eagleton’s second move, a more ground-breaking one, is to put back
into cultural theory many of the concepts, properly clarified, which it
(and postmodernism) had repudiated. He restores a notion of absolute
truth, correctly understood: “The champions of Enlightenment are right:
truth indeed exists. But so are their counter-Enlightenment critics: there
is indeed truth, but it is monstrous” (109). He argues in favor too of the
long-disused notion of objectivity, extolling disinterestedness (“for post-
modern theory, the last word in delusion”: 134). He explores love and self-
fulfillment, and places questions of ethics, morality, and value at the center
of his thinking. As well as reinstating issues marginalized or derided by
theory, Eagleton rejects postmodern and post-structuralist antiessential-
ism, damningly calling it “largely the product of philosophical amateurism
and ignorance” (121). Everywhere he makes these shifts: away from
theory’s concern with the meaning(s) of the body to our fleshly mortality
and limitedness; away from the “culturalism” of theory (a “form of reduc-
tionism which sees everything in cultural terms”: 162) to what he calls
“species-being,” the idea that our bodies belong to our species before they
can be said to belong to us.
This is not so much a breakthrough in thought as a strategic attempt to
change the terms of cultural debate. The shift in content is reinforced by
two noticeable stylistic changes. In place of the polysyllabic abstraction
and pseudo-French neologisms of previously existing theory, Eagleton
increasingly comes to deploy a simple, almost chatty, concretely Anglo-
Saxon vocabulary: “Feeling happy may be a sign that you are thriving as a
human being should, whatever that means; but it is not cast-iron evidence.
You might be feeling happy because the parents of your abductee have just
come up with the ransom money” (129). The tone may be startling in a
book whose back cover contains panegyrics from Slavoj Žižek and Frank
Kermode. Moreover, Eagleton’s roster of authoritative thinkers moves
decisively away from the post-1960s Parisian writers beloved of traditional
theory. Out go Barthes, Foucault, Kristeva, Althusser, Lacan; in their place,
he draws on Aristotle, Pascal, and the Book of Isaiah, and applauds St. Paul’s
view of the Mosaic Law. Quotations from British analytic philosophers
like Bernard Williams, Philippa Foot, and Alasdair MacIntyre are also
scattered through the text. By contrast, on their rare appearances Fredric
Jameson is called “mistaken” (143) and Derrida is mocked (“One can only
hope that he is not on the jury when one’s case comes up in court”: 153–54).
Not all of this is remarkable in the context of Eagleton’s long career: he
draws as much here on the German Marxists Adorno, Benjamin, and
Brecht as he ever did, and jokes about Derrida are nothing new for him.
What is new is the silent and almost absolute excision in the second half
of the book of the gamut of Franco-American postmodernists and post-
structuralists, a semidisavowed repudiation of a whole intellectual constel-
lation. This is the case also with Joughlin and Malpas’s The New Aestheticism,
which looks to the tradition of Germanic philosophy (Adorno, Benjamin,
Heidegger, Kant) rather than anything smacking of Paris or Saussure, with-
out, though, ever quite admitting to it.
Eagleton’s performance here seeks to reorient theory around questions
of intrinsic interest to religion and a Germanic-British philosophical axis.
The dominant and emblematic figure throughout is Ludwig Wittgenstein,
an Austrian who studied and taught at Cambridge, who wrote in German,
and whose most devoted disciples have been based in Oxford. In the
second half of After Theory Wittgenstein is endlessly quoted from, alluded
to, and silently stolen from; his technique of proceeding philosophically by
aiming to clarify existing ideas rather than introduce new ones is Eagleton’s
too. Placed in the long trajectory of Eagleton’s career, this is a remarkable
development. Wittgenstein’s name does not figure at all in the index of
David Alderson’s survey of Eagleton’s work published in 2004; it does not
appear either in the index of Literary Theory, a compendium of twentieth-
century abstraction, although even here Eagleton was not above passing off
Wittgenstein’s insights as his own: the Englishman’s “these practical social
uses are the various meanings of the word” reworks the Austrian’s “the
meaning of a word is its use in the language.”40 Eagleton made him the sub-
ject of a novella (Saints and Scholars [1987]) and a screenplay (for Derek
Jarman’s Wittgenstein [1993]) without this being noticed by the blinkered
Alderson. It’s almost as if Eagleton’s career was destined to end in the
emergence from the intellectual closet of his love for Wittgenstein. In 1987,
protesting really far too much, he presents the Austrian as almost clinically
deranged, an “old lunatic”; but in his 2001 memoir, the philosopher is
(with Brecht) the only figure discussed at length in the chapter called
“Thinkers”; and by 2007, according to a picture caption, he is “commonly
thought to be the greatest philosopher of the twentieth century.”41
After Theory is a fascinating and remarkable intellectual performance.
It seeks to give back to cultural theory, newly clarified, concepts and
issues that it had long suspended and denied, and to clear away some of
the pseudophilosophical litter that had come to infest it. The emblem of
its intention is its use of everyday language in which to discuss universal
(or species-wide) questions. It belongs to a broader shift in Eagleton’s work
in the mid-2000s, one which saw him write a defense of Christian theology
against the assaults of Richard Dawkins, a short book called (and accessi-
bly exploring) The Meaning of Life, and a sympathetic introduction to an
edition of the gospels. Truth, meaning, ethics, mortality, the good life—
these had become Eagleton’s domain, and were taken as valid, even essen-
tial questions. To do this required an implicit renunciation of the issues
and the thinkers discussed in Literary Theory, and a reconciliation with
ordinary-language British philosophy of which, back in the 1980s, one
would never have thought him capable. It’s a huge shift in one sense, but in
another, nothing has changed: Eagleton is still doing cultural theory as he
always did, but now is trying to see beyond the end of theory as we knew
it, in order to begin it again, begin it better.
Yet the extent to which all of this is successful is very moot. It doesn’t
matter here that the argument is sometimes flawed in detail, that, for
instance, his take on religious fundamentalism reduces Christianity to
Protestantism, an error common in the English but awkward for a writer
who has published repeatedly on Irish culture, and probably attributable to
another of the book’s finally trivial faults, its recurring and unfocused anti-
Americanism. What matters more is that, beyond its performative exam-
ple, whereby an internationally famous cultural theorist talks at length
about certain issues in a certain style drawing on certain thinkers, the text
produces nothing new and distinctive for theory to build on. A reader
might be encouraged to engage with the same writers regarding the same
or similar subjects, but will find nothing in the specifics of the argument to
move around or use as a base from which to launch new thinking. Most
brutally, it has to be said that in detail the argument is sometimes banal,
that all of it is poorly structured (often because sections have been cut and
pasted from previous publications), and that it leads to no conclusions at
all. Yet the fact that the argumentative performance has been made, and
made like this, is fascinating and important in itself. Like Theory’s Empire,
it contains no philosophical leap forward beyond the vital and significant
truth of its own published existence.
Eagleton concludes that:

We can never be “after theory,” in the sense that there can be no reflec-
tive human life without it. We can simply run out of particular styles
of thinking, as our situation changes. With the launch of a new global
narrative of capitalism, along with the so-called war on terror, it may
well be that the style of thinking known as postmodernism is now
approaching an end. (221)

This climactic claim is, I believe, new in Eagleton but, though arresting,
it’s undermined by his own eternal antipathy to postmodernism. It’s also
impoverished by a refusal to relate these issues to the cultural mood
outside the academy. How, he might have wondered, is postmodernism
getting on in the big wide world? This final failure is symptomatic of the
whole post-theory debate. At best, all the “end of theory” seems to mean is
that here is one era that is dying and another (so far) unable to be born. The
picture is murky and indecipherable: essentially, our inability to see who
the new king or queen of thought might be leaves us unsure whether
the old one is exactly dead. There is no decisive proof of the death of post-
modernism here, only a tumult of circumstantial evidence.

Succeeding Postmodernism: Performatism, Hypermodernity, and so on

This doesn’t mean that no one has claimed the throne as the new king or
queen of thought in the wake of postmodernism’s supposed “death.” Yet
such a claim is problematic in a way that arguing for the extinction of
cultural postmodernism isn’t. In 1992, observing that “the postmodern
phenomenon has gradually infiltrated every vacant pocket of our lives and
lifestyles,” Gilbert Adair warned:

Postmodernism is, almost by definition, a transitional cusp of social,
cultural, economic and ideological history when modernism’s ear-
nest principles and preoccupations have ceased to function but have
not yet been replaced by a totally new system of values. It represents
a moment of suspension before the batteries are recharged for the
new millennium, an acknowledgement that preceding the future is a
strange and hybrid interregnum that might be called the last gasp of
the past.42
So postmodernism must die and, according to Adair, before the arrival of
the twenty-first century. Yet this assumes that postmodernism is artistic:
you wouldn’t seriously talk of current scientific practice as “transitional” or
an “interregnum.” It precludes what some find in postmodernism, a once-
and-for-all theoretical leap forward, the discovery and elaboration of a way
of understanding and thinking so valid it can never and will never be lost.
Philosophical postmodernism or post-structuralism may consider that,
following millennia of error, we have been ushered under the aegis of
the insights and strategies of Derrida, Foucault, Deleuze, et al. into a more
just and accurate mental universe; such a postmodernism may feel itself
immune to periodization, and claim irreversibility. For the zealot, this
knowledge will never be over without some unimaginable cataclysm super-
vening; the moderate might feel that anyone announcing the “death of
postmodernism” will have to disprove it. What can the putative argument
say to that?
One possible answer, supported by some important figures, would be
that philosophy does not progress. The stance of the later Wittgenstein was
that: “Critically there can be progress; certain philosophical methods can
be ruled out as illegitimate . . . certain philosophical ‘doctrines’ . . . can be
shown to be nonsense. But if by ‘progress’ we mean accumulation of knowl-
edge, discovery of new facts and construction of novel theories to explain
them, then there is no progress in philosophy.”43 Another reaction might
be that early digimodernism subsumes in its blind and potent youth the
residual remnants of postmodern or post-structuralist thought. This latter
cannot remain world-changing forever. David Alderson observed in 2004
that “though the term ‘postmodernism’ seems to be increasingly out of
favor these days its intellectual bearings remain largely in place, all the
moreso [sic], indeed, for having become almost a sort of common sense.”44
While the last two words are an incitement to demystification, the general
point may suggest the way philosophical postmodernism ends: not with a
coup, but with absorption. Indeed, to say that something is “dead” is the
opposite of arguing that it never existed; it means that, no longer growing
and vibrant, an entity has merged with the ever-expanding past and as such
feeds into and inflects our present and future. Thus “disproof,” which would
turn the clock back, is not required: that belongs to an earlier time when
writers sought to undo postmodern and post-structuralist thought, to wipe
it off the face of the earth as if it had never been.
A brief word about such writers. None of the histories of postmodern-
ism systematically surveys its counterpart anti-postmodernism, which is a
pity since it comprises much of the most enjoyable writing on the subject.
This subgenre boasts a handful of recognizable traits: it lands devastating
blows on its subject, leaving it battered and bleeding; it can never finish its
subject off, which survives, apparently indestructible, until another day;
and it is invariably backward-looking in its intellectual wish list. Probably
the best is Alex Callinicos’s Against Postmodernism (1989), which denies
that “postmodern art” is distinguishable from modernist, finds holes in
postmodern and post-structuralist thought which in any case he sees as
modernist in spirit, and rejects the idea of a recent historical “rupture.” He
advocates Marxism instead (in the year the Berlin Wall fell). Christopher
Norris’s What’s Wrong with Postmodernism (1990) extols Derrida but vio-
lently rejects Lyotard and Baudrillard; he wants Britain to embrace early
1980s’ Labour Party socialism (it never would). Eagleton’s The Illusions of
Postmodernism (1996) assaults a trashier version of the enemy and, coming
too late to urge Marxism, would settle for general left-wing activism
instead. Raymond Tallis’s Not Saussure (1988, 1995) excoriates in hysterical
style all philosophy deriving from the influential Swiss as incoherent and
unfounded; he wants to roll back to a kind of realism. Most (in)famous
is Alan Sokal and Jean Bricmont’s Intellectual Impostures (1997), which
rightly exposes the abuse of scientific rhetoric in postmodern and post-
structuralist writing, but bizarrely supposes that these stylistic failings
discredit the entire project; calling (quite reasonably) for a valorization of
scientific positivism, they go through a quantity of texts labeling anything
they can’t understand “meaningless” in an example of nineteenth-century
scientistic imperialism we can only thank postmodernism for having
deconstructed. This subgenre is so established a one it’s no surprise to find
the volume on postmodernism in the OUP Very Short Introduction series
uniquely repudiating its subject (the author prefers liberal realism). All
such texts tend to want to send postmodernism away in favor of one of its
predecessors; hence their failure.
If anything of philosophical postmodernism or post-structuralism is
likely to survive undigested, to resist absorption, it’s the work of Jacques
Derrida. However, the jury must surely still be out on Derrida’s oeuvre:
how much of it will survive, which parts, and with what persuasiveness are
as yet unknown. Having published vastly during a lifetime that ended as
recently as 2004, Derrida will gradually find his place somewhere in the
philosophical tradition (and not among the untrained). It can nevertheless
be said with virtually complete confidence that the future will not see him
either as a terrifying and despicable nihilist bent on destroying reason and
truth, or as a godlike superstar who successfully reinvented the history of
human thought. Both of these wild and bogus simplicities, popular in the
1980s and 90s, will appear ever more embarrassing as time wears on.
Unambiguous statements such as this one will have to be accounted for:

I have never accepted saying, or encouraging others to say, just any-
thing at all, nor have I argued for indeterminacy as such . . . how sur-
prised I have often been, how amused or discouraged, depending on
my humor, by the use or abuse of the [claim that] . . . the deconstruc-
tionist . . . is supposed not to believe in truth, stability, or the unity of
meaning, in intention or “meaning-to-say” . . . this definition of the
deconstructionist is false (that’s right: false, not true) and feeble; it
supposes a bad (that’s right: bad, not good) and feeble reading of
numerous texts, first of all mine, which therefore must finally be read
or reread. Then perhaps it will be understood that the value of truth
(and all those values associated with it) is never contested or destroyed
in my writings, but only reinscribed in more powerful, larger, more
stratified contexts.45

This is not to “defang” Derrida, nor to commit the intentional fallacy.
However, it suggests the complexity and difficulty of his work, which must
be respected and interrogated by those trained and competent to do so.
Various caricatures of it and of the texts of his peers became for a time
prevalent among sophomores, the philosophically illiterate, and those too
lazy to read them; if not dead today, they richly deserve to be.
In the 2000s it can be argued that a kink appeared in the tradition of
anti-postmodernism: writers began, with varying degrees of success, to
seek to dislodge and supersede postmodernism in the name of something
new. These theories place themselves, to a differing extent, in a similar
argumentative space to digimodernism; they may be considered competi-
tors, but I think this an oversimplification of their relations. I shall now
glance at a selection of such theories.
An article titled “Performatism, or the End of Postmodernism” was
published in the Fall 2000/Winter 2001 edition of the journal Anthropoetics
by the German-American Slavist Raoul Eshelman. Over the following
years, Eshelman wrote pieces on performatism in architecture, film, and
literature that (I assume) he gathered together into a book of the same
name as his initial article published in 2008; I haven’t been able to see this,
but an online chapter list reflects the contents of these articles. Eshelman
identifies a “broadly drawn borderline of 1997–1999, which in my view
marks the beginning of performatism,”46 a form that Amazon’s product
description calls a “new cultural dominant,” borrowing Jameson’s term
just as the book’s title deliberately echoes the American’s (“-ism, or
the”). Indeed, as a theorized description of what Eshelman calls “post-
postmodernism” (a vile term consecrated by Wikipedia), performatism
struggles with chronology. The label is tightly associated with postmodern
and post-structuralist thought (cf. Judith Butler’s gender as performance),
and the places where he finds it—art, architecture, auteurist cinema, and
literary fiction—are the loci classici of postmodern culture: he’s looking
for something new with an old name in exactly the same places as he found
the old thing. The echoes too are invidious: though purloining or détourn-
ing Jameson’s lexicon, he does not constitute performatism extratextually
as linked to any historical shifts—he implicitly defines it as the cultural
equivalent of a raised or lowered hemline. Artists may whimsically decide
to create like this or like that, and so one -ism yields to another. This is
shallow and a misrepresentation (like the Stuckists’) of postmodernism.
Indeed, postmodernism to Eshelman is not so much a cultural-dominant
as a cultural-monopoly: overstating its onetime supremacy, insisting on
seeing it monolithically, and excising Jameson’s references to coexisting
“residual” and “emergent” cultural elements, he discusses nobody who
explicitly treated postmodernism (Lyotard, Jameson, Harvey) and charac-
terizes it through theorists who avoided the category (Derrida, Deleuze,
Lacan) as though they had once exerted a totalitarian grip on Western cul-
ture. This skewed vision enables him to present performatism as emanci-
patory and sexily rebellious in the same way, ironically, that postmodernism
once did.
The initial article, with its approval of a 1980s’ piece called “Against The-
ory,” promises just another slab of backward-looking anti-postmodernism.
The “five basic features of performatism” he identifies here are heavy on
rejection of postmodernist traits and strategies (“No more endless citing
and no authenticity”) and mindlessly embrace their polar opposites
(“return of history . . . return of authoriality . . . Transition from metaphysi-
cal pessimism to metaphysical optimism . . . Return and rehabilitation of
the phallus”).47 The later articles grow in sophistication and interest, but are
vitiated again by chronology. Eshelman calls performatism “an application
of Eric Gans’s generative anthropology” for which Anthropoetics is the
house journal,48 but (leaving to one side its details for now) this theory is
not historicized: it presents itself as a general anthropological framework
with spatially and temporally unlimited validity. How, then, can some eter-
nal truths about humanity start to inspire art only in 1997–99? Eshelman is
obliged to portray postmodernism negatively, as pointlessly erroneous,
and also to exclude from his frame of reference all art produced before
monopolistic postmodernism (in practice, before 1990). The characteris-
tics he gives of a performatist text sound like vast quantities of texts:

themes of identity, reconciliation, and belief . . . [identification] with
single-minded characters and their sacrificial, redemptive acts . . .
dramatically staged, emotionally moving dénouements . . . dense or
opaque subjects . . . scenes of transcendence . . . the originary experi-
ence of love, beauty and reconciliation . . . a space where transcen-
dence, goodness, and beauty can be experienced vicariously . . . forces
us, at least for the time being, to take the beautiful attitude of a believer
rather than the skeptical attitude of a continually frustrated seeker of
truth . . . Performatist aesthetics . . . bring back beauty, good, whole-
ness and a whole slew of other metaphysical propositions, but only
under very special, singular conditions that a text forces us to accept
on its own terms . . . framing the reader so as to force him or her to
assume a posture of belief vis-à-vis a dubious fictional center etc.49

What about Ian McEwan’s The Child in Time (1987)? Or One Hundred
Years of Solitude, The Tempest, The Seventh Seal, Oedipus Rex, The Divine
Comedy, The Magus? One example of performatist literature that he gives,
Olga Tokarczuk’s “The Wardrobe,” recalls nothing so much as Charlotte
Perkins Gilman’s “The Yellow Wallpaper” (1892), perhaps rewritten by
McEwan. There are times when Eshelman simply seems to have a taste for
mild irrationalism or naivety in art. When he speaks of “this odd prefer-
ence for positive metaphysical illusions, for narrative authoritativeness and
for forced identification with central characters,”50 the “oddity” is sparked
only by the assumed but actually spurious former cultural-monopoly of
theoretical postmodernism (his writing is full of references to the “usual”
or “standard” postmodern position or strategy)—I’m sure he doesn’t find
the Odyssey artistically weird. Eshelman identifies narratives that move in
the space between programmatic antirealism and bourgeois realism, but
many artists always have and plenty did even during postmodernism’s hey-
day. Deconstruction is indeed not very useful with such texts, but then it’s
a mistake to see Derrida primarily as a cultural critic. You suspect that
Eshelman is tired of the über-skepticism of post-structuralist thought and
eager for the traditional pleasures of art, and you can’t blame him for that.
But finally he’s a symptom: he picks up on the superannuation of postmod-
ernism but doesn’t suggest its successor.
If Eshelman positions himself as the heir to Jameson, Gilles Lipovetsky
would supplant Lyotard. Lipovetsky’s Les Temps hypermodernes, first pub-
lished in 2004, proposes “hypermodernity” as the successor to Lyotard’s
postmodernity: “That era is now ended.”51 “Several signs suggest that we
have entered the age of the ‘hyper,’ characterized by hyperconsumption
(the third phase of consumption), hypermodernity (which follows post-
modernity), and hypernarcissism” (11). There are some false moves in
defining the precise nature of hypermodernity. Presented as a “society
characterized by movement, fluidity and flexibility” (11), it sounds exactly
like postmodernity; described as a period of anxiety and fear following
one of euphoria and blissful emancipation it sounds like the same thing
in a different climate, as the comparison of films made respectively in 2003
and 1986 suggests: “In short, the slogan is no longer ‘Enjoy yourselves
without hindrance!’ . . . but ‘Be very afraid, however old you are’; and the
Rémy Girard obsessed by disease and death in Denys Arcand’s film Les
Invasions barbares has logically replaced the dilettantish Rémy Girard of
The Decline of the American Empire, fifteen years or so earlier” (13). A close
reading of the book suggests that hypermodernity is primarily a social
and historical quantity, not a cultural or technological one: there is no real
discussion of texts or mention of the impact of computerization, but a
number of remarks instead about pornography, unpaid voluntary work,
and the enduring valorization of love and femininity. Two themes particu-
larly interest Lipovetsky here: first, contemporary society’s sense of time
and its relation to past and present, reflected in the book’s title; and second,
contemporary hyperconsumption, or the nature of a society flooded (but
not monopolized) by the ethos and practices of consumerism, which he
sees as paradoxical in its effects. In short, the thrust of the book is to define
the advantages and shortcomings of a society mostly given over to con-
sumerism, in particular its nuanced conception of historical time.
Indeed, you might find yourself wondering as you turn the pages whether
Lipovetsky actually needs the category of hypermodernity. The label some-
times seems to mean nothing more than the period, subsequent to post-
modernity, of hyperconsumption. Importantly, though, Lipovetsky defines
hypermodernity as the literal and culminating fulfillment of modernity:
“The ‘post’ of postmodern still directed people’s attentions to a past that
was assumed to be dead; it suggested that something had disappeared
without specifying what was becoming of us as a result . . . The climate of
epilogue is being followed by the awareness of a headlong rush forwards, of
unbridled modernization” (30–31). Modernity offered limitless individu-
alism, freedom from social obligations, emancipation from oppressive
duties and structuring conventions, the pursuit of pleasure, and personal
autonomy. Or rather, it spoke of these, but could not produce them as social
realities. Hypermodernity is the moment when these notions become
concrete and flesh, when they are lived and experienced (not always
happily) right across society. Consequently, hypermodernity is not intel-
lectually new: it’s the maximization of modernity, the era from which all
premodern structuring principles (family, church, class, etc.) have in prac-
tice been stripped: “[t]he era of hyperconsumption and hypermodernity
has sealed the decline of the great traditional structures of meaning, and
their recuperation by the logic of fashion and consumption” (14).
There is much to recommend this analysis, which notably breaks with
three postmodernist or post-structuralist traits: it’s neither millenarian
(there’s no pulsating rhetoric of “ends” or “post-s”) nor a continuation of
May ’68 by other means (it’s not countercultural; it applauds hyperindivid-
ualism for saving us from the bloodshed of ideological fanaticism) nor
does it flirt with nihilism (Lipovetsky holds that our society believes
unshakably in human rights, in love, it foregrounds others’ well-being,
etc.). The argument, though sketchy, feels qualitatively different than those
of a previous generation of French intellectuals. However, this portrait of
modernity as a bunch of ideas finally reified by hypermodernity is incom-
plete. What happened to universalism? What about the reign of reason?
In fact, Lipovetsky interprets modernity as the sociopolitical dream of
the French Revolution, the hope of liberty, equality, and fraternity and les
droits de l’homme; he distances himself from philosophical Enlightenment,
refurbishing Lyotard with his claim that hypermodernity is characterized
by the “dissolution of the unquestioned bases of knowledge” (67). In prac-
tice, he may see his work as an updated and more completely sociologized
version of Lyotard’s, which “defined the postmodern as a crisis in founda-
tions and the decline in the great systems of legitimation. That was of
course correct, but not absolutely so” (77).
A great part of Les Temps hypermodernes accords though with concep-
tions of a possible digimodernist society, as my Chapter 7 will suggest; its
account of consumerism is particularly compelling. In 2007 Lipovetsky
extended these arguments to the cultural domain with L’Ecran Global:
Culture-médias et cinéma à l’âge hypermoderne, as yet untranslated into
English.52 His fetishization of cinema is traditionally French, and his run-
ning together of all contemporary forms of “screen” into one bundle ruled
by film is both simplistic and conservative. The book adds little to the
meaning of hypermodernity, which again comes across as a mostly super-
fluous category whose content is insufficient and unsatisfactory, and which
obscures his insights into consumerist society.
In a doubtless unwitting echo of Lipovetsky, Paul Crowther argued
in 2003 that “we are now living in what—in cultural terms—would be far
better described as supermodernity. This term is warranted insofar as the
contemporary world continues to be driven by the same socio-economic
force which was central to the creation of modernity, namely the market.”53
Both accounts are largely commensurate with my own view that our
present social condition is best understood as a development within
modernity. Crowther also defines supermodernity as a period that “absorbs
the opposition”: postmodernism, being nothing more than a narrowing of
modernist criteria, is contained by supermodernity; Crowther’s goal is to
develop a philosophy that moves beyond them by reestablishing notions of
civilization, value, and knowledge.54 According to José López and Garry
Potter that philosophy is critical realism. In 2001, they described postmod-
ernism as “in a state of decline . . . out of fashion” and asked: “what is to
come after postmodernism?”55 For them, “postmodernism is inadequate
as an intellectual response to the times we live in . . . critical realism offers
a more reasonable and useful framework from which to engage the philo-
sophical, scientific and social scientific challenges of this new century.”56
Like Crowther’s philosophy, critical realism insists on the possibility of knowledge, truth,
and reality; it would specifically take over from post-Saussurian epistemol-
ogy. It’s a powerful and attractive philosophical method, but it doesn’t
“succeed” postmodernism: it was born in the 1970s and grew up concur-
rently with it, even if, like a long-eclipsed younger sibling, it has survived
the death of its more famous and glamorous relative.
Such proposals show recurring features. They struggle, in absolute
contrast to postmodernism, to make waves outside of academia, though
this doesn’t necessarily invalidate them, and they are in any case young.
Each is confined to a single academic discipline (aesthetics, sociology) or a
single scholarly subject (philosophy of science); consequently each lays claim
in practice to only a fragment of postmodernism’s hegemonic inheritance.
Digimodernism differs from them, I think, in claiming to be triggered by a
specific historical event (the redefinition of textuality and culture by the
spread of digitization) other than their shared perception of the decline
and fall of postmodernism. The reader will decide him/her/yourself on
their plausibility: they may seem to jostle like impatient listeners at the
reading of a will, but they don’t quite occupy the same intellectual space.
Overall, the (subjective) sense of an ending is more impressive than the
(objective) promotion of a beginning. Very interesting in this regard is the
recent work of Charles Jencks, the author of an important study of post-
modernism in architecture and the arts first published in 1986 and regu-
larly updated since. The book’s fifth edition, released in 2007, argues for the
displacement of postmodernism, after thirty years, by a new cultural era
called critical modernism beginning in 2000. This refers “both to the
continuous dialectic between modernisms as they criticize each other and
to the way the compression of many modernisms forces a self-conscious
criticality, a Modernism-2.”57 So many suggested new times, but always the
same (putative) ending.

Cock and Bull

A tempting argument for the demise of postmodernism is the critical and/or commercial failure of attempts at postmodern texts in the 2000s. This
might be seen as the effect of the “new impossibility” of postmodernism in
its cultural form. A list could be assembled then of stillborn texts attesting
to this climate, which could be tabulated as below: the column on the left
gives the author where available; the middle column shows the failed and
recent postmodern text (some more failed than others); the right-hand
column suggests an equivalent text from the era of high postmodernism
that was critically and/or commercially successful.
Postmodernism bust and boom

Author (genre) | Recent failure | Past triumph
Larry and Andy Wachowski (film) | The Matrix Reloaded and The Matrix Revolutions (both 2003)58 | The Matrix (1999)
N/A (film) | Molière (2007)59 | Shakespeare in Love (1998)
Woody Allen (film) | Hollywood Ending (2002) and Melinda and Melinda (2005)60 | The Purple Rose of Cairo (1985)
N/A (film) | The League of Gentlemen’s Apocalypse (2005) | The Purple Rose of Cairo (1985) and Pleasantville (1998)
Martin Amis (novel) | Yellow Dog (2003)61 | Money (1984) and London Fields (1989)
Bret Easton Ellis (novel) | Lunar Park (2005) | American Psycho (1991)
N/A (TV) | Rob Brydon’s Annually Retentive (2006– ) | The Larry Sanders Show (1992–98) and It’s Garry Shandling’s Show (1986–90)

More could be added, but the argument can never become watertight no matter how long its list of instances. Every single failure can be attributed to aesthetic causes: to bad direction, writing, acting, to insufficient or exhausted talent; in no case is their inadequacy necessarily the result of the zeitgeist. Indeed, blaming the failures on the zeitgeist implies that in another time either they would have been acclaimed as masterpieces (highly unlikely, to say the least) or that the zeitgeist would somehow have
made them better (a rather strange notion). Furthermore, it can’t be ruled
out that next month a stunning postmodern film, book, or TV program
may appear. My feeling is that this too is improbable, as none has emerged
for several years now, but I can’t “prove” it. The argument of paradigmatic
failure due to superannuation can be put forth but not conclusively dem-
onstrated; again, the decline of The Simpsons after 2000 suggests a certain
cultural climate, but no more than that. The same goes for the turning away
from postmodernism of artists hitherto associated with it, such as Julian
Barnes or Damon Albarn. Though significant, it proves nothing: artists
with long careers evolve, and without their earlier modes necessarily hav-
ing “died” for the whole human race.
The most that can be said is that lately there hasn’t been much cultural
postmodernism around and what there is, isn’t that good. But perhaps
this is just a fallow period, and postmodernism will regain its strength and
vitality soon (perhaps; but there’s no evidence for this; and all artistic
movements end one day). Or maybe this is still a thriving postmodern
moment but some people—especially the middle-aged hankering wistfully
for the texts of their youth—refuse to see it. It is true that a generation gap
has opened up between the professors teaching postmodernism modules
and their students. An undergraduate taking such a module in 2010 is
likely to have been born in 1989 or after, and likelier still to be given no
primary text to read published in her or his lifetime. This is Mom and Dad’s
culture. Some professors will nevertheless present it as the latest thing in
cutting-edge aesthetics, although it all belongs to the same era as Betamax
video recorders, shoulder pads, and voodoo economics (and that is at best;
teaching The French Lieutenant’s Woman recently I found myself having to
explain as many of the “contemporary” references as of the Victorian ones
to students for whom this novel represented, indeed, their grandparents’
culture). Postmodern texts try to get to grips with the Cold War and televi-
sion; today’s students take for granted Islamism and the Internet.
And yet: it can be argued that this is the fault of old-fart professors who
have lost touch with the latest developments in postmodernism (rather
than saying that postmodernism has no latest developments). It’s true that
in books such as Brian McHale’s Postmodernist Fiction (1987) and Ian
Gregson’s Postmodern Literature (2004) the same period is discussed: while
the authors differ in their choice of interesting texts, either from personal
taste or shifting critical perspectives, the passing of almost two decades
between them is not reflected in a change in the alleged “time” of postmod-
ernism.62 McHale considers the latest fictive thing whereas Gregson—who
scarcely looks beyond 1990—in effect contemplates a historical literary mode.
Again this can be blamed on Gregson’s supposed conservatism: merging
him with hundreds of other professors worldwide, some vague conspiracy
theory can be sketched by which the repressive oldsters pretend that post-
modernism is their generation’s achievement, and ignore the brilliant post-
modernist work being done by the youth of today (such a view has been
put to me). Perhaps. But again it seems unlikely, and Gregson, whose book
is wide-ranging, sympathetic, and subtle, makes for an implausible cultural
fascist. It’s actually in the interest of professors to (over)sell their period as
packed with exciting new stuff; suppressing it makes no economic sense.
This evidence then is all, I think, compelling but not definitive. What is
perhaps even more persuasive than a range of alleged failures or ambigu-
ous absences is that certain recent texts show signs of a historical shift
in their relationship with postmodernism. Michael Winterbottom’s film
Tristram Shandy: A Cock and Bull Story (2006) is such a text. A film-within-
a-film, it’s based on the eighteenth-century novel by Laurence Sterne often
regarded as a precursor of postmodernism; Winterbottom moves between
a semistraight adaptation of bits of Sterne’s text and the personal, technical,
financial, creative, and organizational rigmarole of shooting it. It’s fabu-
lously complicated and multilayered, but finally redundant and insubstan-
tial, in large part due to its very superannuation. Sharing some of its
shortcomings with Shrek 2, it pastiches unmercifully Fellini’s 8½ (1963),
borrowing its music to soundtrack most of the screen time devoted to the
making-of segments; but the shallowness of Winterbottom’s convolutions
is only thrown into relief by the evocation of Fellini’s masterpiece: we seem
to be watching a futile and second-rate imitation of it. Given its comic
elements and the nebbishness of its leading man, A Cock and Bull Story
often resembles a pastiche of Woody Allen’s Stardust Memories (1980),
itself a knowing redeployment of 8½. It also pastiches Truffaut’s postmod-
ern film-about-the-making-of-a-film La Nuit Américaine (1973; everyone
on the set is sleeping with everyone else), Robert Altman’s postmodern
satire about movie-making in Hollywood The Player (1992; the overlap-
ping dialogue), and Martin Amis’s Money (1984), a postmodern novel
about the preproduction of a Hollywood movie that turns into a critique
of the creative process. It equally draws on or looks back to Karel Reisz’s
postmodern The French Lieutenant’s Woman (1981), which moves between
the present-day shooting of a period novel and the novel itself, and Peter
Greenaway’s postmodern film The Draughtsman’s Contract (1982), which
is set in a seventeenth-century English country house and whose music
Winterbottom also appropriates.
The unoriginality of A Cock and Bull Story suggests a nostalgia for
postmodernism: a warm evocation of those great, long-gone days of the
postmodern summer. When it refers to Knowing Me, Knowing You . . .
with Alan Partridge (1994–95), a postmodern TV parody of the chat show
starring Steve Coogan, Winterbottom’s leading man, or Winterbottom’s
own postmodern film 24 Hour Party People (Tony Wilson interviews
Coogan, who played him in it), A Cock and Bull Story seems to lose itself in
a miasma of postmodern harking backs to backward-looking irony that
quote and cite and allude and echo. All of its models come from the
halcyon days of high postmodernism and it pales by comparison with
them (including Blackadder). Finally, it’s a pastiche of a postmodern text, a
tissue of quotations from postmodern texts. It’s a bankrupt and uninspir-
ing frolic at the funeral of postmodernism: what postmodernism did to
all preceding culture—quoted it, ironized it, cut it up and redeployed it—
Winterbottom inadvertently does to all preceding postmodern texts. And
as though to signal his belatedness, to show his exclusion from a departed
time, he uses a technique integral to digimodernism: the handheld camera
of the docusoap with its generated sense of the apparently real. The film
wants to be postmodern, yearns to join the club; but it actually demon-
strates on every level possible that the club has long since disbanded.

***
Author of A Poetics of Postmodernism (1988) and The Politics of Postmod-
ernism (1989), which became standard texts in their field, Linda Hutcheon
appended an epilogue to the second edition of the latter book when it
appeared in 2002 called “The Postmodern . . . In Retrospect.” She noted
that when writing the first edition “the postmodern was in the process of
defining itself before my very eyes.”63 However, “[it] may well be a twentieth-
century phenomenon, that is, a thing of the past. Now fully institutional-
ized, it has its canonized texts, its anthologies, primers and readers, its
dictionaries and its histories.”64 Jameson had dated the coming of post-
modernism to the 1950s’ institutionalization of modernism; now the circle
was complete. Hutcheon concluded, “Let’s just say: it’s over.”65 In 2007,
a special issue of the academic journal Twentieth-Century Literature titled
“After Postmodernism” appeared with an introduction evoking “the wake
of postmodernism’s waning influence. By now, as Jeremy Green notes,
declarations of postmodernism’s demise have become a critical common-
place.”66 He’s right: you find them everywhere. The death of postmodernism:
it’s so old hat. And yet it’s still all just assertion: it can be and is and has been
declared; but is that it?
The conclusion is that the proposition “postmodernism is dead” cannot
be proven, but not because it is a pseudoproposition as some may think. In
truth, we have been hexed by a metaphor (“[a] picture held us captive”67),
by which it was possible here to impute sentience and talk of mortality. The
language game of the correspondence theory of truth does not mesh with
such a metaphor. The consequence of this tour d’horizon is nonetheless
the painting of the backcloth and the establishment of the ground for the
emergence of postmodernism’s successor. Early digimodernism is inflected
and shaped by the lingering residue of its predecessor: the whole of this
chapter functions (but you knew that) as an introduction to digimodern-
ism, or as an analysis of the cultural landscape within which digimodern-
ism first appeared. It’s time to focus in now. Steven Connor, who also
wrote a standard 1980s’ text on postmodernism, said of it in 2004, quoting
Beckett: “‘Finished, it’s finished, nearly finished, it must be nearly finished’
. . . Surely, the first thing to be said about postmodernism, at this hour,
after three decades of furious business and ringing tills, is that it must be
nearly at an end.”68 But he retreated from this: “One is compelled to
begin almost any synoptic account of postmodernism with such sunset
thoughts, even as, in the very midst of one’s good riddance, one senses
that the sweet sorrow of taking leave of postmodernism may be prolonged
for some time yet.”69 Perhaps Connor’s uncertain sense of an ending would
be resolved by the idea of a new beginning.
2
The Digimodernist Text

sea change: (unexpected or notable) transformation
watershed: line of separation between waters flowing to different rivers or basins or seas . . . (fig.) turning-point
Concise Oxford Dictionary, 19821

There are various ways of defining digimodernism. It is the impact on
cultural forms of computerization (inventing some, altering others). It is
a set of aesthetic characteristics consequent on that process and gaining a
unique cast from their new context. It’s a cultural shift, a communicative
revolution, a social organization. The most immediate way, however, of
describing digimodernism is this: it’s a new form of textuality.
In this the passage from postmodernism to digimodernism bears no
resemblance to the way that the former broke from its predecessor. Textu-
ally, The Bloody Chamber or Pale Fire differs from The Waves or As I Lay
Dying only on the surface, as an evolution in the codes and conventions
and the manner of their manipulation; in their depth they rely on the same
textual functioning. The author creates and sequences a quantity of words;
these solidify as “the text”; the reader scrutinizes and interprets that inher-
ited, set mass. The author precedes the material text, which may outlast
him/her; the reader makes over their sense of what they receive but neither
brings the words into being nor contributes to their ordering (I distinguish
these two functions since 1960s’ avant-gardism found ways, as we shall see
in Chapter 3, to give the reader some control over sequencing). Traditional
texts were once thought to possess a hermeneutical “secret,” a fixed mean-
ing placed there by the author which the reader was to locate and treasure;
later, texts were seen as hermeneutical free-for-alls, their meanings multi-
ple and scattered, which the reader chose to bring pell-mell into play.

In either case the physical properties of the text remained solidified and
inviolate: no matter how inventively you interpreted Gravity’s Rainbow you
didn’t materially bring it into existence, and in this Pynchon’s postmodern
exemplum exactly resembled Pride and Prejudice.
The digimodernist text in its pure form is made up to a varying degree
by the reader or viewer or textual consumer. This figure becomes authorial
in this sense: s/he makes text where none existed before. It isn’t that his/her
reading is of a kind to suggest meanings; there is no metaphor here. In an
act distinct from their act of reading or viewing, such a reader or viewer
gives to the world textual content or shapes the development and progress
of a text in visible form. This content is tangible; the act is physical. Hence,
the name “digital modernism” in which the former term conceals a pun:
the centrality of digital technology; and the centrality of the digits, of the
fingers and thumbs that key and press and click in the business of material
textual elaboration.
Fairly pure examples of digimodernist texts would include: on TV,
Big Brother, Pop Idol, 100 Greatest Britons, Test the Nation, Strictly Come
Dancing, and Quiz Call; the film Timecode; Web 2.0 forms like Wikipedia,
blogs, chat rooms, and social networking sites; videogames such as Mass
Effect, Grand Theft Auto IV, BioShock, Final Fantasy XII, and Metal Gear
Solid 4; SMS messages; “6-0-6” and certain other kinds of radio phone-in;
or the Beatles’ album Everest (see “Music,” Chapter 6). Digimodernism is
not limited to such texts or even to such a textuality; rather, it is more easily
expressed as the rupture, driven by technological innovation, which
permits such a form. They are not by virtue of their novelty “great” texts;
indeed, the quality of the digimodernist text is moot. The distinctiveness of
their functioning interests us, not their ostensible content. Instead, it is in
the functioning of such a textuality that the irreducible difference of the
digimodernist becomes most palpable.
The digimodernist text displays a certain body of traits that it bequeaths
to digimodernism as a whole. These will recur throughout the rest of the
analysis. Such characteristics relate to the digimodernist textuality almost
as a machine: considered as a system by which meaning is made, not as
meaning. Postmodernist features denote either a textual content or a set of
techniques, employed by an antecedent author, embedded in a materially
fixed and enduring text, and traced or enjoyed by a willful reader/viewer.
The traits of digimodernist textuality exist on a deeper level: they describe
how the textual machine operates, how it is delimited and by whom,
its extension in time and in space, and its ontological determinants. The
surface level of what digimodernist texts “mean” and how they mean it
will be discussed later in the book. We can sketch the following dominant
features:
Onwardness. The digimodernist text exists now, in its coming into being,
as something growing and incomplete. The traditional text appears to
almost everyone in its entirety, ended, materially made. The digimodernist
text, by contrast, is up for grabs: it is rolling, and the reader is plunged in
among it as something that is ongoing. For the reader of the traditional text
its time is after its fabrication; the time of the digimodernist text seems to
have a start but no end.
Haphazardness. In consequence, the future development of the text is
undecided. What it will consist of further down the line is as yet unknown.
This feels like freedom; it may also feel like futility. It can be seen as power;
but, lacking responsibility, this is probably illusory. If onwardness describes
the digimodernist text in time, haphazardness locates in it the permanent
possibility that it might go off in multiple directions: the infinite parallel
potential of its future textual contents.
Evanescence. The digimodernist text does not endure. It is technically
very hard to capture and archive; it has no interest as a reproducible item.
You might happily watch all the broadcast hours of Fawlty Towers; no one
would want to see the whole of a Big Brother run again (retransmission has
never been proposed), and in any event the impossibility of restaging the
public votes renders the exact original show unreplicable.
Reformulation and intermediation of textual roles. Already evident, and
explored at greater length in this chapter, is the digimodernist text’s radical
redefinition of textual functional titles: reader, author, viewer, producer,
director, listener, presenter, writer. Intermediate forms become necessary
in which an individual who is primarily one of these acts to a degree like another.
These shifts are multiple and not to be exaggerated: the reader who becomes
authorial in a digimodernist text does not stand in relation to the latter as
Flaubert did to Madame Bovary. These terms are then given new, hybrid-
ized meanings; and this development is not concluded.
Anonymous, multiple and social authorship. Of these reformulations
what happens to authorship in the digimodernist text especially deserves
attention. It becomes multiple, almost innumerable, and is scattered across
obscure social pseudocommunities. If not actually anonymous it tends to a
form of pseudonymity which amounts to a renunciation of the practice of
naming (e.g., calling yourself “veryniceguy” on a message board or in a
chat room). This breaks with the traditional text’s conception of authorship
in terms tantamount to commercial “branding,” as a lonely and definite
quantity; yet it does not achieve communality either.
The fluid-bounded text. The physical limits of the traditional text are
easily establishable: my copy of The Good Soldier has 294 pages, Citizen
Kane is 119 minutes long. Materially a traditional text—even in the form of
a journalist’s report, a school essay, a home movie—has clear limits; though
scholars may discover new parts of a whole by restoring cut or lost material
their doing so only reinforces the sense that the text’s physical proportions
are tangibly and correctly determinable (and ideally frozen). Embodying
onwardness, haphazardness, and evanescence, the digimodernist text so
lacks this quality that traditionalists may not recognize it as a text at all.
Such a text may be endless or swamp any act of reception/consumption.
And yet texts they are: they are systematic bodies of recorded meaning,
which represent acts in time and space and produce coherently intelligible
patterns of signification.
Electronic-digitality. In its pure form, the digimodernist text relies on
its technological status: it’s the textuality that derives from digitization; it’s
produced by fingers and thumbs and computerization. This is not to be
insisted on excessively; however, this is why digimodernism dates back
only to the second half of the 1990s. Digimodernism is not primarily a
visual culture and it destroys the society of the spectacle: it is a manually
oriented culture, although the actions of the hand are here interdependent
with a flow of optical information unified through the auspices of the
electronic.
Much more could be added here, but there is space for only two further
clarifications. First, an ancestor of the digimodernist text is Espen J.
Aarseth’s notion of “ergodic literature” in which, he argued as long ago as
1997, there is “a work of physical construction that the various concepts
of ‘reading’ do not account for . . . In ergodic literature, nontrivial effort is
required to allow the reader to traverse the text.”2 The description of page-
turning, eye movement, and mental processing as “trivial” is misleading,
while the implication of textual delimitedness contained in “traversal” has
been outdated by technical-textual innovations. However, his account dif-
fers from mine most notably in its lack of a wider context. For I see the pure
digimodernist text solely as the easily recognizable tip of a cultural iceberg,
and not necessarily its most interesting element. These characteristics can
be found diffusely across a range of texts that I would call digimodernist
whose consumer cannot make them up; though digimodernism produces
a new form of textuality it is not reduced to that, and many of its instances
are not evanescent, haphazard, and so on. But the discussion had to start
somewhere. Digimodernism can be globally expressed in seven words (the
effects on cultural forms of digitization) and historically situated in eight
(the cultural-dominant succeeding postmodernism prompted by new
technologies). It can be captured, as I said, in a pun. Yet all in all it’s a more
complex development than this might suggest. Ergodic literature is then
no more than the forerunner of a distinctive feature of digimodernism.
Second, this textuality has been described as “participatory.” There’s
a political rhetoric to hand here, all about democracy, antielitism, the com-
mon man, and so on. Al Gore has celebrated Web 2.0 for offering such
a mode of popular expression (debate, forums) and overcoming the top-
down manipulation imposed by spectacular television.3 But, as well as sug-
gesting Gore hasn’t watched TV since the 1980s (it has reinvented itself in
the direction of Web 2.0), this way of thinking presupposes a cleaned-up,
politically progressive but traditional text. “Participation” assumes a clearly
marked textual boundary (even if fuzzy, a line is necessary to take part in),
an equality of text-makers (you don’t “participate” by controlling), a com-
munally visible and known group of intervenants, and a real-life situation
(you can participate in theater but not in a novel). The participant too is
condemned to action. Digimodernist textuality, as I hope I’ve made clear,
goes beyond all this. The political consequences of digimodernism are
more likely to be desocialization and pseudoautism than an upsurge in
eighteenth-century notions of democratic practice.

Reader Response

It could be felt (the point has been put to me) that everything I’ve said here
about the digimodernist text is already contained in post-1960s’ theories
of the text and of reading, that there is nothing new here. A similar critical
discourse might appear to have been around for a while. Discussing the
ending of the film Performance, Colin MacCabe argues, for instance, that
“the final eerie minutes of the film are entirely our invention.”4 For MacCabe,
the film’s “whole emphasis” favors “a performance in which the spectator is
a key actor.”5 However, this is too loose for its own good: except as rhetori-
cal excess, as a sort of flourish, there is no way that someone sitting in a
chair gazing silently at a screen is an “actor,” key or not, coterminous with
those s/he is watching; and while the ending of Performance does leave
much to the intelligence, imagination, and wit of its audience, to call it
“entirely our invention” is an exaggeration. The most MacCabe can mean is
that we feel alone as we grope to explain it; it’s so ambiguous, so slippery,
that our interpretations feel strangely exposed, deprived of any textual
underpinning. In reality, the final few minutes were entirely invented at
the end of the 1960s by a group of actors and technicians employed by
Warner Bros. MacCabe’s rhetoric conflates the realm of meaning-making
with that of text-making, the act of mental (for where else is it?) judgment
with that of physical creation. As for his claim, also about Performance,
that “we are no longer spectators but participants,” the overstatement of the
latter term relies on an exaggerated view of the passivity of the former;
properly understood as an assertive mental activity, spectating is precisely
what Performance has us do.6
MacCabe’s rhetoric owes something to German reception theory, which
developed in the 1970s out of the insight that, as Terry Eagleton puts it,
“Literary texts do not exist on bookshelves: they are processes of significa-
tion materialized only in the practice of reading. For literature to happen,
the reader is quite as vital as the author.”7 Formulated more precisely like
this, reception theory, and theories of reader response in general, avoid the
criticism drawn by MacCabe. Summarizing the insights of Wolfgang Iser,
perhaps the most interesting of all reader response theorists, Eagleton
remarks that:

although we rarely notice it, we are all the time engaged in construct-
ing hypotheses about the meaning of the text. The reader makes
implicit connections, fills in gaps, draws inferences and tests out
hunches . . . The text itself is really no more than a series of “cues” to
the reader, invitations to construct a piece of language into meaning.
In the terminology of reception theory, the reader “concretizes” the
literary work, which is in itself no more than a chain of organized
black marks on a page. Without this continuous active participation
on the reader’s part, there would be no literary work at all.8

Yet this is clearly an active participation in meaning-making, not in text-making. Despite this, since texts are concentrated bodies of meaning,
can we not argue that to make meaning in this context is ipso facto to
make text? Finally, no; but it is easy to see how the slippage might occur.
Consideration of what it physically means to write a novel or make a film,
and what is involved in reading or viewing, keeps the distinction plain.
Raman Selden’s recognition both of the distinction and of the risk of slip-
page across it is visible in his use of inverted commas and the qualifiers
“a sort of ” and “in a sense” when discussing Roland Barthes: “What Barthes
calls the ‘pleasure of the text’ consists in this freedom of the reader to pro-
duce meanings, and in a sense to ‘write’ the text . . . For Barthes, reading is
a sort of writing, which involves ‘producing’ the text’s signifiers by allowing
them to be caught up in the network of codes.”9 Discussing Iser, Selden
captures the nuance: “We are guided by the text and at the same time we
bring the text into realization as meaning at every point.”10
In truth, theory can only conceptualize the reader/viewer as the pro-
ducer of a text by transforming its sense of a text into a system of meanings.
This enables it to construct the reader/viewer as the producer of textual
meanings and hence, to all apparent intents and purposes, as the producer
of text. But, as any filmmaker or novelist knows, a text is primarily a selected
quantity and sequence of visual or linguistic materials, and to make text is
to create those materials. In turn, the materials generate a play of mean-
ings, which the reader/viewer will eventually come in among, finding and
inventing his or her own; but this is secondary. In fact, such theories of
reading silently presuppose a text that is already created; to conceive of a
text as a set of meanings implies approaching it when already constituted
and seeing what has already been made. The point of view of the critic or
student or reader is melded here with the functioning of the text. This is
not exactly an error: it is how texts appear to such people (Iser’s work was
rooted in phenomenology), and for almost its entire existence a text will
consist of a fixed or almost-fixed set of already-created materials. The
source of theory’s assimilation is that it cannot conceive of a meaningful
form of the text which is not already materially constituted; nor does it see
why it should.
However, Barthes’ short essay “From Work to Text,” a central piece of
post-structuralist literary theory originally published in 1971, highlights
another aspect of the question. He attempts here to define Text (capitalized
throughout) as a post-structuralist form of writing that stands in contrast
to the traditional literary “work.” Isolating seven differences between the
two, Barthes describes Text as: not “contained in a hierarchy”; “structured
but decentered”; “plural, [depending] not on the ambiguity of its contents
but on what might be called the stereographic plurality of its weave of signi-
fiers”; “woven entirely with citations, references, echoes, cultural languages
. . . which cut across it through and through in a vast stereophony”; shorn
of “the inscription of the Father”; and “bound to jouissance.”11 These are all
classically post-structuralist; the digimodernist may not be inclined to
write like this (may find it a historical mode of thinking) but would not feel
the need to jettison it. Picking up an earlier point that “the Text is experi-
enced only in an activity of production,”12 Barthes also argues that:

The Text . . . decants the work (the work permitting) from its con-
sumption and gathers it up as play, activity, production, practice.
This means that the Text requires that one try to abolish (or at the
very least to diminish) the distance between writing and reading,
in no way by intensifying the projection of the reader into the work
but by joining them in a single signifying practice . . . The history of
music (as a practice, not as an “art”) does indeed parallel that of
the Text fairly closely: there was a period when practicing amateurs
were numerous (at least within the confines of a certain class) and
“playing” and “listening” formed a scarcely differentiated activity;
then two roles appeared in succession, first that of the performer, the
interpreter to whom the bourgeois public (though still itself able to
play a little—the whole history of the piano) delegated its playing,
then that of the (passive) amateur, who listens to music without being
able to play (the gramophone record takes the place of the piano). We
know that today post-serial music has radically altered the role of
the “interpreter,” who is called on to be in some sort the coauthor
of the score, completing it rather than giving it “expression.” The Text
is very much a score of this new kind: it asks of the reader a practical
collaboration.13

From a digimodernist point of view, this sounds like the straining labor
pains that promise to end in the birth of the digimodernist text. Seen from
a vantage point almost forty years on, Barthes appears to be signaling the
arrival of something yet to be materially possible but which he has theoret-
ically described and greeted (postmodernism as the unwitting mother of
digimodernism). It is as if he is clearing an intellectual and artistic space
for a textuality he cannot yet see, but which he is thereby helping to bring
into existence. To be sure, whether he would have welcomed any of the
actual examples of digimodernism we have so far is a moot point; however,
J. Hillis Miller, a doyen of American deconstruction, described Wikipedia
as “admirable” in an essay on Derrida that adopted its practice of disam-
biguation (so who knows).14 While Barthes’ essay ends with the proto-
digimodernist declaration that “[t]he theory of the Text can coincide only
with a practice of writing” this is subsumed by his recognition that his
remarks “do not constitute the articulations of a Theory of the Text.”15 The
essay is to be read as prophetic and not descriptive, as a call for a theory still
to be written. It is clear that the coming of digimodernism removes, in one
wrench, all the cultural privileges which throughout postmodernism
accrued to theorists as the hieratic investigators and interpreters of the
mystery of the text. The textuality of digimodernism downplays the critic’s
naturally belated relationship to text in favor of growth and action in
the present. Theorists may yet find ways to get their privileges back; indeed,
during the last decade of his life Barthes himself can increasingly be seen
as working through these issues on a theoretical level.
Other readers have raised objections that parallel the one I’ve discussed
here. For instance, I’ve been told that Baudrillard’s take on Disneyland in
his 1981 essay “The Precession of Simulacra” already contains everything
I’ve called digimodernist; but while a theme park is a text concretized by
physical action (you must travel around it), it isn’t materially invented by
that action—it was wholly constituted before any visitor arrived (it’s a post-
modern textuality, like most loci of mass tourism). Again, Baudrillard’s
comments in the same essay about a fly-on-the-wall TV documentary
shown in 1973 don’t short-circuit a theory of digimodernism; I don’t have
to reach back ten years for my TV examples, or ten hours, come to that.
In Chapter 3 I’ll consider the ways in which our era is characterized by
the move to the cultural center of what had previously been a disreputable,
buried, or just exceptional textuality. But the digimodernist text is, because
of technological innovation, really new, something genuinely never before
seen, and indirect evidence for this comes in the next section.

The Antilexicon of Early Digimodernism

One sign of the novelty of the digimodernist text is that none of the tradi-
tional words describing the relations of individuals with texts is appropriate
to it. The inherited terminology of textual creation and reception (author,
reader, text, listener, viewer, etc.) is awkward here, inadequate, misleading in
this newly restructured universe. So new is it that even words recently devel-
oped to step into the breach (interactive, nonlinear, etc.) are unsatisfactory.
Of course, in time this new kind of text will evolve its own seemingly inevi-
table lexicon, or perhaps existing words will take on new and enriched
senses to bear the semantic load. Aiming to contribute nothing directly to
this linguistic growth, I am going instead here to assess the wreckage of the
current lexical state, thereby, I hope, helping to clear enough ground to open
up the conceptual landscape a bit more to view. Like all dictionaries, what
follows should really be read in any order: the reader is invited to jump non-
sequentially around the entries, which inevitably overlap.

A is not exactly for Author

Central to postmodernism and to post-structuralism was their vigorous
repudiation of the figure of the author. Roland Barthes in a famous essay
published in 1968 declared that “the birth of the reader must be at the
cost of the death of the Author” and called for the latter’s “destruction”
and “removal” from the field of textual criticism.16 Coupled with Michel
Foucault’s subsequent weak conception of the “author-function,” this
stance became orthodoxy among post-structuralist critics.17 Written self-
consciously “in the age of Alain Robbe-Grillet and Roland Barthes,” John
Fowles’s postmodern novel The French Lieutenant’s Woman critiques
and dismantles the myth of the Author-God, finally revealed as an “unpleas-
ant . . . distinctly mean and dubious” figure.18 Postmodernist culture returns
repeatedly to this debilitated or tarnished image of the author. Martin
Amis’s are obnoxious and louche: a priggish nerd with “sadistic impulses”
in Money, a murderer and murderee in London Fields, and twin preten-
tious morons in The Information: “Like all writers, Richard wanted to
live in some hut on some crag somewhere, every couple of years folding a
page into a bottle and dropping it limply into the spume. Like all writers,
Richard wanted, and expected, the reverence due, say, to the Warrior Christ
an hour before Armageddon.”19 As a symptom of this degeneration, almost
all of the major fictions by one of the greatest of all postmodern authors,
Philip K. Dick, are only, and read like, first drafts: messy, clunky, wildly
uneven, desperate for polishing. Redeemed by their content, these texts’
achievement implicitly junks the Romantic conception of the author as a
transcendent donor of eternal beauty in favor of the haphazardly brilliant
hack.
Digimodernism, however, silently restores the authorial, and revalorizes
it. To do this, it abolishes the assumed singularity of authorship in a redefi-
nition that moves decisively away from both traditional post-Enlightenment
conceptions and their repudiation. Authorship is always plural here, per-
haps innumerable, although it should normally be possible, if anyone
wanted to, to count up how many there are. The digimodernist authorial is
multiple, but not communal or collective as it may have been in premod-
ern cultures; instead, it is rigorously hierarchical. We would need to talk, in
specific cases, of layers of authorship running across the digimodernist
text, and distributions of functions: from an originative level that sets
parameters, invents terms, places markers, and proffers structural content,
to later, lower levels that produce the text they are also consuming by deter-
mining and inventing narrative and textual content where none existed
before. The differing forms of this authorship relate to this text at differing
times and places and with varying degrees of decisiveness; yet all bring
the text into being, all are kinds of author. Though a group or social or
plural activity, the potential “community” of digimodernist authorship
(widely announced) is in practice vitiated by the anonymity of the function
here. We don’t even get Foucault’s author as social sign: the digimodernist
author is mostly unknown or meaningless or encrypted. Who writes
Wikipedia? Who votes on Big Brother? Who exactly makes a videogame?
Extended across unknown distances, and scattered among numerous
zones and layers of fluctuating determinacy, digimodernist authorship
seems ubiquitous, dynamic, ferocious, acute, and simultaneously nowhere,
secret, undisclosed, irrelevant. Today, authorship is the site of a swarming,
restless creativity and energy; the figure of the disreputably lonely or
mocked or dethroned author of postmodernism and post-structuralism is
obsolete.

If I is for Interactive, there’s a love-hate relationship with “inter”

The spread of the personal computer in the 1980s brought with it a new
associated vocabulary, some of which, like “interfacing” or going “online,”
has been absorbed permanently into the language. If the emergence of the
digimodernist text has had a comparable effect you might point to the dis-
course of “interactivity” as an example. Videogames, reality TV, YouTube,
and the rest of Web 2.0 are all supposed to offer an “interactive” textual
experience by virtue of the fact that the individual is given and may carry
out manual or digital actions while engaging with them. I talk about the
difficulties of the passive/active binary elsewhere, so will restrict myself
here to the term’s prefix, one that has, indeed, spread across the whole digi-
tal sphere.
The notion of “interaction” seems inevitable and exciting partly because
it evokes the relationship (or interplay or interface) of text and individual
as a dialectical, back-and-forth exchange. This very reciprocity can be seen,
to an extent, as the kernel of digimodernism; the new prevalence of the
“interactive” nexus and of the prefix in general is a sign of the emergence of
a new textual paradigm. Older terms like “reader” or “writer,” “listener” or
“broadcaster” don’t convey that doubled give-and-take, its contraflow; they
focus on one individual’s role within an inert textual theater. The word
“interactive” then is as textually new as the digimodernism with which it is
identical because it reflects the new textual dimension that has suddenly
opened up: not only do you “consume” this text, but the text acts or plays
back at you in response, and you consequently act or play more, and it
returns to you again in reaction. This textual experience resembles a see-
sawing duality, or a meshing and turning of cogs. Moving beyond the
isolation of earlier words, “interactivity” places the individual within a dia-
chronic rapport, a growing, developing relationship based on one side’s
pleasure alone.
I like “inter” both because it captures the historical rupture with the
textual past in its new ubiquity, and because it highlights the structuration
of digimodernism, its flow of exchanges in time. It’s highly misleading,
though, as well, because it suggests an equality in these exchanges. In truth,
just as the authors of the digimodernist text vary in their levels of input or
decisiveness, so the individual is never the equal of the text with which s/he
is engaging. The individual can, for instance, abandon the text but not vice
versa; conversely, the text is set up, inflected, regulated, limited and—to a
large extent—simply invented well before s/he gets near it. Engaging with
a digimodernist text, s/he is allowed to be active only in very constrained
and predetermined ways. In short, the creativity of this individual arrives
rather late in this textual universe.
A better understanding of digimodernist authorship would clarify the
nature of interactivity too, which often seems reduced to a sort of “manual-
ity,” a hand-based responsiveness within a textuality whose form and con-
tent were long ago set. Your “digital” interventions occur here when, where,
and how they are permitted to. But I won’t let go of the glimpse of the new
textual machinery that is conveyed by and contained within “inter.”

L is sort of for Listener

Two versions of listening are familiar to us: the first, when we know we are
expected to respond (in a private conversation, in a seminar, meeting, etc.);
the second, when we know we will not respond (listening to music or
a politician addressing a rally, etc.). The social conventions governing
this distinction are fairly rigorously applied: they make heckling, the act
of responding when not supposed to, inherently rebellious, for instance.
Listening has then a double relationship with speech or other human
sound creation, like music: it can only be done, obviously, when there is
something to listen to; and it differs qualitatively according to whether the
listener knows s/he is expected to respond. In one case, we can probably
assume that s/he listens more closely, does nothing else at the same time; in
the other s/he may start and stop listening at will, talk over the discourse,
and so on. Varying contexts produce varying intensities of listening, though
it remains always a conscious, directed act (distinct from the inadvertency
or passivity of hearing). The corollary of this is that the grammar of what
we listen to also embeds these social conventions. When we are expected
to respond, the discourse offered will tend to the second person (“you”),
either explicitly (e.g., questions, orders) or implicitly (e.g., a story that pro-
vokes the response “something similar happened to me”). When not
expected to respond we will probably listen to first-person plural modes
(“we,” the implicit pronoun of the stand-up comic) or third person (“s/he,”
“they”), although politicians and others will sometimes employ rhetori-
cally the second person to create an actually bogus sense of intimacy (“Ask
not what your country . . .”).
Radio, traditionally, offers sound to which we know we will not respond:
third person, easily capable of being talked over or ignored or sung along
to or switched off in mid-flow. DJs, like politicians, try to create warmth by
generating the illusion that they are speaking to you (this is the whole art
of the DJ) but without using literally a second-person discourse—their
mode is also the comic’s implicit “we.” Digimodernist radio, in which
“listeners” contribute their texts, e-mails, and phone voices to the content
of the show, gives us a different kind of listening, pitched halfway between
the two familiar versions. We are neither expected to respond or unable to,
but suspended between as someone who could respond, who might respond.
We could, as easily as anybody else, send in a text or e-mail or call up the
phone-in line and speak. And perhaps we do: some people will become
regular callers to such programs or repeat contributors of written material,
and their voices and writing take on in time the assured, measured delivery
of the seasoned professional. In so doing, they achieve the conversational
parity of the responding listener. It’s noticeable that such programs permit
their external contributors to make only very brief and concise points. This
is usually explained by “we’ve got a lot of callers” but in some instances,
especially on sports phone-ins like those following an England soccer
match, many of the callers make roughly the same point—they’re not cur-
tailed to allow space for a vast wealth of varying opinions. E-mails and
texts are short too even though they tend to be better expressed and less
predictable than the improvised speech of the presenter. This could again
be due to the psychological effect being sought: the more people who
contribute, the more it could be you contributing, both in terms of the
show’s mood and identity, and as a brute numerical fact.
Similarly, the discourse thrown up by digimodernist radio lies curiously
stranded between the modes typical of the two traditional versions of
listening. It consists, on one level, of the first-and-second person of ordi-
nary conversation: I think this, why do you, and so on. Yet it cannot in fact
be about either of them, partly because the external contributor, in digi-
modernist fashion, is virtually anonymous—to be “Dave from Manchester”
is to teeter on the brink of being anyone at all. So the content of the show
becomes an intimate exchange about public matters, which is why it resem-
bles stereotypical male conversation, like bar or pub talk (and the majority
of contributors are always men). Accounts of personal experience are
tolerated here, but only to clarify a general point. Unlike bar talk, this
discourse has no chance of becoming oriented on private matters since,
though intimately formulated, it belongs to a broadcast public discussion.
The effect, finally, is that the exchanges feel neither really intimate (a faked
I-you-I) nor generally interesting (they make no new intellectual discover-
ies but just stir around the quasi-knowledge and received wisdom of the
presenter and their callers). It’s an attractive model of spoken discourse
because, synthesizing the traits of both common forms, it promises an
unusual richness and potency. But it actually provides neither desired out-
come of listening: neither personalization and intimacy nor clarification
and action. Listening to digimodernist radio does tend to be listening, but
never of the sorts we used to know.

N isn’t yet for Nonlinear (a mess that needs clearing first)

Nonlinear: such a contemporary term! We are always hearing that new
technologies prompt new, nonlinear experiences of texts, though this is a
highly confused terminology. It’s popular because it suggests freedom: to
follow doggedly and obediently a “line” is more oppressive than to scatter
whimsically away from it (compare use of “the beaten track,” which every-
body boasts of getting “off ” and nobody wishes to be seen “on”). If linearity
means to construct the textual experience as running necessarily from its
beginning through its middle to its end, then some digimodernist forms
are in fact ultralinear. Videogames, for instance, pass through these stages;
although you can freeze your position within them for the next time, you
will nevertheless simply resume your linear progression when you return.
You can’t do a bit near the end of the game, then a bit near the beginning;
you follow a line. The innovation of videogames, it seems to me, is that they
are multilinear: you follow, each time, a slightly different line, and these
various strands lie virtually side by side as ghostly or actual lines taken.
To a degree this is true of any game (it’s certainly true of chess), but in
videogames it’s textually true: there are characters, plotlines, tasks, and so
on, opened up along one line that are denied another. The multilinearity of
videogames is precisely what differentiates them from other textual forms.
A duller version of digimodernist ultralinearity is the DVD. If you had
wanted, in the age of video, to show a class the similarities between the
hat-passing scene in Beckett’s Waiting for Godot and the lemonade stall
sequence in the Marx Brothers’ Duck Soup, you could have cued your two
tapes just before the bits in question, then slid them into the seminar room
VCR at the appropriate time. Try to do this with DVDs and you spend five
minutes per film trudging through studio logos, copyright warnings
(ironically), adverts and the rest of the rigmarole, because DVDs enforce
a beginning-middle-end textual experience. Again, though, they are mul-
tilinear: whereas a video offers only one version of the movie, a DVD offers
twenty, with different audio and/or subtitle settings, with the director’s or a
critic’s commentary overlaid, and more. They sit side by side on the DVD,
mostly ignored by the viewer; ultralinearity here is multilinearity.
What is often called nonlinearity is actually nonchronology, the jump-
ing around in time of stories such as Eyeless in Gaza, Pulp Fiction, Memento,
or Waterland. They are still, though, linear textual experiences. Reading
and viewing are necessarily linear—you might skip, but you wouldn’t
jumble the chapters or sequences—whereas rereading and re-viewing will
often focus on fragments, episodes, scenes; I’ve only read Ulysses from start
to finish once, but I’ve read the “Cyclops” section five times at least. To
return to a text is to permit a nonlinear experience. Yet in practice this is
only the replacement of a totalized linearity with a restricted one: I still
tend to read the “Cyclops” pages in order or, if I jump around, I read the
lines in order—the linearity is ever more straitened, but indestructible.
As for new digimodernist forms, like the Internet, the terms that seem
to me most apposite are antisequentiality and ultraconsecutiveness. By
sequence I mean a progression in which each new term is logically pro-
duced by its predecessor or a combination of its predecessors (compare the
Fibonacci sequence); by consecutiveness I mean a progression in which
the new term is simply adjacent, in time or space, to the previous one with-
out there necessarily being an overall systematic development. As you click
your way around the Internet or one of its sites, each shift of page takes
you, inevitably, to one that is cyberspatially adjacent, even if that adjacency
is found via the intermediation of a search engine. Moving from one page
to the next contains its own logic, but a series of ten or twenty moves will
produce a history with no overall logical arc; it’s not random but it’s not
governed by a totalizing pathway either. The fact that it has no beginning,
middle, and end (its mooted nonlinearity) is not very interesting for me,
partly because, as with rereading Ulysses, these are reproduced at more local,
straitened levels, and partly because it’s more useful to define it as a pres-
ence, an activity, than as a lack. Internet sweeps (what used to be called
surfing) seem to me necessarily consecutive, condemned to the tyranny of
the adjacent at the expense of the overall. They therefore bear two hall-
marks: they are one-offs, virtually impossible to repeat, and, the corollary,
they are intrinsically amnesiac—the brain cannot reconstruct them in the
absence of a logical, overarching shape, so finds it difficult to remember them.
Such sweeps tend to be antisequential, but not absolutely: each term may
derive logically from the last, but a more complex, developed sequence
becomes increasingly hard to discern. This is a complex field, where termi-
nological precision is so far somewhat elusive, but stopping the habit of
mindlessly boasting of nonlinearity would help.
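To recapitulate the key distinction schematically (a gloss in elementary
notation of my own terms, nothing standard): a sequence such as Fibonacci’s
obeys a generating rule,
F(1) = F(2) = 1, F(n) = F(n-1) + F(n-2) for n ≥ 3,
so that every term is logically produced by its predecessors; an Internet
sweep, by contrast, guarantees only that each page p(n+1) is adjacent to
p(n), with no overall rule f such that p(n) = f(p(1), . . . , p(n-1)).
Adjacency at every step, but no arc across the whole.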

P isn’t for Passive (and Active is in trouble, too)

One of the most misleading claims the digimodernist text and its prosely-
tizers can make is that it provides an active textual experience: that the
individual playing a videogame or texting or typing Web 2.0 content is
active in a way that someone engaged in reading Ulysses or watching
Citizen Kane isn’t. This is self-evidently something in its favor; no one
wants to be “passive.” It’s typical of digimodernism that its enthusiasts
make vigorous and inaccurate propaganda on its behalf; the vocabulary
of “surfing” the Internet common in the 1990s, where a marine imagery of
euphoria, risk, and subtlety was employed to promote an often snail-paced,
banal, and fruitless activity, seems mercifully behind us. But the hype
differentiating the new technologies’ supposedly terrific activeness from
the old forms’ dull passivity is still extant, and very misleading it is too.
It’s true that the purer kinds of digimodernist text require a positive
physical act or the possibility of one, and the traditional text doesn’t. Yet
this can’t in itself justify use of the passive/active binary: you can’t suppose
that an astrophysicist sitting in an armchair mentally wrestling with string
theory is “more passive” than somebody doing the dishes just because the
latter’s hands are moving. Mere thought can be powerful, individual, and
far-reaching, while physical action can become automatic, blank, almost
inhuman; in terms of workplace organization, a college professor will be
more active (i.e., self-directing) than a factory worker. The presence of
a physical “act” seems in turn to suggest the word “active” and then its
pejorative antonym “passive,” but this is an increasingly tenuous chain of
reasoning. It’s one of those cases beloved of Wittgenstein where people are
hexed by language. Yet the mistake is symptomatic: how do you describe
experientially the difference between the traditional and the digimodernist
text? It’s a tricky question, but one that at least assumes that there are such
differences, which here is the beginning of wisdom.

P is also for a doubly different idea of Publishing

A friend of mine (though he’s hardly unique) thinks that Web 2.0 offers the
biggest revolution in publishing since the Gutenberg Bible. Anyone can
now publish anything; it’s democratic, open, nonelitist, a breaking down of
the oppressive doors of the publishing cabal which for centuries repressed
thought and decided what we could read; it’s a seizing of the controls of the
publishing world by the people for the people. If this were true, it would
indeed be as exciting as my friend thinks. Sociologically, publishing has
always defined itself as the sacralizing of speech: whereas speech dies the
instant it is spoken, and carries only to the geographical extent reached by
the volume of the voice, the publishing of text enables utterances to endure
for centuries, even millennia (though increasingly unstably), and to be
transported to the furthest point on our planet, even beyond. Temporally
and spatially, published text is, at least potentially, speech equipped with
wondrous powers, furnished with immense resources. It isn’t surprising
that such text has accrued a similarly wondrous and immense social pres-
tige (even if, in practice, the great majority of it is soon destroyed). We all
talk, but few of us talk to everyone forever. Publishing a book is the edu-
cated adult’s version of scoring the touchdown that wins the Super Bowl.
It’s this glamour, this prestige that my friend assumes Web 2.0 lets everyone
in on, and that he’s gotten so excited about.
Leaving to one side for now the issue of whether everyone can or ever
will access Web 2.0, let us imagine a world in which they do. The Web is
indeed responsible for a stupendous increase in the volume of published
material and in the number of published writers. Though held in electronic
form rather than on paper, this text fulfills the definition of publication: it
is recorded, in principle, for everyone forever. This is the first new idea of
publishing. However, and more problematically, this innovation comes at
the expense of a second: the loss of the social prestige associated with the
publishing of text. It isn’t only that so much UGC is mindless, thuggish,
and illiterate, though it is. More awkwardly, nothing remains prestigious
when everybody can have it; the process is self-defeating. In such circum-
stances the notion of a sacralizing of speech becomes obsolete.
To argue that the newly opened world of publishing is a newly devalued
world seems patrician, antidemocratic, even (so help us God) “elitist.”
Furthermore, it’s not strictly valid. Through, for instance, the placing
of academic journals online, the Internet has also increased the quantity of
easily accessible, highly intelligent, and well-informed written matter, and
it sits cheek-by-jowl with the vile and ignorant stuff on search engine results
pages. What will probably occur in the future will be a shift in our idea of
publishing toward greater stratification and hierarchy, internally divided
into higher and lower forms. The quantity of publication will continue to
rise to unimaginable heights, but unendowed now with social prestige.
How long it will take for the sacred aura of published text to go is anybody’s
guess, but the likelihood is that there will be nothing “nonelitist” about it;
differentiation will simply re-form elsewhere according to other criteria.
This may be a meritocratic hierarchy, whereby text is judged for what it
says rather than what it is, but I wouldn’t want to bank on it.

R is, amazingly, for Reading (but don’t rejoice yet)

Authors of best-selling jeremiads about contemporary society frequently
bemoan a widespread decline in reading. Young people today don’t know
about books, don’t understand them, don’t enjoy them; in short, they
don’t read. Christopher Lasch, decrying in 1979 the “new illiteracy” and
the “spread of stupidity,” quoted the dean of the University of Oregon
complaining that the new generation “‘don’t read as much.’”20 For Lasch
himself, “students at all levels of the educational system have so little knowl-
edge of the classics of world literature,” resulting in a “reduced ability
to read.”21 Eight years later Allan Bloom remarked that “our students have
lost the practice of and the taste for reading. They have not learned how to
read, nor do they have the expectation of delight or improvement from
reading.”22
Such comments—repeated so regularly by commentators they have
become orthodoxy—assume the prestige of publication: “reading” will
be of “books” which will often be “good,” or at least complex and mind-
stretching. A quantitative decline in reading (fewer words passing intelli-
gently before a student’s eyes) can therefore be safely conflated with a
qualitative decline (fewer students reading Shakespeare, Tolstoy, Plato).
But the digimodernist redefinition of publishing goes hand in hand with a
recasting of the sociocultural status of reading. In short, digimodernism—
through the Internet—triggers a skyrocketing rise in quantitative reading
as individuals spend hours interpreting written material on a screen; but it
also reinforces a plunging decline in qualitative reading as they become
ever less capable of engaging mentally with complex and sophisticated
thought expressed in written form.
You do wonder what Lasch or Bloom would have made of the sight of
a campus computer suite packed with engrossed students avidly reading
thousands upon thousands of words. Yet although the Internet has brought
about a vast sudden expansion in the activity of reading among young
people, it has done so at the cost of heavily favoring one kind: scanning,
sweeping across printed matter looking for something of interest. If liter-
ary research is like marriage (a mind entwined with the tastes, whims, and
thoughts of another for years) and ordinary reading is like dating (a mind
entwined with another for a limited, pleasure-governed but intimate time),
then Internet reading often resembles gazing from a second-floor window
at the passersby on the street below. It’s dispassionate and uninvolved,
and implicitly embraces a sense of frustration, an incapacity to engage.
At times it’s more like the intellectual antechamber of reading, a kind of
disinterested basis to the act of reading, than the act itself. Internet reading
is not, though, just scanning: it accelerates and slows as interest flickers and
dies, shifts sideways to follow links, loses its thread, picks up another. What
is genuinely new about Internet reading is the layout of the page, which
encourages the eye to move in all two-dimensional directions at any time
rather than the systematic left to right and gradually down of a book.23 The
screen page is subdivided by sections and boxes to be jumped around in
place of the book page’s immutable text and four margins. This, along with
the use of hyperlinks, makes Internet reading characteristically discontinu-
ous both visually and intellectually. It’s interrupted, redefined, displaced,
recommenced, abandoned, fragmentary. It’s still unclear how the revolu-
tionary layout of the Internet page will affect reading in its broadest sense,
but there doesn’t seem much good news here for advocates of training in
sustained, coherent, consecutive thought. In the meantime it’s noticeable
that many student textbooks and TV stations have adopted the subdivided
layout (oddly, when you can’t actually click on anything).
Most people who have seen message-board comment on something they
had published online would probably conclude that Internet reading is
just bad: quick, slapdash, partial. Much
comment is so virulent in tone it suggests a reader seething with a barely
suppressed impatience to leap into print. As academics know, reading-to-
write (e.g., book reviewing) is very different from just reading, and while
alert subeditors will channel critics into some semblance of fair judgment,
message boards impose no such intellectual quality control. But bad
reading is as old as reading itself: Lolita, Lucky Jim, and Ian Fleming’s
James Bond novels are only the first examples that come to my mind of
preelectronic texts widely misunderstood by their readers. This impatience
and virulence are surely linked to the frustration inherent in reading-as-
scanning. It presumably has a second cause as well, one that will affect the
success or otherwise of the e-book should it finally ever be commercialized
(it’s been promised half my life). If Internet reading is on the whole quali-
tatively poor, as I think it is—it’s often blank, fragmented, forgetful, or
congenitally disaffected—then this can be explained by the unconscious
intellectual unpleasantness of trying to make sense of something while
having light beamed into your eyes. The glow of the screen pushes reading
toward the rushed, the decentered, the irritable; while the eye is automati-
cally drawn to the light it emits (explaining the quantitative surge), the
mind is increasingly too distracted to engage with, remember, or even
enjoy very much what it is given to scrutinize.

T definitely is for Text (but not that one)

Pace Barthes, digimodernism’s emblematic text is very different than
post-structuralism’s key term. Derrida and Lacan were fascinated by the
letter and the postcard; technological innovation produces a newer form.
The text message, several billion of which are digitally created and sent
every day, is by some criteria the most important “textual” mode or recorded
communication medium of our time. It’s ubiquitous, near-perpetual, a
hushed element of the fabric of the environment; on the street, in cafés,
bars, and restaurants, in meetings and lecture halls and stadia, on trains
and in cars, in homes, shops, and parks, thumbs are silently darting over
displays and eyes reading off what’s been received: an almost-silent tidal
wave of digital text crashing upon us every minute of our waking lives.
Manually formed, the text message concentrates, in a happy semantic
coincidence, most of the characteristics of the digimodernist text. Con-
stantly being made and sent, it exists culturally in the act of creation more
than in finished form; though you see people texting all the time, the mes-
sage inheres only in its formation and immediate impact (like a child’s cry).
Almost the whole lifespan of the text consists in its elaboration. It is
ephemeral and evanescent, even harder to hold on to than the e-mail; biog-
raphers who depend professionally on stable, enduring private messages
written and received by their subject look on the SMS and despair. It’s
almost anonymous: if the letter has no author (Foucault), it at least has a
signatory, regularly elided by texts. Indeed, it’s the lowest form of recorded
communication ever known: if speech tends to be less rich, subtle, sophis-
ticated, and elegant than writing, then the text places itself as far below
speech again on the scale of linguistic resourcefulness. It’s a virtually illiter-
ate jumble of garbled characters, heavy on sledgehammer commands and
brusque interrogatives, favoring simple, direct main clauses expressive
mostly of sudden moods and needs, incapable of sustained description or
nuanced opinion or any higher expression. Restricted mostly to the level
of pure emotion (greetings, wishes, laments, etc.) and to the modes of
declaration and interrogation, it reduces human interaction to the kinds
available to a three-year-old child. Out go subclauses, irony, paragraphs,
punctuation, suspense, all linguistic effects and devices; this is a utilitarian,
mechanical verbal form.
The text is, of course, a very useful communicative tool, so useful there
is no good reason to go without it. The danger lies in the effect it may exert,
if used to excess, on all other forms of communication. Teachers who spot
their teenage charges texting under their classroom desks have noted the
use of similar verbal styles in their formal school work (e.g., writing “cus”
for “because”). They may also identify in them a parallel tendency to
a speech that is equally abbreviated, rushed, and fragmentary, reduced to
simplistic and jumbled bursts of emotion or need. The comedy characters
Vicky Pollard and Lauren Cooper, so successful recently in Britain as
emblems of a certain kind of contemporary adolescent, speak with the
expressive poverty and the breakneck fluency of the text message. The
SMS is to discourse what fries are to nutrition: all depends on the wider
communicative context.

T isn’t for Typist, but it’s very much for typing

Truman Capote famously and sourly remarked of Jack Kerouac’s work:
“that’s not writing, it’s typing.” By this he meant that “writing” was a cre-
ative and intelligent action, whereas “typing” was mechanical, mindless,
and reactive. In the world of work, this bifurcation was reflected in his day
by the employment of women as “typists” whose task was to uncompre-
hendingly and automatically convert the creative, intelligent outpourings
of their male superiors. Challenged by feminism and by industrial restruc-
turing, this hierarchy was finally demolished by the spread of the word
processor in the 1980s. In the digimodernist age, everyone types all the
time (to be a “typist” is increasingly just to have a job). In this dispensation,
typing is no longer the secondary and inferior adjunct to writing, but the
sole method of recording discourse. There is no other term (more and
more Capote’s sarcasm will become unintelligible). What digimodernism
therefore looks forward to is a world without writing, that is, one where
nobody manipulates a pen or pencil to record discourse; it suggests a time
when children will never learn how to write and be taught, instead, from
infancy how to type. There is something scary about a society where no
one writes, where no one knows how to hold and wield some sort of pen,
since writing has always been the symbol of and identical with civilization,
knowledge, memory, learning, thought itself. The idea, assumed by Capote,
that writing’s absence is somehow dehumanizing, haunts us; not to teach
a child how to write feels like consigning him or her to an almost bestial
state. And yet there is no reason today to imagine that we are not heading
toward such a world. Already the e-mail and SMS have largely superseded
the phone call, which itself saw off the letter; we have passed from writing
through speaking to typing, and while the newer form can coexist with its
downgraded forerunner, something must logically at some stage become
obsolete. Negotiating that may be a key challenge of our century. For now,
we early digimodernists are stranded: we can write but have less and less
need to, and we type but have never been trained to. It’s a part of the char-
acteristic helplessness of our age.

U is hardly for User (or up to a point)

The term “user” is commonly found in expressions such as “user-generated
content” to describe someone who writes Wikipedia text or uploads You-
Tube clips or develops their Facebook page or maintains a blog. It has also
been employed in TV, especially through the intriguing new portmanteau
word “viewser.” Yet it brings its own set of linguistic problems. The idea
of “use” suggests a means to an end (a spanner used to tighten a nut, an
egg-whisk used to whisk an egg) whereby a tool plays an instrumental role
in achieving a logically distinct objective. Here, however, it is difficult to
identify such an objective since the acts in question appear to be their own
end (“communication” is too vague an ambition, and incompatible with
the anonymity of the Web). Equally, there’s no identifiable tool involved:
contrary to the egg-whisk or spanner, which were invented to answer an
existing need, the computer predates and exceeds any of the applications
of Web 2.0. Furthermore, “usage” would seem grammatically to refer more
to reading or watching material than creating it (compare “drug-user,”
where the consumer and not the producer is denoted), rendering UGC a
contradiction in terms.
Despite its final inadequacies, it’s easy to see the initial attractiveness of
the word. For one, it conveys the crucial digimodernist quality of a physical
act, and it gives to this act the vital connotation of working a machine.
True, it’s misleading in that it distances us from the elaboration or manu-
facture of text or textual content, for which terms drawn from publishing
(author, reader, etc.) have already been tried and found wanting. Filming
your friends and putting the result on YouTube is so much more like
writing a short story than it is like using a trouser-press that the rejection
of a publishing jargon for a mechanistic one is unhelpful. Nonetheless,
the word “user” does succeed in taking the necessary step beyond the
overspecificity of “reader,” “filmmaker,” “writer” toward the polyvalent and
shifting textual intervenant of digimodernism. This figure slides typically
between maker and consumer, reader and writer, in a seamless complex
singularity; and even in its vagueness “use” does suggest both engagement
with a technology and the inescapable multiplicity, the openness of that
act.

V is no longer for Viewer (you might think)

Given all of this, can someone sitting on a couch in front of a digimodern-
ist TV program really be called a “viewer” any more? The term struggled
initially into existence, finally being deliberately selected from an assort-
ment of words indicating sight; it lacks naturalness, or once did, and
while a change of terms several decades into a medium’s existence seems
unlikely, it already jars the ear in certain contexts with its incongruity.
Some have suggested the portmanteau word “viewser” to describe an
engagement with TV that is both optical and manual, as in the combined
act of watching and voting in Big Brother or otherwise actively participat-
ing in the editing and production of a show while gazing at it from outside.
A clever pun, the term nevertheless inherits all the problems faced by
“user”—it’s like correcting a car’s faulty steering by removing a wheel.
It should also be borne in mind that the viewer is far from obsolete, in
two senses: first, many TV shows, like soaps and sitcoms, invite no manual
action and imply a reception that can be defined optically; and second,
even in the case of the digimodernist program the manual action relies on
a prior optical experience—you only vote meaningfully on Big Brother after
watching it, while many of its viewers won’t vote at all. Viewing hasn’t
become vieux jeu: it’s the essential condition of “use,” and not vice versa;
more precisely, digimodernism builds beyond it.
However, there is no word at all (yet) for the individual who watches and
votes, votes and watches in a spiraling weave of optical/manual actions.
Digimodernist TV invents, then, an (extra)textual person for whom we do
not have a name since their actions and standing are so new. And the
attraction of the term “viewser” is that it can be transferred to any Internet
site privileging UGC: on YouTube or Wikipedia or message boards, an
optical act (reading, watching) intertwines with a potential or real act of
creating text. What do you call such a person? A reader, yes, a writer too, or
some new term beyond both?
3
A Prehistory of Digimodernism

Nothing will come of nothing
King Lear, I. i. 89

It can’t be forgotten how young, how freshly emergent digimodernism is
(or was) at the time of writing this. When I first began thinking about these
issues in 2006 it was less than a decade old in any identifiable form, and as
a cultural-dominant its age was still in single figures on this book’s publica-
tion day. This is particularly relevant to the history of the digimodernist
text. On one hand, it’s undeniable that such a text, reliant as it is on new
technologies simply nonexistent in, say, 1990, is, in its fully fledged form,
a totally new phenomenon. On the other hand, with the digimodernist
era still so young, and with everything yet to solidify into enduring form,
it seems absurd to be speaking already of a “fully fledged” digimodernist
textuality. All of this is going to shift and develop considerably for a while
yet, perhaps finding shapes unrecognizably different than the one outlined
in the previous chapter. Moreover, surely nobody wants to play a game
of “more digimodernist than thou,” whereby various texts—YouTube clips,
Jackson’s Lord of the Rings, Grand Theft Auto IV—are placed side by side on
some spectrum leading from “utterly digimodernist” down to “not digi-
modernist at all.” This would be wrongheaded, not least because the final
meaning of the term “utterly digimodernist” is still, it seems to me, up for
grabs.
Nevertheless, there are certain traits intrinsic to the digimodernist text
that existed already before the arrival of digital technology gave them their
distinctive contemporary form. We can speak of a proto-digimodernism,
by which texts lacking the necessary technological apparatus, and emerging
into a cultural climate unfavorable to such modes, displayed some of the
characteristics of what was to come: digimodernism avant la lettre, and
the science too. In this chapter I’m going to survey a selection of these.
Digimodernism’s very youth means it cannot be fully described in isolation
from its prehistory; any kind of temporal perspective will soon take us
back to before its proper existence. This is not an attempt to write a history
of the digimodernist text (there is no space for that) but to suggest roughly
what its gestation looked like. Moreover, the question of how digimodern-
ism grew out of and superseded postmodernism remains a crucial one,
both to understand the newer form and to shape its development. Most of
the texts discussed here belong to the postmodern era, partly because that
is my area of expertise, and partly because any biography, even one of a
textuality, interests itself with its subject’s parents’ lives too. If they also
derive mostly from the English-speaking world, that is largely because, if
postmodernism gave a starring role to continental Europe, digimodern-
ism, at least so far, tends to be Anglo-American.
The danger is that this chapter will indeed seem to suggest a spectrum
of more-or-less digimodernist textual forms, and perhaps even to attribute
value to wherever on that spectrum a text is misguidedly thought to be.
For myself, I find it hard to see that a form of textuality, without even
considering its content or style, can be, in and of itself, more valuable than
another. However, such a hierarchy is endemic to our culture, and what all
the texts here do have in common is their socioculturally marginal status.
If I highlight this now it is in order to resist another evaluative instinct, one
conditioned by decades of programmatic postmodernism, which cannot
see a cultural margin without calling either for it to be moved to the center
or for all invidious notions of margins and centers to be overthrown.
Postmodernism successfully, and admirably, brought into the cultural cen-
ter previously marginalized textual forms from women’s writing to postco-
lonial literature, and while the days of such remapping of the cultural
landscape are not over yet (there remains work still to be done), this is
an inappropriate place to be thinking in such terms. The contemporary
digimodernist text does not need dragging out of the cultural margins into
the center, because the center is where it already is. It’s what counts today,
what we hear about; it’s the cultural-dominant; as the development of digi-
modernist cinema makes clear, it has violently remade, and is still remak-
ing, everything else in its own image. Often it doesn’t seem to realize its
own strength; and those attached to older forms, from 1980s TV to 1970s
film, are often, indeed usually, scathing about what they consider the new-
fangled texts’ shortcomings. In terms of prestige the digimodernist text is
still marginal: as potential form it’s all-controlling, but as actual presence
it’s a whipping-boy. And to a great extent the virulence of that criticism
has been deserved. The point, though, is not to dream fondly of a return to
a supposedly superior, earlier textuality, but to breathe greatness into the
one we have (easier called for than done).
As examples of embryonic digimodernist texts, those analyzed here
take in most cultural forms: there are representatives of cinema, photo-
graphy, journalism, television, music, literature, and the performing arts,
arranged in roughly reverse chronological order. Of those that were
finally excluded from this survey, the most notable perhaps was the recep-
tion/rewriting of the film version of The Rocky Horror Picture Show (1975),
in itself a postmodern free-for-all of pastiche, parody, camp, free-floating
desire, gender as performance, multiple genre-mixing, and irony. Late night
movie theater audiences gradually evolved around it a separate proto-
digimodernist textual experience anchored in the film but not altering its
substance: dressing up as the characters and dancing or acting out as they
do, they threw rice (at a wedding) and toast (at a toast), or donned paper
hats and rubber gloves and party hats in line with the characters, or shouted
out set interjections so as to seem to “interact” with them. Consequently,
there emerged two parallel, intercrossing material texts: one fixed and of its
time, and one, haphazardly growing, collective, and anonymous, anticipat-
ing our own (and one much more fun than the other). A recent derivative
of this is Sing-a-long-a Sound of Music, which became popular in Britain in
the 2000s: the audience dresses up as nuns or Austrian children, and riot-
ously sings the subtitled songs as they appear on screen. Also omitted was
discussion of Dungeons and Dragons and other role-playing games, while
“happenings” in the 1950s–60s’ sense, a form of theater manifesting many
of the characteristics of digimodernism (fluidity of participant-function,
textual haphazardness, evanescence), proved unreconstructable for detailed
analysis here.1

Industrial Pornography

It’s the early 1950s, or 1850s. You are walking alone through a wood on a
mild spring or summer day. From a distance you espy a couple. They are
having sex. What do you actually see? Or try this. It’s the 1900s, or 1940s.
One afternoon, alone, you glance from your window. Across the way the
drapes are almost wholly drawn, but there’s a gap, and from the angle you’re
looking along the gap leads in to a mirror on a wall, and as chance would
have it the mirror reflects slantwise a couple on a bed having sex. What
actually do you see? And in both cases, suppose the couple is averagely

self-conscious, neither furtive nor exhibitionist. And that you feel nothing:
not curiosity, or shame, or disgust, or excitement. Your eyes are a camera.
What do they record?
On one level, the answer is self-evident: you see a couple having sex, of
course. More precisely, you probably see a conglomerate of limbs, a mass of
hair, a jerking male behind, a quantity of physical urgency or tension. On
another level, the question is paradoxical, because the total situation here
of viewer and viewed (people being watched having sex) structurally repli-
cates the ostensible reception and content of industrial pornography; but
the glimpsed actions probably wouldn’t resemble those of porn at all. Why
wouldn’t they?
Industrial pornography is a product, it would seem, of the 1970s; its
origins lie in the heartlands of postmodernism. Yet its textual and repre-
sentative peculiarities make it both emblematic of postmodernism and a
precursor of digimodernism; indeed, it has shifted into the new era much
more smoothly than have cinema or television. We don’t need to waste too
much time on what differentiates industrial pornography from other porn,
from “erotica” or “art,” and so on; these are essentially legal battles. Three
points are unarguable: that there exist texts whose principal or sole aim is
to stimulate sexual excitement in their consumer; that some of these texts
manifest a standardization of content and a scale of distribution that can
be called industrial; and that, as a generic label, “industrial pornography”
is in places as smudged in its definition as, say, “the war movie” or “the
landscape picture.” The label suggests the vast scale of pornographic pro-
duction and consumption over the past thirty years or so, along with the
(relative) openness of its distribution and acquisition. It therefore excludes
material aimed at niches, some of which, involving children or animals, is
more accurately classed as recordings of torture; “pornographic” perfor-
mance is, by definition, exchanged for money.
Above all, industrialization manifests itself here as a standardization
of product. It is always the same poses, the same acts, its performers made
to converge on a single visual type. Everything nonsexual is rigorously
cut out; the “actors” or models are identified solely with their sexual attrac-
tiveness or potency. The predilection of early hard-core movies like Deep
Throat for a detachable plot arc was wiped out by industrialization, which
made it hard to differentiate any one title from the next. Buy a random
industrial porn magazine and you will see a seemingly endless array of
similar-looking individuals in the same positions; rent or buy a random
industrial porn film and similar-looking people will work through the
same acts methodically, systematically, with a soul-crushing repetitiveness.
In both cases, models and scenes are separated off by extraneous material
(articles, “acting”) placed there to distinguish them from each other, and
famously ignored.
From a postmodern perspective, industrial pornography is hyperreal,
the supposed reproduction of something “real” which eliminates its
“original.” The reason for the dates given in the first paragraph of this sec-
tion is that industrial pornography has transformed the sexual practices of
many individuals in societies where it is prevalent, recasting them in its
image. Increasingly, “real” sex tries to imitate the simulacrum of industrial
pornography. Moreover, its staging is often either explicitly or implicitly
self-referential in a recognizably postmodern way: it has a strong sense
of its own status as a representation of sex by paid performers; magazines
discuss their models’ lives as professional models, film actresses gaze at the
camera, and so on. The third postmodern element in this material is its
frequent reliance on pastiche or parody (especially of Hollywood), as a
source of ironic clins d’oeil which also help to achieve a minimum degree
of product differentiation.
From a digimodernist point of view, what characterizes industrial por-
nography is this: it insists loudly, ceaselessly, crucially on its “reality,” on its
being “real,” genuinely happening, unsimulated, while nevertheless deliv-
ering a content that bears little resemblance to the “real thing,” and what
distorts it is its integration of its usage, of the behavior of its user. Take a
soft-core magazine photo spread of a model. As the eyes move sequentially
across the images, she appears to gradually disrobe, turning this way and
that, finally placing herself naked on all fours or on her back with her legs
apart. Very few of the poses derive from the “natural” behavior of women
eager to attract a man; and yet these images will excite many men. The
spread as a whole creates, for the regarding male, the illusion of an entire
sexual encounter: the most explicit images set the woman, in relation to
the camera, in positions she would only adopt seconds before being pene-
trated. Consequently, for the regarding male, the photographed woman
appears to be moving ever closer to intercourse with him. And yet—here is
the digimodernist point—within the logic of the photos she doesn’t actu-
ally get closer to sex with anyone at all, there’s no one else there anyway,
there’s only an increasingly unclothed and eroticized woman. And nothing
in the images explains why her appearance and conduct are changing that
way. The images then are only intelligible, both in their content and their
sequencing, by inserting into them the sexual habits of their male con-
sumer. Otherwise, they look almost bizarre.
This process is found in hard-core movies in even more dramatic
form. Here, sexual positions are adopted solely that someone can watch
the performers who are adopting them, and clearly see their genitalia.
Couples copulate with their bodies scarcely touching, or contort their limbs
agonizingly, or favor improbable geometries, solely in order that penetra-
tion be made visible. Male ejaculation occurs outside of the woman’s body
purely in order that a viewer can watch it happen (nothing in the text
explains such a pleasureless act). The mechanics of hard-core industrial
pornography suggest an unreal corruption, a slippage from sex as it is done
and enjoyed to sex done so that someone else can enjoy seeing it, and this
corruption generally has the unspoken effect of diminishing the partici-
pants’ pleasure. Such positions, the ejaculation shot, and the rest are staples
of industrial pornography not because they yield unrealistically fantastic
sex but because they permit unrealistically visible sex. While deformation
of “known reality” for creative purposes is all but universal in the arts, its
function is doubly peculiar here: first, since the unique selling point of hard
core is its documentary sexual factuality, the distortions simultaneously
betray the genre’s raison d’être and furnish its necessary cast-iron proof,
making them both structurally crucial and self-destructive; and second,
every one of the changes here stems specifically from the systematic and
crude sexual demands of the watching consumer, not from the artfulness
of the creator.
This is equally apparent in the narrative logic of industrial hard-core
porn movies, which integrates their consumption, constructing itself out
of the circumstances of their viewing. If viewing here is the chancy recep-
tion of sexual images, then the circumstances of the encounters seem cor-
respondingly impromptu, the sudden couplings of virtual strangers (the
pizza delivery boy or the visiting plumber and the housewife) both in their
narrative context and in their presentation to the watching gaze. If viewing
is voyeurism with the consent of the seen, then encounters tend to exhibi-
tionism, sex breaking out on yachts or hilltops, in gardens, by pools, such
that the viewer’s “discovery” of naked copulating bodies is mirrored by the
performers’ “display,” both to the viewer and narratologically, of their
nudity and their copulation. If viewing means “happening” on other peo-
ple having sex, then performers do it to fellow cast members too, acciden-
tally entering rooms to find sex in progress, and joining in or watching.
Indeed, the proportion of encounters watched from within the scene as
well as from outside is striking.
Its alloyed digimodernism marks off the hard-core industrial porn film
from any other movie genre, even those, like comedy or horror, which also
aim to stimulate a visceral or physical response. In turn, no genre excites as
powerful a reaction in its viewer, an impact that derives less from its osten-
sible content than from its digimodernist construction. While experienced
perhaps most acutely by fans, hard-core porn tends also to have a fairly
overwhelming or engulfing effect on those who find it disgusting or
tawdry. That engulfing, that outflanking of the viewer is recognizably digi-
modernist and shared to a great extent by videogames and reality TV; each
short-circuits, in a way that elicits inappropriate notions of “addiction,”
a deliberate, controlled response. We will come back to this issue later.
Its digimodernism also means that industrial pornography should be
primarily seen as something that is “used” rather than “read” or “watched,”
employed as an ingredient of a solitary or shared sexual act outside of
which it makes no sense or appears ludicrous. However, it’s undeniable
that, for many reasons, the viewer whose feelings, actions, sightlines, and
rhythms are so efficiently uploaded into and visually integrated by indus-
trial pornography tends to be male. There is little universality about the
use of porn. Women, research suggests, initially find hard-core films as
arousing as men do but lose interest much more quickly, and this may be
because the movies are textually invested, in their content and sequencing,
with the sexual practices, habits, and responses of an expected male viewer.
It is women whose pleasure is most visibly articulated (men’s is self-
contained) or whose fellatio is in all senses spectacular; it’s the woman’s
body that is waxed and inflated to become something it had never previ-
ously needed to be: exciting to stare at during sex. However, textual con-
ventions (regular, monotonous) must be separated here from their possible
reception (perhaps wayward, unexpected): it is not because industrial
pornography reinvents lesbianism solely as an object of male regard, for
instance, that some straight women don’t find it exciting. This discussion is
about textuality, not consumption.
The digimodernism of industrial pornography is doubly partial: it coex-
ists with its postmodernism (an interesting contribution to debates about
their relationship); and the viewer (textually male) does not determine or
contribute to the content or sequencing of the material by any conscious
act. His sexuality, abstracted from him and inserted in heightened form
into what he is regarding, “writes” what he sees through the intermediary
of someone else’s hand—the director’s—which guides his metaphorical
pen. Sitting in a ferment before these images he doubtless does not
know or care why they are the way they are, nor why he is responding so
intensely. Entranced, his digimodernist autism overpowers his individual-
ity just as, functionally, industrial pornography relies on anonymity: the
obvious pseudonyms of the performers, and equally of the consumers
whose experiences contributed to Laurence O’Toole’s book Pornocopia.
On his acknowledgments page O’Toole thanks, increasingly ridiculously,
“‘Nicholas White,’ ‘Jonathan Martin,’ ‘Kate,’ . . . ‘Burkman,’ ‘Shamenero,’ . . .
‘bbyabo,’ ‘dander,’ ‘knife,’ . . . ‘Chaotic Sojourner,’ ‘Thumper,’ ‘Ace,’ . . .
‘thunder,’ ‘SAcks,’ ‘Demaret,’ ‘Tresman,’ ‘der Mouse,’ . . . ‘billp,’ ‘Gaetan,’
‘Zippy,’ ‘Zennor,’ ‘Imperator.’”2 They resemble, tellingly, the names of
Internet UGC contributors or characters in movies heavy with CGI
(computer-generated imagery). O’Toole himself may well be a punning
pseudonym for all I know. Paradoxically, industrial pornography hides
actual people away, and the reverse is also true: for Michael Allen it is the
“great ‘unsaid’” of Hollywood, while academics have long complained of
the impossibility of getting funding for research into it.3 The number of
adults prepared to admit they enjoy it is a fraction of the true figure.
Socially, no other form is so omnipresently occluded, so popular and
disreputable, so centrally marginalized.

Ceefax

I enter my living room, turn on the TV, pick up the remote control, and
retreat to the couch. I choose BBC1, then press a button on the remote
control: the picture is replaced by words on a black screen; I key in 316 on
the control number pad. I have entered the world of Ceefax. In seconds the
current football scores have appeared on the screen (it is 4:10 on a Saturday
afternoon in England). Seeing that Manchester United are winning 1-0,
I go to the kitchen to make a coffee, leaving the TV as it is; when I return at
4:30 I discover that United’s opponents have scored two goals in rapid suc-
cession. I take the remote again and key 150: the latest news flash appears.
Then 102: I get the latest news headlines, five or so words per story, and
choose 110 for an extended (about ninety words) rendition of the story
that interests me most. Then (it is a May afternoon) I key 342 to see the lat-
est score in a cricket match being played; then 320 to see the latest score in
my local team’s football match; then 501 to check out the latest entertain-
ment news; then 526 to see brief (about 130 words) reviews of the latest
films. My partner mentions going for a picnic tomorrow, so 401 gives me
the weather forecast and 426 the predicted pollen count. Back on 316,
I discover that United have banged in two more goals to win their match,
and on 324 confirm—since all the games are now finished—that they top
their league. Feeling indolent, I key 606 to see what is currently showing on
the terrestrial channels; uninspired, I try 643 and 644 to see what’s on the
radio right now; then back to 342 for an updated cricket score. Then I turn
the TV off and play with my son instead.
As this description of using it shows, Ceefax is an electronic newspaper
accessed via a TV set, a closed information system that can be entered and
left but leads nowhere else. Some of what it contains is also in my morning
newspaper (the radio listings), while some (the final football scores) will be
in tomorrow’s; some (the interesting news story) will appear in tomorrow’s
paper in much more detail, possibly along with other articles commenting
on it (Ceefax, as a BBC product, has no opinion pieces of its own, not offi-
cially at least). But much of it—the minute-by-minute updates—will never
be published anywhere, since a print newspaper has to impose a cutoff
time after which its “news” cannot be described differently. Ceefax has no
such cutoff time, or its cutoff time arrives every minute: it is constantly
renewed, constantly rejigged (tomorrow’s weather forecast endlessly
adjusted in response to incoming data), and in this sense it resembles TV
or radio news—if I put either of those on, I might be told the latest cricket
scores or Manchester United’s current position. But then again I might not;
I would rely on a journalist or producer deciding to tell me. Ceefax, how-
ever, puts access to this information in the viewer’s hands: it is there right
now, when s/he wants it.
It’s like a super-newspaper, and has the sections you would get in a
paper: world and national news, business, local, and sports news, the
weather forecast, TV schedules, readers’ letters, quizzes, culture and enter-
tainment news, and so on. In fact it offers many more news items than a
paper would, for varying lengths of time (a news story may last a few hours,
a film review a week), and much less about any particular item than most
papers would. Its skimpiness and lack of contextual background or opinion
make it a supplement to a paper rather than rendering one redundant;
similarly, its lack of visuals—it carries only words and diagrams, no photos
or film—establishes it as a user-oriented complement to TV news but not
as its nemesis. Moreover, while quickly and easily accessed, its informa-
tion—unlike a newspaper’s or TV’s—cannot last. All of Ceefax’s content
is evanescent; the viewer can store it only by photographing the screen
or writing it down. It provides, for instance, on page 888, subtitles for TV
programs currently being broadcast, but if you tape a program and watch
it later the subtitles have disappeared forever. In this, it is inferior to rolling
news channels, which also provide a constantly updated flow of information
but can be recorded and played back later too. Nevertheless, Ceefax does
make aspects of print news less interesting: if, on a January morning at
7:30 a.m., it gives me the latest score and a report on the day’s play in a
cricket match in Australia played, because of the time difference, between
about midnight and 7 a.m. GMT, I am then way ahead of my print newspaper,
which, at 8 a.m., will provide me (having been put to bed before midnight)
with the score and its meaning from the previous day’s play (cricket matches
can stretch over several days). Even before I buy it, my paper is badly out of
date. Furthermore, as cricket matches can fluctuate considerably, I might
flick disconsolately through my 8 a.m. paper knowing England have been
comprehensively trounced and eyeing a report about how brilliantly they
have played. True, I could get a more up-to-date picture from other sources
(radio, live TV action); but the point is that Ceefax is an electronic version
of print news, and can be systematically compared only with that.
I still remember the shock of first using Ceefax in the 1980s: used to
staring passively at pictures on a TV screen, I felt strange keying three-
digit codes to get endless things to read.4 Ceefax’s arrival in British homes
undermined the established passivity of the TV viewer in a shift equivalent
to the more-or-less concurrent spread of the VCR, which allowed the
viewer to accelerate or rewind the pictures on his or her screen, re-view
and pause them, controlling physically what the screen displayed. Yet the
VCR, a new machine connected up to a TV, in effect merely subjugated
television to an outside technology now responsible for its content. Ceefax,
though, came from within the set; accessed via the BBC’s channels, it was
a BBC product (and other broadcasters had their own versions). Also
significant was Ceefax’s nonsequentiality: as my description shows, users
accessed specific pages (targeted use) or, at best, consecutive pages within
specific sections, and otherwise leapt around the system at will. There was
no real reason either why sections should be arranged according to any
particular numbers: ITV’s Teletext placed sport at 400, local news at 330,
and current TV listings at 120, creating a confusion resolved by users tend-
ing to prefer one broadcaster’s system. If print newspapers differ in this,
it is not only because they tend to arrange their sections pretty much in a
standard order. Readers of print papers will go through them consecutively,
page by page, pausing as interest flickers but glimpsing (at least) all of them,
even the most uncongenial. They will rarely jump into a print paper at a
specific page, or vault dramatically around it; they may pick it up in order
to read, say, the op-eds, but will leaf through looking for them and not
consult page numbers (which may vary from day to day anyway), whereas
people like me will go directly and securely to Ceefax page 316. Unlike a
print paper, then, Ceefax yields no sense at all of a total news product to
be consulted or handled from one end to the other; even Ceefax addicts
only ever see perhaps 20 percent of its pages, and this is integral to its infor-
mational identity.
Who uses Ceefax? Anecdotal evidence suggests men5 (it is ideally suited
to sports news), while the letters page suggests users living in isolated rural
areas who hold pompously expressed, reactionary opinions. It might be
surmised here that Ceefax is yesterday’s technology, suited only to those
too old or too dull to adapt to newer, better forms. It’s true that Ceefax’s
self-containment is constricting compared with the Internet’s varied riches;
equally, Ceefax offers no more scope for user-generated content than print
newspapers did a century ago: its content is top-down. Much of Ceefax does
duplicate the BBC’s news Web site, and it may be that the form is doomed
to obsolescence. Yet Ceefax still has certain advantages. It is cheap—virtu-
ally free in itself, and a TV set costs much less than a computer—and sim-
pler and quicker to access and use, largely because of its very narrowness.
Both these points give Ceefax a potential reach which the Internet cannot
currently match, reinforced by the fact that, unlike the Web, a high number
of simultaneous Ceefax users does not slow the system down.
Above all, to a great extent Ceefax is the perfect proto-digimodernist
textual form: an evanescent, nonsequential textuality constantly being
made and remade anew, never settling, never receiving definitive shape.
Ceefax contains no records of the past: it’s an encyclopedia of right now
(it will give you status reports on flights due to land in the United Kingdom
in the next few minutes) with no other temporal dimensions. Using it
becomes hypnotic, addictive, trance-inducing, evacuating all sense of time;
its anonymity (only a tiny fraction of its pages give their authors’ names,
and there’s no space even for them to tell us anything about themselves)
means that all textual moments on Ceefax resemble all others. Ceefax has
no textual memory, no history; it’s an impersonal, amnesiac textuality; it is
designed to be used, and used right now, not remembered or discussed.
Indeed, analysis of it tends to focus on its engineering, overlooking its
content.6 Though integral to British life it is scarcely ever mentioned by
Britons, in the same way that people rarely talk about their washing machine
(except when it breaks down, and Ceefax doesn’t). In its country of origin
it is both omnipresent and ignored, a cultural marginalization inherent in
its very form: it is, above all, a technological textuality, remarkable for its
efficiency, rather than a content-centered textuality, interesting for what it
says. Proto-digimodernist to the end, it is soon to be largely phased out.

Whose Line is It Anyway?

A TV show host invites his four guests to come down on to the stage and
form a line facing the studio audience. They are going to make up a rap.
Asking the audience to shout suggestions for a subject, he selects "burgundy
handbags.” While an offstage pianist extemporizes a rhythm and melody,
each guest takes it in turns to step forward and improvise a verse or two on
this unlikely theme, to the audience’s delight.
There is something self-evidently digimodernist about Whose Line is It
Anyway?, a series of short-form improvised comedy sketches held together
by a parody of a game show, broadcast first in Britain by Channel 4 from
1988 to 1998 and then in the United States from 1999 to 2006. It isn’t just
the improvisation, by which the sketches emerge into created being before
our very eyes, and we witness the acts of inventing and performing text
condensed into the same movement, the same gesture. The text here has an
uncertainty, a haphazard quality typical of the digimodernist text. Equally
striking, though, is the role allocated to the audience, which becomes a
low-level author, contributing content to the sketches, shaping and orient-
ing them so that the performers appear to be inventing at its behest. In fact,
there is a ubiquitous fluidity of function here: the host, who starts and
stops the performers ruthlessly, acts as a sort of director whose cry of “cut”
is replaced by his buzzer or bell; although the performers must improvise
in accordance with the suggestions of the audience, they are so well versed
in the styles they mimic that their turns often slide into generic (and pre-
planned) parody, wresting authorial control back to themselves; the range
of games played is narrow, depriving the host of much positive input;
the host is also a spectator during the sketches, not setting foot on the
performers’ platform (in Britain); the audience appears suspiciously well
drilled when asked to contribute, coming up with the necessarily surreal
range of styles or the necessarily banal subjects with an attentiveness to
the show’s expectations that seems to make them actors for the night,
saying, not their appointed lines, but their appointed kinds of lines. Specta-
tor, author, performer, director—this is a show that multiplies and scatters
all the given theatrical roles.
However, try this game. Somebody is going to make up a song. An audi-
ence member suggests, as a subject, a TV set or a washing machine or a
toothbrush or coal (all shouted out in different shows); another audience
member, asked to offer a musical style, suggests Eurovision or music hall or
grunge or Edith Piaf or gospel or blues (ditto). And the song is belted out,
together with extemporized rhymes. Many of the games on Whose Line is
It Anyway?—very aptly, given its British transmission dates—invite some
easy postmodern strategies: they summon up all the cultural modes of the
preexisting Western tradition and flatten them into pastiche and parody.
All are exposed as dead languages, as empty quanta of recognizable codes
and conventions, fit only for deconstructing and knowing manipulation.
They’re interchangeable and depthless discourses, impossible as “authentic”
vehicles of expression, capable only of being skated over briefly and ambiv-
alently (with mockery and yet affection). Whose Line is It Anyway?, as its
title (a double pun on past TV shows7) suggests, is a backward-looking fes-
tival at the grave of Western culture, especially its theatrical and musical
inheritances; if Fredric Jameson had been a TV producer in 1988, he might
have thought this up. The game “film and theater styles” is especially vivid
in this regard: the host gives a pair of guests a banal situation (returning
a faulty purchase, a jealous husband discovering another man in his house,
a realtor showing a client around a property, an agent discovering a star on
the street) and elicits from the audience styles in which it is to be acted out:
pell-mell come German expressionism, Disney, Whitehall farce, kung-fu,
Greek tragedy, Restoration comedy, Star Trek, a fire brigade training movie,
Macbeth, a disaster movie, The Muppets, Coronation Street—and the host
buzzes from style to style so that they merge and mingle in a hyperreal play
of surface codes.
Moreover, this relentless flattening is inevitably antihierarchical, causing
a leveling of all performed styles. Certain games seem deliberately to seek
this effect, as when the host elicits three dull facts about a member of
the audience (that day she brushed her teeth, missed a train, and went to a
barbecue) and the four performers make up an opera out of this unsur-
passedly antidramatic material. Even in a slightly different form, where
they take, say, the act of making mashed potato or getting lost in a maze or
doing the dishes and transform it into the libretto for a Broadway musical,
the effect is still self-consciously deflating. It’s an antiheroic form of com-
edy based, ironically, on an encyclopedic knowledge of past cultural styles
(though not especially funny, the polymath John Sessions was ideally suited
to the format, and for a time appeared in every episode for this reason).
Yet this antiheroism was paradoxical too, since improvisation, when
done successfully, is a heroizing form: to stand before an audience without
a script and, off the top of your head, given cues with no forewarning
(a terrifying prospect for a performer), to reduce them to hysterical laugh-
ter—this would seem to elevate a performer to near-godlike status. Fans
of the show quickly came to revere (and/or fancy like crazy) such talents
as Greg Proops, Josie Lawrence, Mike McShane, Ryan Stiles, and Tony
Slattery. And yet this too stood on quicksand. For the only way to ensure
the scale of their achievement was to guarantee that the material was
improvised, created ex nihilo before us. Some games, like “world’s worst”
where the guests step forward in turns to nominate, for example, the world’s
worst person to be marooned on a desert island with or the world's worst
thing to say to the royal family, could too easily have had their answers
or content prepared in advance (the idea came from the host), and were
consequently less successful. The guarantee of improvisation turned out to
be neither more nor less than the guarantee of the possibility of failure, of
the performers trying to do things that really don’t come off, of them fall-
ing on their faces. In another game, also less successful, they bring along an
author, clearly coordinated by the producers to generate a (postmodern)
merging of registers and styles (Earl Hamner [creator of The Waltons],
Samuel Taylor Coleridge, the writer of air hostess training manuals, and
the Rosetta Stone), and are given by the audience a suitably trite and
improbable subject (Rambo’s sore toe) to pastiche along to. All of this
seemed preplanned; though funny, it felt less exciting, less “real,” less “edgy,”
less satisfying. The audience’s suggestion is swiftly subsumed into a well-
rehearsed generic parody. Similarly, when the audience suggests profound
philosophical issues (the existence of God) and the host gives pairs the
style of two six-year-olds or two American construction workers to discuss
them in, the registers clash with highly postmodern effectiveness, but the
sense of improvisation is undermined. And with that, the point of the show
goes up in smoke: as soon as it seems preplanned, you just wonder why
they didn’t hire better gag writers than that.
And so the show’s digimodernist textuality—its fluid movement of
creative roles, its use of the audience as low-level author, its present hap-
hazardness—could easily collapse into antitext, into lapse, silence, short-
fall. Its digimodernism was constantly held over a trap door dropping
into the dungeon of textual failure. Already constrained by the show’s
dominant textual postmodernism, the digimodernism toward which
Whose Line looked was finally only identical with the possibility of come-
dic miscarriage. As if in recognition of this, certain games were absurdly
hard: in “props,” for instance, a pair is given by the host a misshapen,
obscure item and must come up instantly with quick and surreal “uses” for
it, so quickly that sometimes their ideas just don’t work. In another, they
must talk to each other only in questions, about a host-suggested subject
so left-field (auditioning for a circus) it can only induce performing gaps
or creative bankruptcies. In yet another, one performer is made to recite
every other line from an unknown old play (more cultural backward-looks)
while another improvises (say) the booking of an airline ticket (the host’s
suggestion), all the time obliged to work toward a “natural” last line (e.g.,
“it’s not as small as it looks”) already randomly thrown out by the audience.
Again, this was so ferociously hard for the improviser that, as well as inspir-
ing moments of terrific comedy, it also produced performative failures
when she or he was clearly baffled, stumbling, struggling to come up with
anything at all. And now we seemed to see the performers truly, as career
entertainers momentarily evicted from their act. The regular game “party
quirks” similarly played genuine comedy off actorly bemusement so as to
guarantee, it seemed, the improvised nature of the material: one person
receives the other three at home, the latter having secrets hitherto sug-
gested by members of the audience (one hears voices, another is a cub scout
leader, or a compulsive liar, or an undertaker drumming up business, or a
character from Thunderbirds), and has to guess who they are. As the guests
surreally interact to the amusement of an audience in on their identities,
the party-giver frequently stands bewildered and helpless at the center of
the storm. Again, his or her inability to speak, this performer’s annihila-
tion, this actorly “death” is the surest sign that the material has been coau-
thored by the audience, that it really is being made up as it goes along, that
it genuinely is as haphazard and chancy as it appears. In short, the show’s
digimodernist textuality is fatally bound up with antitextuality, with tex-
tual self-annihilation.
Contrary perhaps to what I suggested at the outset of this chapter, the
show was fairly successful, notching up 136 episodes in Britain and 215 in
America. While this was largely due to its hip postmodernism, which
tended to overshadow the unstable digimodernist elements, the show’s
success as a format did not have the same effect on the careers of those
who starred in it.8 Why might this have been? One can only hypothesize;
perhaps it’s the case, though, that while its reliance on contemporary
postmodern content made the show a niche success, its problematic digi-
modernist undertow trapped its stars forever in that niche.

House

The Smiths’ song “Panic,” released in 1986 as house music spread across
Britain, urged listeners to “burn down the disco” and “hang the DJ” because
“the music they constantly play/It says nothing to me about my life.” Locked
inside the expressive-meaningful assumptions of white-boy rock music,
The Smiths could only look at house and see an inability to evoke everyday
experience, a failure of signification. As for postmodernism (from which
rock was essentially excluded), it misconstrued house by overemphasizing
its use of sampling, a 1980s technological innovation by means of which
elements of previous songs, like their bass lines or drum beats, could be
excised and redeployed in completely new settings. Consequently, a hit
record like M/A/R/R/S’ “Pump up the Volume” (1987) sounded fresh and
new (and wonderful) while being self-evidently made up of fragments of
other songs. For postmodernists, a more apt example of Barthes' notion of
text as a “tissue of quotations” or of Jameson’s pastiche could not have been
imagined. While the stitching together of bits of old songs to make new
ones was hardly revolutionary, what the sampler permitted, in a shift antic-
ipated by Walter Benjamin, was the cannibalization of recordings rather
than simply of songs, a process that yielded contemporary pieces from
sonic components clearly created at a range of past times. The sense of
hearing something utterly “now” formed at many different periods (found
in most kinds of hip-hop and dance, but never so purely as in house) was
uncanny, dislocating, evocative, and exciting. Yet postmodernism’s enthu-
siasm for sampling still fell into the trap of focusing on house music’s
ostensible signifying content.
Pace The Smiths, house music was created not to describe or express
something beyond itself, but to be used: to underpin and inflect and satu-
rate the experience of dancing. You wouldn’t play it as background music
at home; it needed to be blasted out at club- or rave-level volumes, not so
much to seduce the ears as to overwhelm the central nervous system. It was
a music that generated not an emotion (a psychological effect) but a total
physical state, one that approached the simplicity and sublimity of eupho-
ria. The symptom of this was house’s narcissistic lyrical focus on itself. Take
three hit songs, all by Technotronic: “This Beat is Technotronic” (1990),
“Pump up the Jam” (1989), and “Get Up (Before the Night is Over)” (1990).
All of the lyrics, which are interchangeable from one piece to another, con-
cern the creation, performance, and consumption of the song in which
they occur. This total suspension of the whole of reality beyond the song,
imposed by such textual narcissism, makes description or expression an
impossibility; there is nothing else to describe or express. As a result, the
lyrics (and titles too) are often limited to the grammatical mode of the
imperative (“get your booty on the floor tonight/Make my day”) or of
the performative, a statement true only in the act of articulating it (“now
when I’m down on the microphone stand/I give you all the dope lyrics”).
Such text about text is of course central to a postmodernism privileging
Linda Hutcheon’s “narcissistic narrative,” but from a proto-digimodernist
perspective what matters more is that house’s haemorrhaging out of all
other space and time reduces its textual content solely to its giving and tak-
ing, its making and receiving, in short, its usage. Music had always been
danced to, of course, or “used” in other ways as a means to an end. But
its content had remained extratextually referential, even in disco, even in
Kraftwerk, while house’s self-absorption was absolute. Equally, the “use” of
house was not instrumental or free: it overpowered the self, extirpating all
thought, feeling, and behavior not utterly subjugated to it. Its use created,
quite simply, a sense of euphoric weightlessness and nonattachment, of
freedom and exultance, and so the excision of what lay beyond was
integral to it. Such lyrics, and indeed the process of sampling, worked to
achieve an impersonality, an experiential liberation from the confines of
the self in a state of collective transcendence, of ecstasy-induced oneness
with innumerable others. Hence the necessarily communal nature of its
reception or experiencing; hearing it at home yielded a weak echo of
its genuine potency, tangible only when played very loudly to large groups
of people in clubs or at raves.
In such a context, house evolved several distinctively proto-digimodernist
features. First, the songs blurred into one another, their beginnings and
endings unclear, while varying versions of themselves multiplied dizzy-
ingly; house broke the organic song text into a proliferating, haphazard
textuality. Second, house tended to eradicate authorship. This music felt
anonymous and autonomous, and its makers, who were half-unknown
even at the height of their success, rarely achieved the status of “artists,”
people whose work you might follow over time (only DJs achieved that).
Third, house was ephemeral. Intense, vital, and wonderfully exciting in
their time, pieces became almost instantly outmoded. In general, house,
which foregrounded its own use or appropriation, which privileged the
moment, circumstances, and impact of its overwhelming reception, was
swiftly used up, exhausted, discarded. Histories of the phenomenon, like
Sean Bidder’s Pump up the Volume, describe a succession of human experi-
ences, not the development of a genre.9 Fourth, house was as international
as the Internet would later be, both under the automatic aegis of a version
of the English language. Songs came from Belgium, Chicago, England,
Spain, or Italy, but most importantly sounded as though they came from
anywhere and nowhere. (The title of “Ride on Time” (1989), by the three
Italians behind Black Box, is an EFL student’s mishearing of the American
pronunciation of “right on time.”) Once again, all external temporal and
spatial specificity and content were ruthlessly cut away; the songs existed,
in a forerunner of cyberspace, in a kind of autonomous musicspace, a float-
ing realm of sound and feeling, exalted, narcissistic, sleek, and euphoric.
House was the textuality of the suspension of the self and the other.

B. S. Johnson’s The Unfortunates

If you live, as I do, in Oxfordshire in England and you want to read
B. S. Johnson's novel The Unfortunates (1969), the best way is to reserve the
copy they hold in the public library service’s headquarters. When you
collect it you are given a small rectangular box with author and title names
on the front beside the words “a novel,” and a kind of purple blotchiness
spreading across it (actually a photograph of cancer cells). Open the box
and you find on the left the warning: “Fiction reserve. This book is to be
returned to Headquarters and the fiction reserve. It is not to be added to
stock at any branch.” What is this impossible novel? In what does its impos-
sibility lie?
Within the box is a wrapper holding sections of stitched papers. A note
reads:

This novel has twenty-seven sections, temporarily held together by
a removable wrapper. Apart from the first and last sections (which
are marked as such) the other twenty-five sections are intended to
be read in random order. If readers prefer not to accept the random
order in which they receive the novel, then they may re-arrange the
sections into any other random order before reading.10

You remove the sections from the wrapper; you find and read “First,” three
pages long. It seems to be the jumbled interior monologue of a football
reporter sent one Saturday to cover a match and arriving in a city he
realizes he knows, triggering memories of a Tony and his “disintegration,”
of a June (Tony’s wife?) and of a Wendy (the reporter’s ex-girlfriend?).11 He
immediately admits: “The mind circles, at random, does not remember,
from one moment to another, other things interpose themselves,” and he
contrasts Tony’s “efficient, tidy” mind with his own, “random, the circuit-
breakers falling at hazard, tripped equally by association and non-associa-
tion, repetition.”12 Thus forewarned, you take the next section in the pile,
or another, but not the one titled “Last” (though unnamed, the other
twenty-five have a distinctive abstract pattern at their head, perhaps to aid
the printers).
At this point I can’t, of course, tell you what I read (I would if you were
here). To record and therefore privilege in print a certain pathway through
the novel is, clearly, to betray and disfigure its very meaning; you must go
through it along a certain pathway, and all are equal. Anyway, maybe I’ve
read it ten times by now, and along ten different paths: which would I write
about? Sections succeed each other, perhaps a page long, maybe eight, and
the style is as the reporter suggested, a circuitous wandering around mem-
ories and perceptions with sentences that wind and turn in and out and
back on themselves. The memories go over his past with Tony, with Wendy,
and you read on along the labyrinth’s route you have chosen yourself to
follow, looking for answers to the questions that emerge about these people
and what happened to them. You realize soon that the randomness of the
sequence of reading within a limit (the wrapper’s fixed contents) mirrors
the obsessive, trapped winding of the reporter’s thoughts and memories
deprived of chronological objectivity. The method of producing the book
enforces a demonstration of and a disquisition on the processes of remem-
bering, which are revealed as associative, chaotic, emotional, and nonse-
quential. Tony is dead; Wendy has been supplanted; they were all students
about ten years ago; they went to pubs and restaurants and visited each
other. Bit by bit, as in any novel, you piece together information about
them; uncannily, the order in which you do this here doubtless hasn’t been
and never will be experienced by anybody else. In the present of the report-
er’s thoughts he moves about the city, and random sequencing means
you read bits from after the match he saw before bits “set” earlier; and
so the jumbling of his memories produced by the text’s structuration is
reproduced in the chaoticization of your reading about them. The voice
sometimes echoes the thought-processes of Leopold Bloom, another
newspaperman-outsider in a provincial city (keen on horses, not soccer),
and sometimes suggests the traumatized recollections, circular and quest-
ing, of Graham Swift’s narrator in Waterland; like the former, he digresses,
tiptoes past clichés, notes everything around him, and like the latter his
present is lonely and his memories intolerably painful. The novel is as trick-
ily experimental as Joyce and as turbulently male as Conrad, as readably
contemporary as The Information and as heartbreakingly “literary” as Flau-
bert’s Parrot. Pitched midway between modernism and postmodernism,
The Unfortunates is a forerunner of a third textual axis.
It’s a novel about friendship, loss, love, guilt, ageing, masculinity,
memory, about place; it’s a story that packs an emotional punch; it’s not
some sterile game. Past and present interweave via a sardonic pun (free
association/association football) and ironic counterpoint (soccer involves
the homosocial bonding of young men too). The intellectual effect of the
structuration is to dramatize the movements of memory and perception,
past and present brain activities, when semiunmoored by objective chro-
nology. But the emotional effect comes up on you too: it makes the report-
er’s grief feel labyrinthine, the way grief feels to all of us; it exposes mourning
and loss as an unchartable psychological prison from whose confines
you feel you will never emerge. And, paradoxically, you forget, as you read
on, finishing a section, choosing its successor (perhaps striving for ran-
domness and digging deep in the pile, perhaps just taking the next one,
perhaps selecting the shortest one left or the longest for your own reasons),
the “strangeness” of the structuring principle and the author’s refusal of
sequentiality. True, the story has its opacities, its launches in medias res, its
allusions clarified only later, its jumps back and forth in time; but these
seem only confirmations of its literary time and place, not so different
from, say, Greene or Durrell. In practice, the structuring principle is sub-
sumed into the experience of reading: it continues to be felt as the intellec-
tual and emotional effects that I’ve tried to describe, but very soon, on the
second reading session, it no longer feels odd.
It isn’t, of course, completely nonsequential or reader-generated: though
s/he chooses the order of sections at random, the author has invested each
of them with its own unshakably sequential prose. But Johnson under-
mines sequentiality on every level he can. The reader’s sectional nonse-
quentiality is reproduced by a tendency to free associate or drift randomly
within each section, from paragraph to paragraph or from sentence to
sentence (some separated by hiatuses), and even within sentences, which
return on themselves or break up and regroup and restart. Your random
reading is finally just your contribution to an overall literary project fore-
grounding irrational consecutiveness, multilinear form, and internal non-
sequentiality; it’s integrated, part of a whole.
By the reading’s second half (counted by page quantity; the novel itself
has no such thing, of course), the parallel shifts, from finding your way
haphazardly through the labyrinth of somebody else’s mental processes to
the pressure of inevitability you feel, sensing the gaps in the chronology
and lifting the next section, which resembles the latter stages of completing
a jigsaw puzzle. Except that you don’t choose the next “piece” because
you know it fits; you pick it up and it slots itself in as you read it. But
whichever image is employed, the proto-digimodernism of The Unfortu-
nates is, I think, clear: whereas a traditional novel offers a set of words in
a particular order, a materially fixed text, Johnson proposes a set of words
to be placed in one of the 1.551121 × 10²⁵ possible orders which the reader
must select him or herself. In other words, the sequencing of the novel,
traditionally the author’s sole responsibility, here becomes largely the con-
sequence of a physical act necessarily carried out by the “reader.”
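The arithmetic behind that figure may be worth spelling out: with "First"
and "Last" fixed in place, the twenty-five unmarked sections can be shuffled
freely, so the number of possible orderings is simply the factorial of
twenty-five:

25! = 25 × 24 × 23 × ⋯ × 2 × 1 ≈ 1.551121 × 10²⁵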
Although Johnson can be seen, like Godard, as a late modernist, the
impossibility of The Unfortunates does not lie, as it did for The Rainbow
(also partly set in Nottingham), in its content; it was felt not by censors
whose suppression could be later reversed but by professionals in the
textual field, and it is still palpable today. The publishers Secker & Warburg,
with whom Johnson was contracted, were initially unenthusiastic about
the project (“it was going to be hellishly expensive to put into practice”)
and, on delivery of the manuscript, “completely nonplussed by [its] daunt-
ing practicalities.”13 Librarians also disliked it, finding that borrowers
would appropriate individual sections; to prevent this, some libraries
bound the sections together, destroying the book’s purpose. Scholars too
have struggled with its refusal of a materially set text, its extreme multilin-
earity (not nonlinearity); scholarship has always regarded indeterminate
textual sequencing simply as a problem needing to be solved, as the history
of the disputes over the “correct” order of the chapters making up Kafka’s
The Trial illustrates.14
As for the book’s contemporary reviewers, they suffocated it, politely
acknowledging its innovations and mildly damning its content.15 Few
national literary establishments can have been as hidebound, reactionary,
and philistine as Britain’s in the 1960s and 70s, as Johnson himself fre-
quently and stridently charged. Yet it can be countered that every official
culture is dominated by a narrow, middle-aged conservatism, and that
what marked Britain’s out in those dark days was that its young and left-
liberal wings had abandoned any belief in homegrown artistic innovation
or excellence, preferring instead to worship what came from France and
America. A London art student in a 1963 novel by John Fowles talks about
feeling “there’s so little hope in England that you have to turn to Paris, or
somewhere abroad,” while a young bohemian in a 1975 novel by Martin
Amis calls the idea of reading an English novel “outré . . . like going to
bed in pyjamas.”16 Johnson’s natural demographic looked the other way.
Widespread indifference to his work, among other crises, culminated in
Johnson’s suicide in 1973 at the age of forty. His biographer Jonathan
Coe records that, only a few months before the end, he was “devastated to
learn . . . that without consulting him, Secker had pulped all the remaining
unsold copies of The Unfortunates, the novel that was, as a physical object,
by far the most dear to him . . . Gone. All destroyed.”17 Impossible.
Its structuring principle was not entirely new: Coe establishes that
Johnson was aware of the appearance in New York in 1963 of the English
translation of Marc Saporta’s Composition No. 1, a novel made up solely of
single pages contained in a box.18 I chose not to analyze this because I could
not find a copy; it was never published in the United Kingdom. The proto-
digimodernist text, inescapably a creature of the margins, runs the risk, as
with happenings, of disappearing forever off the cultural radar: if Johnson’s
novel went almost unread for thirty years (and, despite reissue in 1999,
remains obscure today), then Saporta’s forerunner, at least in my experi-
ence, has vanished into the textual night. These two went furthest, it seems
to me, toward a proto-digimodernist novel in which the reader is invited or
compelled to sequence literary materials created previously by the author.
By contrast, when The Unfortunates was published in Hungary in 1973
it came

[n]ot . . . in a box: the economics of Hungarian publishing would not
allow that . . . [but as] a regular, bound paperback, with each chapter
prefaced by a different printer’s symbol. These same symbols were
reproduced on a page at the end of the book: Johnson then invited
Hungarian readers to tear this page out, cut around each symbol,
throw them all in a hat and bring them out one by one—randomly19

The "regular" format clearly works against the revolution in textuality
initially wrought by Johnson; indeed, Coe is skeptical whether many
Hungarians followed his instructions (nevertheless, the translation sold
9,000 copies, far more than the original had in Britain). In this form it bears
some resemblance to Julio Cortázar’s novel Rayuela (1963), translated into
English as Hopscotch (1966), whose prefatory Author’s Note reads:

In its own way, this book consists of many books, but two books
above all. The reader is invited to choose between these two
possibilities:
The first can be read in a normal fashion and it ends with chapter
56, at the close of which there are three garish little stars which stand
for the words The End. Consequently, the reader may ignore what
follows with a clean conscience.
The second can be read by beginning with chapter 73 and then
following the sequence indicated at the end of each chapter. In case of
confusion or forgetfulness, one need only consult the following list:
73–1–2–116–3–84 (etc.)20

The novel contains 155 chapters in all, and for its longer version chapters
57 to 155 have been shuffled and then scattered among numbers 1 to 56
(55 is not reused). This gives the reader very little actual scope for textual
determination, but conversely Hopscotch’s universe is more fictively auton-
omous than Johnson’s aestheticized memoir. Note too the insertion into
each of these novels of a sort of user’s manual, made necessary by their
authors’ attempts to break with traditional textual form. The Unfortunates
especially pulls the processes governing the publication, dissemination,
and reception of texts inside out; it is intolerable by everything that, since
Gutenberg, has regulated the material existence of texts.
From Cortázar we come, finally, to the Choose Your Own Adventure
series of children’s books, the first of which, Edward Packard’s The Cave of
Time (1979), is prefaced (inevitably) with a user’s note: “Warning!!!! Do
not read this book straight through from beginning to end!”21 Written in
the second person, the story itself starts by briefly recounting a journey
and the discovery of a cave, and after two pages “you” are invited to choose
what “you” want to do next. If you decide to go home, you turn to page 4,
if not, to page 5; having plumped (say) to go home, another section of story
ends by presenting you with two more options: should you prefer to return
to the cave now, you turn to page 10. Here, the text will describe two
tunnels: selecting the left one brings you to page 20, taking the right leads
to page 61, quitting the cave altogether sends you to page 21; and so on.
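Packard's mechanics amount, in effect, to a small branching data structure:
each page pairs a passage with pointers to other pages, and reading is a
traversal. A minimal sketch in Python might look as follows; the page
numbers are those just cited, while the passage texts, choice labels, and
any destinations the description leaves open are invented stand-ins, not
Packard's.

# Each page maps to a passage plus a list of (choice label, next page)
# pointers; pages with no pointers are endings. Texts and labels here are
# illustrative stand-ins, not Packard's.
pages = {
    1:  ("You journey into the hills and discover a cave.",
         [("Go home", 4), ("Press on", 5)]),
    4:  ("At home, the cave preys on your mind.",
         [("Return to the cave", 10), ("Forget it ever existed", 21)]),
    5:  ("You step into the cave's dark mouth.",
         [("Go deeper", 10), ("Turn back", 21)]),
    10: ("Two tunnels open before you.",
         [("Take the left tunnel", 20), ("Take the right tunnel", 61),
          ("Quit the cave altogether", 21)]),
    20: ("THE END: lasting contentment.", []),
    21: ("THE END: an ordinary life, the cave unexplored.", []),
    61: ("THE END: 'your' death.", []),
}

def read(page=1):
    """Traverse the book: the reader, not the author, sequences the text."""
    while True:
        passage, choices = pages[page]
        print(passage)
        if not choices:  # no options left: one of the separate conclusions
            return
        for number, (label, destination) in enumerate(choices, start=1):
            print(f"  {number}. {label} (turn to page {destination})")
        page = choices[int(input("Choose: ")) - 1][1]

if __name__ == "__main__":
    read()

A page with no outgoing choices is one of the endings; the "story" a given
reader experiences is simply one path through this little graph.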
These books are multilinear narrative experiences, the reader plotting
and choosing what appears to be his or her own story by these successive
decisions, and progressing toward one of up to forty separate conclusions.
Adjacent pages look almost surrealistically incongruent; dénouements
range from “your” death to lasting contentment; and clearly the “same”
book could be read several times (though boredom would eventually pre-
vail). Unlike The Unfortunates, the text is therefore consumed incompletely,
and the narrative content is actually conventional; as in Hopscotch, the
reader jumps around the consecutive pages of a bound book.22 Fascinating
in themselves, all three can be seen as precursors in different ways of hyper-
text and online interactive fiction, a form that, as I argue in Chapter 6,
digimodernism has helped render obsolete before its time, the Betamax of
textuality.

Pantomime

No book on how to stage a pantomime (a staple of British amateur theatri-
cal companies) is without a section or chapter on “audience participation.”
Among the forms this can take, as described by Gill Davies (to whom I am
indebted here), are the following. A member of the cast may come in front
of the curtain before it rises and get the audience to join in a song or a
game, or to “practice” the cheering, clapping or booing they will perform
later on. Before the second half starts s/he may also conduct a raffle, hand-
ing the winning spectators their prizes. During the show itself, a member
of the cast may come out while scenery or costumes are being lengthily
changed behind her/him and discuss the story, or, Davies suggests, hand
out newspapers and award a prize for the best hat made from them, or,
again, invite people up to take part in a game that makes them look
slightly and amusingly ridiculous. Then there are the standard pantomime
commentaries voiced during the action: perhaps encouraged by boards
held up by members of the crew, the audience will boo or hiss or denounce
archaically the villain (“Shame!” “Scoundrel!”), and give the hero(ine)
aahs of sympathy and cheers when appropriate. More traditional still are
the exchanges whereby an actor and the audience contradict each other
over events happening on stage: “Oh no he isn’t!” “Oh yes he is!,” the actor
hammily inciting the audience’s ever more committed, chorused retort.
Moreover:

Prompted by such lines as “If that big spider (or the gorilla or the
ghost) arrives, you will tell me, won’t you?” the children will shout
out as the spider arrives, unseen by the protagonist in this scene. The
spider can hide in various places, changing swiftly from one place to
another as the hero desperately enquires: “Where? Over here? No,
he’s not . . . Where is he, you say? Over here? No, he’s not . . . You’re
kidding me, aren’t you? He’s not here at all, is he?” and so on, until the
creature finally stands right behind him, moving in unison, to remain
hidden every time the hero spins around until the youngsters (and a
good many adults) are yelling hysterically: “He’s behind you. Look
out behind you!”23

During the action audience members can also be invited up on stage and
involved in set-pieces (e.g., by judging an Ugly Sisters beauty contest, or
assisting a magician or wizard). Conversely, items can be thrown into the
audience, like soft candies or supposed water from a bucket that turns
out to be confetti, and cast members can pass through the audience from
the back, interacting and mingling with them (e.g., the dame seeking a
partner for the ball). There is almost no end to the variety of theatrical
“business” by which Davies can imagine the audience being drawn into the
production. She concludes: “One way or another, the audience should feel
that they are part of the pantomime. It is not something just for the players
to create in isolation . . . Give the audience a chance to contribute . . .
Throughout, they should feel that they are welcome to join in—to sing,
shout, cheer and comment—and that they are a vital element, even part of
the story at times.”24
However, this is all more complex than it may at first look: precise
definition of what is going on here, and how, problematizes words like
“participation” and “contribution,” “joining in” and “being part of.” For a
start, many of the moments when the audience is interacting with the
cast clearly lie outside the story, extrinsic to the narrative, occurring when
the production has paused (at scene changes) or is yet to start. They irrupt
into the entertainment’s margins when it takes a breather without quite
admitting to it. This separation from the genuine action is signaled spa-
tially (being set outside a lowered stage curtain) and textually: “Many of
the audience’s lines are implicit in the script, though not actually written
down.”25 To a degree they are excluded by their incommensurable multi-
plicity: “The question ‘Which way did they go?’ will create a furor of
instructions and pointing fingers.”26 But they can be highly predictable too:
“A character creeping up on another will set off a chorus of ‘He’s behind
you!’”27 Crucially, the lines delivered by the audience here, while making
them contributors to the drama (understood as the totality of the words
voiced and gestures made in a performance), do not constitute them as
a character. The audience are shouting or gesticulating as themselves,
undisguised, truly not fictively, and yet—uniquely—under license from the
production to behave temporarily as though able to enter the piece and
interact with fictional characters (by encouraging, condemning, helpfully
offering information, etc.). Almost all audience participation in panto-
mime requires the veiled suspension of elements of the production, and
appears beyond its space, time, narrative, script, and dramatis personae.
This does not mean that the audience is deceived, but neither is it genu-
inely involved. What distinguishes pantomime is not that the audience
contributes to it, but that it creates moments and spaces where it suspends
itself, and modes by which the wall separating reality and fiction is broken
down (e.g., the dame reads birthday greetings to children in the hall).
If the audience contributes anything, it is clearly not in an authorial
guise, since they say and do what they are told, and invent little or nothing
(hecklers notwithstanding). Deprived of a role, they are not, unlike full
members of the cast, directed and prompted by the production crew
(the Director or the Prompt), but by other actors; the audience are reduced
then to temporary subactors in the interstices of the production. They are
given the lowest of all speaking parts: they reply to greetings, give mono-
syllabic answers, offer obvious and moralistic commentary. Worse still,
their treatment by the cast results in their “acting” becoming necessarily
shoddy, constrained either to frenzied exaggeration (the demented screams
of “he’s behind you!” due to a character’s amazing obtuseness) or to the
blankly wooden (when brought up on stage, due to the passivity of the
function they are made to fulfill). The audience’s technical ineptitude as
an actor, a line reader or stage presence is deliberately engineered by the
production and adds to the general fun. However, the consequence of
all this is that the question of what the audience “contributes to” or how it
“participates in” a pantomime is more ambiguous than it is often made to
appear.
Pantomime tends to elicit a kind of easygoing contempt or loving dep-
recation. This may in part be due to the role it gives its audience, although,
conversely, it may as easily give that role because it is already despised:
much about it—the chaos of gender and sex, the corny jokes, the conscious
overacting, the anachronistic characterization, the way that peripherals
(e.g., costumes) never alter while essentials (e.g., dialogue) get revamped
every year—seems indefensible. In short, audience participation (if we call
it that) may not be enough to account for pantomime’s lowly cultural
esteem, but it reinforces it.
A brief survey of other performance arts employing similar modes to
these confirms this last point. The interaction of cabaret artists or stand-up
comics with the audience members they “interview” or who barrack them,
the function of volunteers brought on stage by magicians to undergo their
feats, and other such devices show the same patterns as those detectable in
pantomime’s use of audience participation. In each case the “spectator”
finds it impossible actually to originate text or approach equality with
the performer or to influence in unforeseen ways the development of the
entertainment. This is not to condemn such entertainments: they have
been satisfying audiences for centuries, and the exchange of cash enabling
them is predicated on the audience’s conviction that it makes a much worse
author, director, or performer than those it is paying. Nevertheless, the
performing arts that feature audience participation suffer ineluctably from
diminished sociocultural standing compared to those that don’t, specifi-
cally traditional theater and the concert; the separation of audience and
action is synonymous with cultural prestige.

***
Beyond these stand a long line of texts and fragments of texts: John
Krizanc’s play Tamara (1981), where ten actors play out simultaneous
scenes in the various rooms of a large house—the audience mingle among
them, have to choose which room to go to, cannot experience the whole
play, get spoken to by the actors, have to decide whether to follow one actor
throughout or move among them, and so doing make up for themselves
their sense of the text; the improvised play Tony ‘n’ Tina’s Wedding where
the audience is treated as guests at the actors' nuptials; Laurence Sterne's
invitation to the reader to create a page of Tristram Shandy depicting the
beauty of widow Wadman, “as like your mistress as you can—as unlike
your wife as your conscience will let you—’tis all one to me”;28 and myriad
TV programs featuring the public as game-show contestant, entrant in
competitions, phone-in caller, vox pop interviewee, and more; and doubt-
less you can think of others—though they will be marginal effects and mere
curiosities, eccentric offers no one ever accepted, or corralled and minimal
roles within a regulated textual environment. The margins, the margins.
So what have we here (it’s easy to imagine a hostile observer thinking,
ending this chapter)? A subform of photography and film, which degrades
and dehumanizes all who have contact with it, populated, it would seem,
by male sleazebags and slimeballs and women with mental health issues;
an obsolete source of shards of news; comedy for smug Generation X slack-
ers, which trades on its misfires; mindless and thoughtless beat repetition;
the failed pseudonovel of an egomaniac suicide; the lowest form of theater
known to man.
My feelings here are divided. Industrial pornography, like an unseen
planet, has exerted a powerful influence on the concurrent development
of mainstream cinema worldwide; Ceefax was extraordinary in its day,
and its eclipse doesn’t obliterate its historical importance; improvisation,
brilliantly funny at best, resembles all comedy in not hitting every single
mark; The Unfortunates belongs to our era, in which Johnson has been the
subject of a surge of interest; and pantomime keeps British theater, perhaps
the world’s most dynamic and technically excellent, financially afloat.
And yet all are marginal. Some were pushed to the side by their
obscenity (ob-scene may mean literally “off the stage”),29 some by their
revolutionary technique or cultural demands, some by their suitability for
children—various forms of marginality. There is nevertheless, and this will
become ever more of an issue as this book goes on, a problematic of quality
about the digimodernist text. How good is it? How good can it be? After
postmodernism interrogated the assumptions implicit in the notion of the
“great work of art,” digimodernism struggles with its possibility (whether it
is capable of greatness). Were these texts marginalized because they were
proto-digimodernist? Or is (proto-) digimodernism a form of artistic
mediocrity destined inexorably for cultural denigration? In short, there are
three alternatives: (1) its lack of prestige is socially determined (because of
inherited prejudices about what art should be, for which it is a “scandal”),
and so reversible; (2) its lack of prestige is aesthetically determined (because
texts that function in this way cannot achieve greatness), and so irrevers-
ible; (3) its lack of prestige is historico-textually determined (because until
recently texts have scarcely been made like this, except as eccentricity or
curiosity, but now they are, and will be increasingly), and so all bets are off.
The hostile observer might prefer the second view. The case of the long
disparagement of jazz improvisation—the form of proto-digimodernism
whose absence from this chapter I regret most—suggests the first. To me,
the third is the most interesting.
4
Digimodernism and Web 2.0

Polyphonic plenitude, the searching out and affirmation of the plurality of different voices,
became the leading and defining principle of postmodernism’s cultural politics. Just as Goethe
is said to have died with the Enlightenment slogan “Mehr Licht!” (“More Light!”) on his lips, so
at one point one might have imagined postmodernism going ungently into its goodnight
uttering the defiant cry, “More Voices!”
Steven Connor, 20041

In an important sense, of course, Web 2.0 doesn’t exist. (The term belongs
in the antilexicon.) Much of the technology underpinning it has been in
place since the Web’s inception, and some of its most emblematic examples
are almost as old; Tim Berners-Lee is surely right to argue that its common
meaning “was what the Web was supposed to be all along.”2 Well known
since a conference in 2004, and despite suffering from hype—The Economist,
mindful of the dotcom mania, has referred sardonically to “Bubble 2.0”—
the accepted sense of the term is nevertheless a convenient textual category:
it denotes the written and visual productivity and the collaboration of
Internet users in a context of reciprocity and interaction, encompassing,
for instance, “wikis, blogs, social-networking, open-source, open-content,
file-sharing [and] peer-production.”3 Moving beyond read-only information-
source Web sites, the textuality of Web 2.0 sites notably favors (in the
jargon) “user participation” and “dynamic content.” Moreover, “Web 2.0
also includes a social element where users generate and distribute content,
often with freedom to share and re-use.”4 The forms of Web 2.0 are the
most globally important cultural development of the twenty-first century
so far, and they lie at the heart of digimodernism as we currently know it.

101
102 DIGIMODERNISM

Although all Internet use is to some extent digimodernist, the latter
is not reducible to the former, but stretches right across contemporary
culture. Digimodernism doubtless comprises primary and secondary ele-
ments, causative and symptomatic factors, and central and peripheral
areas, and Web 2.0 belongs to the first of each of these binaries (hence its
getting a whole chapter to itself). This is largely a sociological point, and a
reflection on its relationships with the older cultural forms explored later;
it shouldn’t be taken necessarily as some intellectual or aesthetic supremacy.
This chapter faces its own challenges. If the platforms it analyzes are
already the subjects of a continual torrent of commentary from newspapers
and magazines, it’s partly because their perpetual evolution maps on to
those outlets’ immediacy. The time lags between a book being written and
published and read would seem to condemn anything I can say here to
almost immediate obsolescence. It’s true that books about the Internet
appear constantly, but almost all fall into one of two categories. First, there
are the user guides that, written mostly in the mode of advice and the sec-
ond person, offer instruction in how to set up and maintain your blog, how
to get the most from YouTube or to develop your avatar on Second Life.
Such a discourse is predicated on the sense that Web 2.0 is something you
use; not a text that somebody else writes or films or you read or watch, but
a machinery that you, and only you, control and direct; and not a signify-
ing content concerning something else, like a narrative, but a physical act
that you yourself do and that is consumed in its own duration. Second, and
more interestingly, there are the books predicated on the notion that Web
2.0 is something you exploit. Separated from its pejorative human context,
the concept of exploitation is almost identical with that of usage: what’s
emphasized here is the assumption that Web 2.0 is to be made use of, an
opportunity to explore, a possibility to benefit from. If by using Web 2.0
you open a door in your life to something new, then by exploiting it you
take advantage of what you find on the other side of the portal. Conse-
quently, exploitation books are just more sophisticated versions of usage
ones; assuming knowledge of the latter, the former move beyond them.
They isolate and examine the scope for objective personal advance con-
tained in Web 2.0, and so focus on how it works not so much to do even
more of it, but to profit from riding its social wave; usage books restrict
themselves to subjective human advance, such as new personal pleasure.
In this common pragmatic spirit, exploitation books are often oriented
toward business practice: how to make money from Web 2.0; how to con-
sume through it; how Web 2.0 is going to reshape the business landscape;
how to survive it and thrive. For the tone of private advice, they substitute
the language of management consultancy.
Two examples of this genre are David Jennings’s Net, Blogs and Rock ’n’
Roll (2007) and Don Tapscott and Anthony D. Williams’ Wikinomics: How
Mass Collaboration Changes Everything (2006, 2008). Jennings explores the
nature of Web 2.0 and suggests how it may evolve. When he evokes the rise
to prominence of Sandi Thom’s music and the movie Snakes on a Plane
through Internet viral marketing, he is interested in how a product appeared
on the market and was received or appropriated by its consumers; he isn’t
concerned with how good those texts are, or what they might mean; he has
no conception of them as texts, only as objects of publicity and consump-
tion.5 Tapscott and Williams advance the view that the particular organiza-
tion of Wikipedia is the way that companies in future would be best advised
to operate: this is Web 2.0 as the model of microeconomic success. They
assume that Wikipedia is a success because it is used (read and written)
by so many people: it’s a consumerist system of values, whereby the widely
bought product is automatically to be emulated. By a sleight of hand they
then see this commodity as the prototype also of the future manufacturer
of commodities.
I don’t want to reject these kinds of writing entirely, though I suspect
that the latter claim too much too quickly and won’t stand the test of time;
they are overly marked by the spirit of advertising integral to business.
It’s telling indeed that Web 2.0 lends itself immediately and most naturally
to a discourse of practical and physical use. But, while highlighting this
point, I can’t see that these are the only ways you can talk about Web 2.0.
It can also be read textually. Many of these platforms have a hard-copy
precursor: the diary (blogs), the newspaper letters page (message boards),
the script for a play (chat rooms),6 the encyclopedia (Wikipedia). At a sec-
ond degree, YouTube resembles a festival of short films or documentaries.
Social networking sites, slightly more problematically, adapt an earlier
electronic platform, the personal Web page, rather than a pre-Web form
of text, but this is not finally prohibitive of textual analysis. And if Web 2.0
can, on the whole, be assimilated to forms universally considered texts,
then they are texts themselves (of a sort) and can be studied—as I’m going
to here, in a way—textually.
This poses, again, its own difficulty. What can textual analysis tell us
that is not already obvious to all? It isn’t just these platforms’ fame; it’s
their accessibility; above all, it’s their ease of use, once more, by which so
many people have gotten to know them intimately, from the inside out. The
critic is a professional reader; Web 2.0 throws up the writer/reader, a new
kind of textual knowledge and familiarity. A bigger problem still derives
from the necessary incompleteness of the Web 2.0 text. The cultural critic
typically watches entire films, gazes at completed paintings, reads finished
books, and consequently treats them in their totality. Web 2.0 texts, how-
ever, never come to a conclusion. They may stop, or be deleted, or fall out
of favor (and off search engine results pages into oblivion), but they are
not rounded off, not shaped into a sense either of organic coherence or
of deliberate open-endedness. Items within them, like blog entries, may
have this internal structure, but they fit into an overarching onwardness.
Textual analysis of Web 2.0 must therefore follow the text in time: it must
go with it as it develops, seemingly endlessly, over a lapse of weeks, months,
or years. This distinguishes such analysis from that of any pre- or extra-
digimodernist text: it critiques now what will soon be different. Scholars
do frequently shift their attention from a finished text to its manuscripts
or preliminary sketches, but the interest of these stems precisely from their
final incorporation within a supremely complete textual end-product. On
Web 2.0, though, each version of the text in time is the equal of every other;
similarly, each gives an initial impression of finishedness, dispelled at
varying speeds.
Equally trickily, while the forms to be studied have been chosen for me
(they’re sociocultural powerhouses), the practices of digimodernist analy-
sis that they demand don’t exist yet. In response to this and the other issues,
I’m going to look at these forms as examples of such practice and such
analysis. Each will be read in terms of a theme running through digimod-
ernism as a whole. This will also have the beneficial effect of tying my com-
ments into the next two chapters: finally, I see Web 2.0 as no more than
a subform, albeit the most important, of a wider cultural shift, a context
generally missing so far from discourse about it.
I should make clear from the outset that I come neither to bury nor
praise Web 2.0. Culturally it’s evident that much of what is expressed
through it is ignorant, talentless, banal, egomaniacal, tasteless, or hateful;
textually, though, I can’t but feel that the avenues it opens up for expression
are wildly exciting, unformed, up for grabs, whatever we choose to make
them. This disparity is central to the spirit of the times: ours is an era more
interested in cultural hardware, the means by which communication occurs
(iPods, file-sharing, downloads, cell phones) than the content of cultural
software (films, music, etc.); it’s the exact opposite of high postmodernism.
Given the speed and unpredictability of hardware innovation, this bias is
understandable. It won’t last forever, though; and if there is a Web 3.0 then
this technologist supremacy will have to yield ground to the textual.
Also in need of reformulation will be Web 2.0’s pseudopolitics. These
platforms do not with any ease produce the “antielitist” and “democratic”
impulses vaunted by some of their supporters. Democracy presupposes
education (this is why children are disenfranchised), but Web 2.0 offers its
privileges equally to the unschooled, the fanatical, and the superstitious;
in fact, it’s closer to populism, that gray area between democracy and
fascism. Its new gatekeepers—the ubiquitous “moderators,” Wikipedia’s
“administrators”—are as powerful as any other, but less transparent and
accountable than many; organizationally, Web 2.0 is essentially neo-elitist,
part, indeed, of its very interest.

Chat Rooms (Identity)

The chat room, though perhaps less popular or less fashionable today than
several years ago (it’s been sidelined by newer Web 2.0 applications), is
a distinctive digimodernist form. Go on to one that’s in full spate and you
see a scrolling page with phrases, remarks, questions, rejoinders, greetings
and partings, complaints and consolations, invitations and exclamations,
all rolling torrentially by. Leave it for fifteen minutes and a daunting jungle
of text will spring up; what grew before you logged on is imponderable.
Visually, this never-ending, forever-turning stream of communication may
resemble the flowing of a minor sea, but its tide never goes out: (discreetly)
compatible with many people’s working habits and extending over territo-
ries and therefore time zones, the sun never sets on the chat room and the
moon cannot reverse its inexorable onwardness. It’s an endless communi-
cative narrative, into which you shyly emerge.
This endlessness may manifest itself in a feeling of futility, a sense that
people are throwing down comments merely in order to fight off their own
boredom or loneliness, and that the “conversation” will never get anywhere
or produce anything. Chat rooms provide unstoppable movement, but not
progression; a discourse with such a stupefyingly high level of evanescence
(even participants will struggle to recall their previous interventions) will
never be able to develop consecutively toward any sophisticated communi-
cative conclusion. Of all the Internet’s digimodernist forms, the chat room
seems the most open: you register, log on, and write your material, contrib-
uting in to a discursive forum. It is, of course, moderated and patrolled for
unacceptable behavior, but if such is your objective you can hive off with
a like-minded fellow textual contributor to a private cyberspace of your
own: the broad, open chat room is thereby narrowed to a small, closed chat
room, but its structure remains intact. The discourse of the chat room
is whatever you make it: unlike with blogs or message boards there is no
privileged intervenant but an apparent equality permits, potentially, an
extraordinary expressive freedom. (And power, as your greeting is answered
instantly by a stranger 4,000 miles away.) However, the freedom to say
whatever you like is exchanged here for the fact that none of what you say
really matters: it’s not you writing—in your anonymity, you’re not at stake—
and you don’t know who’s out there.
Much of the social comment on chat rooms has focused on “grooming”:
children may be accosted there by pedophiles pretending to be their age
whose objective is to arrange encounters; the victims will mistakenly sup-
pose they are meeting new friends. This scenario has been misrepresented
in terms of a genuine, tangible child being misled by a deceitful adult via
a chat room’s anonymity (or pseudonymity). But in fact there is nobody
genuine or tangible in chat rooms: everybody is invented, elusive, some-
where and nowhere. It’s a discourse created by unknowns for unknowns.
Morally, the issue is rather whether children should be left to roam in a
universe without secure identities, their own or anyone else’s. Agreeing to
suspend identity, which you do on entering a chat room, taking a bogus
name, concealing or lying about your age, gender, place of residence, tastes,
and so on, is somewhat unusual behavior but, in principle, harmless for an
independent individual whose social identity is otherwise settled. A child
who does so dismantles all the systems of protection by which s/he sur-
vives socially. In short, it is the chat room that creates the problem for the
child; the pedophile only exploits it. However, chat rooms are particularly
popular with children precisely because their suspension of identity both
chimes with the unformedness of the infantile self and seems to offer escape
from its unshakable boundaries into a field of exciting possibilities.
In short, three effects apply here. Chat rooms extend text indefinitely and dissolve it instantaneously; they suspend the limits and particular-
ity of the textual intervenant; and they subsume the intervenant into
themselves, becoming functionally indistinguishable one from another.
Being in a chat room is a loss of self and an infinite expansion of selfhood;
no longer you, you become the text yourself. Your thoughts and feelings
become text, and in turn create who you are; others’ likewise. There’s an
ebbing away of human content and a seeping of the human into the text’s
ontology. You become a textual figure; you become a character, a fictive
player, within an essentially fictive universe peopled only by invented
selves. You are this text. It’s an alluring, exciting, risky, and ultimately futile
singularity.

Message Boards (Authorship)

On April 6, 2008, the London Sunday Telegraph published an article called “110 Best Books: The Perfect Library,” which was then uploaded on to its
Web site. The article lists books sorted into eleven categories ranging from
“Classics” and “Poetry” to “Sci-Fi” and “Lives.” Printed out three months
later the original article runs to eleven pages, each book receiving a cursory
summary, for example: “Flaubert’s finely crafted novel tells the story of
Emma, a bored provincial wife who comforts herself with shopping and
affairs. It doesn’t end well.”7 The comments on the message board beneath
the article run in their turn over 52 pages, or more than 4 times the extent of their prompt; there are perhaps 500 separate posts. Quantitatively, message boards swamp their original. Of these 500 or so, about 475 were posted within 10 days of uploading; the final 25 were spread over 2 months, and the last was dated 3 weeks before I printed. A message board functions in
time like this: an initial tidal wave followed by a gradual slowing down and
then a sudden drying up; its textual onwardness is contained within this
cycle, and directed obscurely by an anonymous or pseudonymous modera-
tor who also applies rules about what cannot be said. Despite this, the tone
of almost all the posts is the same: they are dominated by criticism, carp-
ing, condemnation, contradiction, complaint, and what the moderator
evidently felt were acceptable kinds of abuse, that is, abuse that is nonspecific or aimed at groups other than minorities.
Some of the interest in looking at what people actually say on message
boards is to counter the relentless propaganda promoted by Web lovers,
according to which they might be a “forum” for “communication” among
“communities” on a “global” scale. All of these qualities are present here
technologically and functionally; however, in terms of textual content they
are overwhelmed by their polar opposites, by parochialism, provincialism,
isolation, bigotry, rage, prejudice, simple-mindedness, and anonymity.
What message boards do is, toxically, distribute these human failings to
everyone across the planet in no time at all. This is the picture that emerges
from reading them all: one individual locked in a tiny room sitting at
a computer screen typing out their irritation, projecting their bile into the
atmosphere; and fifteen miles away a stranger doing the same; and five
hundred miles away another, and so on, around the world. All of these
streams of rancor and loathing then coalesce in the sky into a thin cloud of
black and shallow dislike, and fall gently but dishearteningly to earth. None
of the projectors is aware of any other: they spew in a void, and the con-
tents of their irked guts are displayed potentially to everybody forever.
I’d argue that this tends to be the pattern of Internet forums in general, but
the one I’ve chosen to highlight is a particularly vivid example.
The cause here of this venom is the list: almost every post refers to it (not
to the other posts). Although its title may suggest it’s setting itself up as an
encyclopedia for the human species, the key is found in the subcategory
“Books that changed your world.” You, the implied reader, were influenced
by The Hitchhiker’s Guide to the Galaxy, Zen and the Art of Motorcycle
Maintenance, The Beauty Myth, Delia Smith’s How to Cook, A Year in
Provence, Eats Shoots and Leaves, and Schott’s Original Miscellany. Self-
evidently this is a list compiled with a close eye on its market, on the people
who will pay to read it, the newspaper’s known habitual purchasers.
Market research will have guided the writers to select books aimed at
Britons, usually middle-aged and older, certainly middle-class and “higher,”
with right-wing and traditionalist views: elsewhere in the list come Swal-
lows and Amazons, Churchill’s A History of the English-Speaking Peoples,
and the Diaries of the extreme right-wing British politician Alan Clark.
It also contains a large number of books that such a demographic will
certainly already have read, like Jane Eyre and Rebecca: it’s in the business
of comforting its readers more than of dislocating them.
None of the posts bears in mind the identity or probable goals of the
article’s authors. Many of them respond as though it had been penned by
some transcendent but deeply stupid entity, others as though it were effec-
tively the work of the entire British nation. They do not consider the ori-
gins of its biases, nor do they place it in its media context as essentially a
worthless, paper-filling exercise by staff writers lacking the funds necessary
to send somebody out to find some actual news. It’s absurdly limited in its
range, but then it is aimed at, in planetary terms, a tiny and limited group
of people; it’s not a missive from God to the human race; it’s a set of cozy
recommendations for a group of people with fairly well-known tastes,
which at worst will confirm them in their literary habits and at best will
nudge them toward a good book they don’t yet know.
The response of the posters, however, tends to be that the list is “simply
ridiculous, woefully inadequate,” “twaddle and hype,” “incredibly weak and
. . . pathetic,” “so obviously predictable and prejudiced,” “appallingly orga-
nized,” and “a load of crock.” Almost all of the posts foreground the titles of
books whose omission the posters find scandalous. A recurring feature is
Italian posters abusing what they see as British arrogance and extolling
missing Italian glories:

It’s astonishing that the largest part of the literature in the list comes
from places that, when China and Mediterranean cultures invented
literature, were still in the stoneage. It’s a petty provincial list

Italian poetry is almost completely missing. I suggest Ossi di Seppia by Montale . . . But also D’Annunzio and his Alcyone is required for a perfect library!

Sono italiana e trovo piuttosto irritante che quasi tutti i libri da voi
citati appartengano alla letteratura inglese . . . insomma, manzoni?
leopardi? verga? [I am Italian and I find it rather irritating that almost
all the books you mention belong to English literature . . . what about
Manzoni? Leopardi? Verga?]

You Anglo-Saxons make us Latin Europeans laugh!!!! Where is
the true Catholic bible on this list, and other wondrous non-British
literature? Remember, we Latins civilized you Britons

Hey! there’s life over the earth beyond UK!!!! is not possible to
describe this library list. Is always the same thing. You people are the
best and only you right?? Puaj

Another recurrent group is what I presume to be Americans banging
the drum for one specific American author: three out of six consecutive
writers lament the absence of The Fountainhead and Atlas Shrugged, and
Ayn Rand’s name appears as an amazing omission on almost every page.
None of these posters ever stops to ask (1) whether Rand is known or
popular outside the United States, (2) whether she is known or popular in
Britain, or (3) were she not, whether British people would find her to their
taste. The tone, in these posts and throughout, is belligerent, certain,
avowedly universalistic and actually dreadfully narrow-minded: in truth
Rand has no currency in Britain, or even Western Europe, and no Telegraph
journalist would waste their energy cajoling their countrymen to read her.
The Italian posts are even worse. Whatever the inadequacy of the lowest
common denominator of the Telegraph’s famously middlebrow readership,
it’s fair to say that, beyond Dante, Lampedusa, and Eco, Italian literature
has little or no purchase outside its country of origin. These particular
posts reek of wounded arrogance, nationalism, and insularity, all subjected
to transference, and possibly connected to the reelection in the month this
article appeared of Silvio Berlusconi’s xenophobic and quasi-fascistic
administration. Such posters display nakedly what they imagine they are
denouncing; the sole difference is that while the journalists are operating
tactically and commercially, the posters’ chauvinism is sincere (hardly a
plus point, though).
Post after post simply names excluded books, though with what supposed
purpose cannot be imagined: “Were is The Great Gatsby,” “So what’s with
the lack of Vonnegut?” “Anne Frank?” “Tolstoy, please,” “And what about
Kesey’s One flew over the Cuckoo’s Nest? Or Dashiell Hammett?” “It seems
rather unbelievable to me that so far no one has mentioned Nabokov.”
Nobody makes a case for their book or author. There are, as ever on mes-
sage boards, a few contributions from trolls designed to annoy posters and
railroad their discussion by, for instance, suggesting Playboy magazine;
equally, there are the usual pontificating and faintly mad speeches packed
with long words and complex sentences and devoid of rational points, like
the one posted by Fred Marshall on April 11, 2008. And post after post goes
by without anyone acknowledging another.
The overriding impression is that almost every post, whether related to
the original article or other posts, appears to be driven above all by the urge
to flatly disagree, to reflexively and bad-temperedly contradict. Uploading
the article just seems to have acted as a kind of lightning rod for interna-
tional contempt, egocentricity, and ignorance. Reading through page after
page of it is dispiriting indeed, because it reveals a systemic failure of com-
munication: in theory, contributing to an Internet forum leads you into a
place of worldwide and instantaneous concert, of debate, where thought is
shared and interrogated among equals; in practice it resembles the irked
and near-simultaneous squawking of an infinite number of very lonely
geese. Recommending a book should be an act of generosity; these posts
sound petulant, hardly able to contain their fury. There is no progression
here, no development, no recognition of the rest of the world: “Where is
Cervantes’ ‘Don Quixote’?” “I must say that Miguel de Cervate’s ‘Don
Quijote de la Mancha’ must be in the list,” “No Don Quixote?” It sounds
like the barking of a petty and frustrated megalomaniac.
At some stage, exhausted by wading through an unending stream
of “[w]hat about ‘The Jungle’ by Upton Sinclair?” and “[n]o Dune?” and
“[t]his list stinks of intolerance and racism,” you forget completely what
originally triggered all of this: a meaningless and pointless itemization of
very good and not very good books for genteel fifty-year-olds hankering
for the return of the death penalty. You are instead lost in the void of
the message board. The emotional tone and the nature of this debate are,
I contend, much the same everywhere on message boards: the internation-
alized insularity, the rage, the preaching, the illiteracy, the abuse, the
simple-minded plethora of non sequiturs, the flow of vapid contradiction,
the impossibility of intellectual progress or even of engagement. Some
boards, like those on the London Guardian newspaper’s Comment is Free
site, are of a much higher caliber than this, but usually no more fruitful or
enlightening; and though some posts are worse than others the good ones
are never enough to drive out the bad.
The one thing you never get on message boards is people saying that
they stimulate communication and community on a global scale (they say
it in books or on TV). Instead you get this, posted by “open minded french
guy” on June 8, 2008: “I am fed up with this Americano-english proudness
which think themselves as the center of the world.” Ten minutes later, he
returned to add: “I am fed up with this egocentric of this so proudness of
American-English position; the simple idea of thinking ‘a perfect library’
you should first watch a perfect globe.” The fact that he felt his “improved”
insight warranted republication tells you everything.

Blogs (Onwardness)

Of all the forms of Web 2.0, blogs might be the easiest to explain to some
cultivated time traveler from the late eighteenth century who was already
familiar with both diaries and ships’ logs. The former he would know as
a journal intime or intimate daily record, kept by young ladies or great
men; the latter he would understand as a regular, systematic public account
of external activity and events. He would probably then see blogs, fairly
accurately, as a conflation of the two. Books about blogging rightly empha-
size their diversity of type, purpose, readership, and content, but our time
traveler might note that in his era already diaries varied in similar ways,
encompassing little narratives about the quiddities of the daily routine,
small essays of personal opinion, and insights into the hidden operations of
power. What then is new about the blog? Its hyperlinks, obviously, by which
one blog can link to a thousand other sites, but what is textually new?
For Jonathan Yang, the author of a guide to the subject, “[a] blog, or
weblog, is a special kind of website. The main page of each blog consist [sic]
of entries, or posts, arranged in a reverse chronological order—that is, with
the most recent post at the top.”8 The primacy given by blogs to the latest
entry marks a first break with their textual inheritance. The entries in a
diary or log progress from left to right through a book, such that internal
time flows in the same direction as reading about it; but as the eye descends
the screen of the blog it goes back in textual time. Why is this? The reason
lies in the digimodernist onwardness of the blog: it’s a text under develop-
ment, one currently being constructed, being built up, a text emerging,
growing. So is a diary or log, but they can also be read—and are read, by
those who enjoy the diaries of public figures from Alan Clark to Samuel
Pepys, of writers like Woolf or Kafka, or of someone like Anne Frank—as a
finished, enclosed totality. The diary that is not being added to is complete;
the blog in that state is textually dead. Another guide recommends:
“Posting at least one entry each weekday is a good benchmark for attracting
and holding a readership.”9 Still another warns: “There’s no such thing as
a non-updated weblog, only dead websites which no one visits” and it
advises bloggers to “set your update schedule . . . deliver on [your audi-
ence’s] expectation that you’re still there, still having thoughts, opinions,
experiences, observations and you’ll start to build a following.”10 Reader-
ship requires the text to refructify, to extend itself, it is synonymous with
the text’s incompleteness of scope; and a nonupdated, hence a nonread and
so a dead blog is not even memorialized: it disappears off the face of the
Web as if it had never existed. No dead blogger is read; there’s no e-Pepys
(at least in early digimodernist principle). The blog is in thrall to its current
revision and self-renewal; it is the hostage of its capacity to become ever
longer, to spread, to add to itself. It is a textuality existing only now, although
its contemporary growth also guarantees the continued life of its past
existence (archived entries): it’s like some strange entity whose old limbs
remain healthy only so long as it sprouts new ones.
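To make the convention concrete: the reverse chronology that Yang describes is, computationally, a single sorting decision. The sketch below is illustrative only; the names Post and render_main_page belong to no actual blogging platform, and the layout is an assumption.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        title: str
        published: datetime
        body: str

    def render_main_page(posts: list[Post]) -> str:
        # Reverse chronological order: the most recent post at the top, so
        # that the eye, descending the screen, travels back in textual time.
        ordered = sorted(posts, key=lambda p: p.published, reverse=True)
        return "\n\n".join(
            f"{p.published:%Y-%m-%d}: {p.title}\n{p.body}" for p in ordered
        )

The entire weight of the form rests on that one reverse=True; everything else the blog inherits from the diary.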
So much of the excitement, the interest, and the energy of digimodern-
ism comes from the onwardness of its texts, their existence only in their
present elaboration, as novels exist for the men and women who are writ-
ing them (finished, they fade forgotten into the past). Such a text is never
over, always new; its creation is visible, tangible, its dynamism is palpable
on the screen. As for the interest of the content, all depends here on the life and the writerly qualities of the blogger in question: the more fraught, the more world-historical an individual’s circumstances, the more gripping is their blog; the more banal the life, the duller the posts; and the blog will be as funny, clever, or eloquent, as vapid, egocentric, or ignorant, as the blogger who writes it. It is therefore absurd to
have any opinion about the general interest of blogs (again as with diaries).
Their mushrooming number is due to their digimodernist textual instabil-
ity or haphazardness, which promises freedom, possibility, creativity, glam-
our. That so many bloggers cannot respond to this offer is irrelevant when
so many others do.
Also in contrast to earlier modes are the blog’s frequent pseudonymity
and multiplicity of authorship. Some, especially those set up by organiza-
tions, may have several contributors or editors, while most include the
comments and reactions of their readers, entwined with the responses of
the host: “Typically, a blog will also include the ability for readers to leave
comments—making the whole affair much more interactive than a tradi-
tional website.”11 This shouldn’t, though, be overstated; I can’t quite see that
“blogging is a collaborative effort.”12 However, a definition of sorts does
begin to emerge here: the blog, like Wikipedia, is an ancient form techno-
logically reinvented for a digimodernist textual age.
Wikipedia (Competence)

Producers of text for chat rooms soon evolved a new kind of typed English,
one favoring phonetic substitutes for real words (“how r u”) and acronyms
(“lol”), and discarding punctuation (“im ok”). This script was adopted and
extended by early senders of text messages, whose cell phones could only
hold a very limited number of characters. But since a chat room contribu-
tion could be, within reason, as long as you liked, and there was no physical
discomfort linked to typing or obvious advantage to speed, this simplified
script had no ostensible purpose. Subconsciously, I suspect, the aim was
to construct chat room text in a specific way: as uninflected by issues of
linguistic competence or incompetence. By reducing the number of spelled
words and by eradicating punctuation, there was less and less a contributor
could get linguistically wrong; by forcing all contributions into a simplified
and clearly artificial and new mold, chat room text rendered all semantic
and syntactical rules redundant—it outflanked them, made the issue obso-
lete. For its detractors, this script was the latest stage in the spread of socially
valorized illiteracy; for its zealots, it liberated text, finally, from its old
elitism, its legalism and dogma, its tendency to exclude and oppress.
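The outflanking is simple enough to mimic in code. In the sketch below the word list is an assumption chosen for illustration, since no canonical dictionary of the script exists; only the two moves described above matter: substitution, then the wholesale discarding of punctuation.

    import re

    # Illustrative phonetic substitutions and acronyms; the list is an
    # assumption, not a standard.
    PHONETIC = {"are": "r", "you": "u", "your": "ur", "okay": "ok", "for": "4"}

    def to_chat_script(text: str) -> str:
        # Lowercase, substitute shorthand, then discard punctuation entirely:
        # with so little left to spell, there is less and less to get "wrong."
        text = text.lower()
        for word, short in PHONETIC.items():
            text = re.sub(rf"\b{word}\b", short, text)
        return re.sub(r"[^\w\s]", "", text)

    print(to_chat_script("How are you? I'm okay."))  # prints: how r u im ok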
On one hand, the emergence of this new script is another sign of
the novelty of digimodernist textuality. In all the changes to text since the
Enlightenment it had not been felt necessary to reinvent the English lan-
guage. On the other hand, the desire for a form of text stripped or freed of
questions of linguistic competence—where nobody is punished for “errors,”
nobody rewarded for “correctness”—had a broadly postmodern origin. If
you ceaselessly call for Steven Connor’s “more voices,” as postmodernism
does, you eventually run up against the literary shortcomings of a stubborn
part of the population. Evidence of this overarching context came with the
appearance of Wikipedia, not a text emptied of linguistic (in)competence
but an encyclopedia stripped or freed of issues of objective intellectual
(in)competence. Until this point any contributor to an encyclopedia had
been compelled to offer some proof of objective qualifications: s/he would
have to have passed certain exams and gained certain diplomas, to have
published relevant texts of a certain importance or been appointed to cer-
tain posts. The right to contribute had then to be earned through demon-
strable achievement, and subsequently to be conferred by others also
applying objective criteria (this is true whatever contingent corruption
may have infected the process). Wikipedia simply swept all of this away.
By definition, its criteria for contributors were that they have access to the
Internet and apparent information on the subject in question; to write for
Wikipedia you had to be able to write for Wikipedia, and the only person
capable of assessing this ability, in principle, was yourself. Nobody would
be disbarred from contributing on objective grounds. The encyclopedia
had, overnight, been wrenched away from the specialists, from the profes-
sors, and given to their students to write.
Some humanities professors have had the gall to attack Wikipedia: after
a lifetime spent teaching that objectivity doesn’t exist, that “knowledge”
and “truth” are mere social constructs, fictions, they actually had the nerve
to describe this particular construct as illegitimate. On the contrary, it was
easy for its enthusiasts to depict Wikipedia as the glorious fulfillment
of Michel Foucault’s final fantasy: the release of knowledge from its incar-
ceration in power structures, its liberation from systems of dominance,
oppression, exclusion. Condemnation by the professors only confirmed
the veracity of Foucault’s critique and, by extension, the emancipatory
justice of the Wikipedian project. Wikipedia is, in short, a digimodernist
form powered by postmodernist engines; it’s the clearest instance of the
submerged presence of postmodernism within contemporary culture.
For this reason, among others, Wikipedia’s natural home is the English-
speaking world, where post-structuralism found its most uncritical and
energetic audience: its article on itself states that a comfortable majority of
its “cumulative traffic” (55 percent) is in English.13 Within this geography,
there is something stereotypically “American” about Wikipedia’s integra-
tion of a sort of naivety or credulity. There is certainly something ill-advised
about the method by which it accrues what it presents as “truth”: if you
wanted to know the capital of Swaziland or Hooke’s Law you wouldn’t
stop someone on the street, ask them, and implicitly believe their answer;
you wouldn’t even approach a group of students in a bar and subsequently
swear to the factuality of whatever it was they happened to tell you. The
appropriate word here may, though, be not so much credulity as idealism:
the belief that the mass of people will somehow conspire just by communi-
cating with each other to throw up truth is akin to the invisible hand theory
of economics (by which everyone mysteriously and inadvertently produces
prosperity) and the Marxist theory of history (by which the majority of the
population inevitably somehow create a free and just society). The parents
and grandparents of Wikipedia’s writers (or “editors”) possibly marched
against nuclear weapons or protested the Vietnam War; Wikipedia is one
of the most striking expressions of political radicalism and idealism in our
time, though it is also typical of our consumerist age that its domain isn’t
truly a political one. In fact, the grand illusion of believers in Wikipedia is
that they are doing politics when they ought to be doing knowledge.
Debates about Wikipedia have tended systematically to miss the point.
Is it reliable? What about the dangers of vandalism? Can it be corrupted by
private hatreds, concealed advertising and agendas? Don’t the exponential
rise in its number of articles and the high frequency of their consultation
indicate its epoch-making success? Yet despite their prominence in dis-
course about Wikipedia, none of these questions matters. First, it’s not reli-
able, or, rather, it’s as reliable as the random guy on the street or the random
guys in the pub multiplied by whatever number you like, but neither is this
the final word on the subject; second, the issue of vandalism, of deliberate
wrecking, is just a distraction from the real problem, which is mediocrity,
or unconscious wrecking; third, it is regularly infected by private inten-
tions, though it’s more telling that this can’t in practice be distinguished
finally from the simple act of writing it (consider why, for instance, there
are so many articles on the Star Wars universe); and, fourth, you don’t
judge an encyclopedia by consumerist values—product ranges, sales
volumes—any more than you judge it by political ones. You don’t go to
Wikipedia for freedom or turnover, but for knowledge; creating its text,
you aren’t sending out into the world post-structuralist glasnost or a com-
modity—you’re making knowledge-claims. It is necessary here to separate
the primary and the secondary, the textual item from its alleged social
significance.
The crux of the matter is highlighted by the article on Thomas Pynchon’s
novel The Crying of Lot 49. After a discussion of its characters and a plot
summary you come to a section titled “Allusions within the book,” which
recently asserted that:

The plot of The Crying of Lot 49 is based on the plot of Graham
Greene’s “Brighton Rock” in which the character Ida Arnold tries to
solve the mystery of Fred Hale’s murder. The obscure murder of Fred
Hale happens within the first chapter whereas Pierce Inverarity died
under mysterious circumstances before the book began. Throughout
“Brighton Rock,” we follow Ida in her attempts to solve this mystery
that involves her: figuring out the mob played a crucial role, singing
in bars, sleeping around with some fellows. While “Brighton Rock”
follows Pinkie Brown, the novel’s anti-hero, closer as the novel pro-
gresses, Pynchon chose to make Crying the story of Oedipa Maas or
Ida Arnold . . .
Pynchon, like Kurt Vonnegut, was a student at Cornell University,
where he probably at least audited Vladimir Nabokov’s Literature 312
class. (Nabokov himself had no recollection of him, but Nabokov’s
wife Véra recalls grading Pynchon’s examination papers, thanks only
to his handwriting, “half printing, half script.”) The year before
Pynchon graduated, Nabokov’s novel Lolita was published in the
United States; among other things, Lolita introduced the word
“nymphet” to describe a sexually attractive girl between the ages of
nine and fourteen. In following years, mainstream usage altered the
word’s meaning somewhat, broadening its applicability. Perhaps
appropriately, Pynchon provides an early example of the modern
“nymphet” usage entering the literary canon. Serge, the Paranoids’
teenage counter-tenor, loses his girlfriend to a middle-aged lawyer.
At one point he expresses his angst in song:
What chance has a lonely surfer boy
For the love of a surfer chick,
With all these Humbert Humbert cats
Coming on so big and sick?
For me, my baby was a woman,
For him she’s just another nymphet.14

This is not vandalism: the writer15 seems sincerely to picture him or herself
contributing pertinent and enlightening information about Pynchon’s
novel (it’s not like introducing typos into the article on dyslexia). To my
mind, both paragraphs strive desperately to connect a novel the contri-
butor has read to another one s/he knows: though there’s an element of
detective fiction about The Crying of Lot 49, almost infinite are the stories
that begin with a mysterious event, while three words referring to a novel
known at the time to almost every American adult and written by someone
Pynchon probably didn’t meet do not warrant a third of a page of commen-
tary. Had I been the author’s professor, I would not have corrected this;
I would simply have graded it, and badly, of course, because it isn’t wrong
so much as not good. It cries out for more education, wider reading, and a
better understanding of how literary criticism works. How do I know this
(or think I do)? Is it because I have objectively demonstrated competence
in twentieth-century literature in English (qualifications, etc.)? Not exactly,
since in order to recognize the poor quality of this critique you need nei-
ther a diploma nor a specialty; you just need competence. But competence
is an objective quality: it doesn’t emanate spontaneously from people; it has
to be socially acquired somehow, and capable of display. No doubt whoever
wrote these paragraphs thinks them competent; I don’t know how, within
Wikipedia’s mechanisms and ethos, you would show him or her that they
aren’t. That ethos holds that they will eventually be improved, mystically
raised up to a higher level of quality. But who decides what that high qual-
ity consists of, if it doesn’t consist of this? And how do you decide who
decides?
Moreover, the problem of competence is not restricted to one of medi-
ocrity. Wikipedia’s article on Henry James, for instance, of which an extract
follows, is as good as you could reasonably expect any encyclopedia entry
to be. And yet, how do you know it’s good? Who can say?

The next published of the three novels, The Ambassadors (1903), is
a dark comedy that follows the trip of protagonist Lewis Lambert
Strether to Europe in pursuit of his widowed fiancée’s supposedly
wayward son. Strether is to bring the young man back to the family
business, but he encounters unexpected complications. The third-
person narrative is told exclusively from Strether’s point of view.
In his preface to the New York Edition text of the novel, James placed
this book at the top of his achievements, which has occasioned some
critical disagreement. The Golden Bowl (1904) is a complex, intense
study of marriage and adultery that completes the “major phase” and,
essentially, James’s career in the novel. The book explores the tangle
of interrelationships between a father and daughter and their respec-
tive spouses. The novel focuses deeply and almost exclusively on the
consciousness of the central characters, with sometimes obsessive
detail and powerful insight.16

While this is superior to the Pynchon in every respect, the issue isn’t, as
I hope I’ve made clear, “how good” Wikipedia is, but what you can do with
any entry when the objective category of intellectual competence has been
abandoned. By the time you come to read this page, the online original
may have been swept away and replaced by something of inferior quality,
perhaps by the person who thinks Jane Austen an “influence” on Martin
Amis.17 The onwardness of Wikipedia is disguised: you call up an article
and it “looks” finished, though clicking on “history” may lead to five or five
hundred previous versions, saved forever, and evidence that the article is in
constant imperceptible evolution. Go back a month later and it may be
twice as long. Without this onwardness, Wikipedia could not exist: it’s the
textual expression of the open-source wiki software platform. Yet, though
integral to a diary (or blog) or conversation (or chat room), onwardness
moves much more slowly in the realm of knowledge: our understanding
of James is constantly shifting, but not so visibly as to require his encyclo-
pedia entry to be updated every week. A print encyclopedia wouldn’t need
to revise an entry like this one for ten years, but this negates the meaning
and purpose of open-source software. The entry on postmodernism
currently states: “This article or section is in need of attention from an
expert on the subject” (and it really is),18 but there is no motivation for one
to respond: becoming a specialist is a long, arduous, and costly process,
producing a high-quality summary of such a difficult subject is a time-
consuming and tiring act, and contributions to Wikipedia are unpaid,
anonymous, and capable of being wiped out in seconds. Writing for
Encyclopedia Britannica has (I imagine) only the first of those drawbacks.
In fact, professional recognition and respect for your intellectual product
are the wages that society pays for the hard and endless task of becoming
competent, of becoming an expert. Wikipedia wants the latter without
offering the former; in short, it wants to steal your competence.
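That onwardness, at least, is open to inspection by anyone. The standard MediaWiki API, the real, public interface behind the “history” tab, will return an article’s revision trail; in the sketch below the article title and the limit of five revisions are arbitrary examples.

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    # Ask the MediaWiki API for the last five saved revisions of an article;
    # "Henry James" is an arbitrary example.
    params = urlencode({
        "action": "query",
        "prop": "revisions",
        "titles": "Henry James",
        "rvprop": "ids|timestamp|user",
        "rvlimit": 5,
        "format": "json",
    })
    with urlopen("https://en.wikipedia.org/w/api.php?" + params) as resp:
        data = json.load(resp)

    page = next(iter(data["query"]["pages"].values()))
    for rev in page["revisions"]:
        # Every version persists; the finished-looking page is merely the newest.
        print(rev["revid"], rev["timestamp"], rev["user"])

Run a month apart, the same query documents the article’s imperceptible, perpetual evolution.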
Proselytizers for Wikipedia trumpet evidence of the accuracy of certain
articles to show that the project is reliable.19 This is an abuse of language: a
stopped clock is accurate twice a day, but you wouldn’t “rely” on it to tell you
the time. (Accuracy refers to truth, reliability to its expectation; Wikipedia
often provides the one but can’t furnish the other.) And yet an encyclo-
pedia that can’t be relied on is by definition a failure. Instead, I use, and
recommend, Wikipedia as a pre-encyclopedia, a new kind of text and a god-
send in itself: one that satisfies idle curiosity by providing answers I won’t
have to stake my stick on, and one that eases me into a piece of research
by indicating things that I will verify later elsewhere. The watchword is
unrepeatability: never to quote what you read on Wikipedia as knowledge
without substantiation from a third party. In this context, and with this
proviso, Wikipedia’s digital mode, its hyperlinks, speed, and immensity of
scale, richly compensates for its ineluctable unreliability. Stripped of its
superannuated postmodernist trappings, Wikipedia can finally be seen,
and appreciated, for what it really is.

YouTube (Haphazardness)

Some may feel that I have wrongly evaluated, erring on the side of overgen-
erosity as much as underappreciation, perhaps three of the forms discussed
so far in this chapter. On one hand, blogs have been denounced by Tom
Wolfe as “narcissistic shrieks and baseless ‘information,’”20 and by Janet
Street-Porter as “the musings of the socially inept.”21 On the other hand,
a message board created by the London Guardian for one of its pieces
recently ran a post arguing that “this is yet another article where the major-
ity of the posters appear to take a more nuanced view than the writer.”22
(Posters, however, have no commercial obligation to be “readable” or
“punchy.”) I accept these points, and without self-contradiction, as I hope to
show. Indeed, even I recognize that much of Wikipedia’s content is so infor-
mative and useful that criticizing it as a form can feel like churlishness.
To see why these divergent views are also valid, we need to distinguish
between two ways of looking at texts. They comprise what can be described
as their grammar (broadly, their underlying discursive rules) and their
rhetoric (simplistically, what they actually say). These elements sometimes
slot together with some strain. Agatha Christie’s novels are rhetorically
focused on slaughter and fear (their page-to-page material) but grammati-
cally foreground a reassuring restoration of order (their overarching and
generic systemicity). The rhetoric of Sex and the City emphasizes female
independence, wealth, and friendship; its grammar pulls the women toward
marriage, children, and home. Writing about message boards I concen-
trated on rhetoric, what people post on them, while my comments on
Wikipedia sought to identify the necessary consequences of its grammar.
Yet all this is made intractably more complex by the fact that one of the
essential hallmarks of the digimodernist text is its haphazardness, that is,
that fundamental to the grammar of the digimodernist text is the way
in which rhetorically everything is up for grabs. The digimodernist text is
classically in process, made up—within certain limits—as it goes along. The
degree of haphazardness of any text is always restricted and should never be
exaggerated; it corresponds to that felt by any writer (like me) trying to put
together a book—indeed, it’s rooted in an electronic and collective version
of that earlier model—but varies in detail from one form to another.
YouTube is a digimodernist form offering a particularly high level of
haphazardness. Grammatically, it’s a mechanism for making freely avail-
able relatively short pieces of film: you can upload yours on to it, and watch
(and pass comment on) other people’s. What you put up on to it and see
are entirely down to you (in the sense of “everyone”). In practice YouTube
concentrates the interests of certain types of user, summarized by Michael
Miller: the recorder/sharer (extracts from Saturday Night Live, Oprah), the
historian/enthusiast (ancient commercials, “classic” TV or music clips),
the home-movie maker (weddings, pets, birthday parties), the video blog-
ger (via Webcam), the instructor (professional or not), the amateur reporter
(breaking news footage), the current or budding performer (of music or
comedy), the aspiring filmmaker (student videos), and the online business
(infomercials, etc.).23 Though availability is constricted by copyright law,
Miller is able to conclude: “So what’s on YouTube tonight? As you can see,
a little bit of everything!”24
The range on offer can be read as a scale running from professional
to amateur in terms of the résumés—the formal training, the technical
experience—of the people who make them. YouTube places cheek by jowl
highly sophisticated work by career specialists and stuff by people who
scarcely know how to switch on a camcorder. For Andrew Keen this scale,
or rather this duality, is precisely what is wrong with Web 2.0, which, he
argues, has undermined the authority of the professional and unjustly
fetishized the amateur in fields such as journalism (swamped by blogs) and
encyclopedias (engulfed by Wikipedia). Recently, for instance, American
newspapers have laid off their arts reviewers as more and more people
choose to find out about the latest cultural offerings from unpaid and
untrained bloggers. Keen decries this development, but the validity of his
argument is hobbled by his ahistorically static notion of competence as
necessarily enshrined in formal structures like newspapers and Encyclo-
pedia Britannica. In fact, competence may be found anywhere (though it
tends to cluster). A film review, for instance, needs to show competence in
the areas of cinematic knowledge, accuracy of summary, articulacy of
expression, and specific insight; given the tiny numbers of career film
reviewers around (they hold on to their jobs for decades) it wouldn’t be
surprising to discover they don’t have a monopoly on competence, or that
there are people on the Internet who review with more freshness and sym-
pathy than some who have been churning it out forever. Pace Keen, what
matters is that, first, we valorize the category of competence, and second,
those who demonstrate it are rewarded (which is linked to the first); where
it’s demonstrated is unimportant.
YouTube comes in for a bashing by Keen: “The site is an infinite gallery
of amateur movies showing poor fools dancing, singing, eating, washing,
shopping, driving, cleaning, sleeping, or just staring into their computers.”25
Yet, putting to one side the biographical issue of whether such-and-such a
filmmaker was paid for his/her work, “amateurishness” in film has always
been a back-construction of Hollywood’s “professional” conceptions of
expertise, a set of characteristics identifiable as merely the polar opposites
of studio techniques: cheap stock, handheld cameras that shake and
wobble, uneven sound, overlong shots, blurring, offbeam framing, the
staginess of untrained actors, and so on. It’s just the modus operandi of the
“dream factory” turned inside out, making such films look neither dream-
ily smooth nor factory-efficient. Consequently, amateurishness looks “real,”
authentic, sincere, up against the “fantasy” and “commerce” of Hollywood
(Dogme 95). Keen focuses on amateur contributions to YouTube since they
relate to conceptions of Web 2.0 as user-generated content. But—and this
moves the debate to a digimodernist level—YouTube’s haphazardness
means that it encompasses amateur and professional material, and also
that it permits the imitation of styles. While students and other unpaid
wannabes seek to make their videos look “professional” in order to gain
employment, the trained and salaried rough up their work to make it look
real, authentic, and sincere: a studio feature film, Paramount’s Cloverfield
(2008), has even been made this way, likened by its director, by a character,
and by reviewers to something off YouTube.
However, while the haphazardness of YouTube remains intact—you
upload on to it whatever you want it to have—it is frequently eliminated
within the individual clip, raising the question of where the limits of the
haphazard lie. After all, a book (like this one) isn’t being written forever.
For Jonathan Zittrain, its up-for-grabs heart is the origin of the contempo-
rary Internet’s success: he warns against all attempts to restrict individual
development of its content, arguing that the Internet lives on the openness
that derives from infinite possibilities of personal input. His praise of the
“generativity” of applications highlights what in textual form I call haphaz-
ardness: “Generativity is a system’s capacity to produce unanticipated
change through unfiltered contributions from broad and varied audi-
ences.”26 His focus is on the Web’s grammar much as Keen’s is on its rhetoric: the truth is that their viewpoints coexist.
But can a text remain haphazard forever? Or, if all texts solidify in time,
won’t the fetishization of the haphazard trigger an incessant but vacuous
shuttling to new forms and abandonment of the old? In any event, on
YouTube the haphazard and the amateur are closely linked: many of the
home movie videos appear to have been recorded (semi-)inadvertently,
showing the amazing incidents that supervened during filming of quotid-
ian domestic moments. Such rhetorical (if forged) haphazardness sits eas-
ily here alongside its grammatical counterpart.

Facebook (Electronic)

The secret of Facebook, and I imagine of those social networking sites (Bebo, MySpace) with which I’m not familiar, is its close mimicry of friendship.
Opening an account is like meeting somebody socially for the first time,
finding you get on well and chatting away, though with you doing all the
actual talking. You tell them (you tell Facebook) your name, age, where
you work and live, where you went to college and school, you allude to
your political and religious views; opening up, you discuss, or rather mono-
logue, about your favorite movies and music, TV programs and books.
As in a conversation at a party you present yourself, put yourself out there,
in the hope of soliciting a complicity, shared interests; you also provide a
version of your face, as interesting or beautiful as you can make it, with the
general aim of coming across to others with all the élan you can muster:
this is how you make friends, or keep them, or get better ones, perhaps.
As time goes by, your input into Facebook comes to feel like the elec-
tronic nourishment of your friendships. If you went clubbing and some-
body took photos, you no longer get them developed at a local store and
hand them around any interested parties; you upload them on to your
page. If you’re feeling down, or up, or tired, or excited, or anything at all,
you can phone your friend and tell them or, alternatively, update your
status. Should you feel like communicating with somebody en tête à tête,
you send them a message through Facebook; but if you’d rather say some-
thing to him/her as though in a group conversation in the pub, you write
on his/her wall; if you’d like to say hi but don’t have the time or the inclina-
tion to chat, you poke instead. If you’ve just finished reading a book and
want, as people often do, to tell your friends about it, there’s an application
that enables you to do so; or you can give them electronic gifts. Nor does
Facebook mimic only existing friendships (since you’re likely to do all this
among people you already know). You can trace friends you’ve lost contact
with; through the establishment of groups on subjects of common interest
you can make new ones.
Of course, you can’t really. None of this is “real” friendship—it’s elec-
tronic friendship. It passes via a keyboard and screen; flesh-and-blood
persons are not involved, only accounts; no palpable items are exchanged
as gifts. Facebook nourishes friendship, but it does not provide it: the
electronic interface is so integral to it that it can be defined, rather, as the
textualization of the modes of friendship.
Facebook is so well designed, however, that it can render almost invisi-
ble this process of electronic textualization; for some, it becomes indistin-
guishable from actual friendship. Such people have been aghast to find the
embarrassing details they revealed about their life being used by employers
and authorities against them; teenagers have posted details of upcoming
parties on these sites and been dismayed when hundreds of strangers
turned up and trashed their absent parents’ homes.27 This kind of thing
results from overlooking the electronic and textual mode of Facebook,
which converts a private communication (a confession, an invitation) into
a public one. Digimodernism, crucially underpinned by the electronic/
technological, has produced a textual rupture so violent that its shock is
still far from being absorbed. Indeed, in the context of Web 2.0, it’s so new
that it’s been embraced mostly by those for whom everything is new, the
young. As a result, Web 2.0 is inflected by the proclivities and hallmarks
of youth: a mode of social networking that fetishizes the kind (tight peer
friendships) favored by the young; an encyclopedia written by students and
the semiqualified; a database of videos loved by the young or made by and
starring them. Web 2.0 is, like rock music in the 1960s and 70s, driven by
youth’s energy, and just as prey to hype and idealism.
The near-invisibility of the electronic and textual status of Facebook is
linked to this. Web 2.0 seems textually underanalyzed and socially over-
celebrated or overdenigrated because it comes, for now, incandescent with
its own novelty. But there is more going on here than that. As a modifica-
tion of an existing digital mode, the Web page, not of a predigital form like
the diary or encyclopedia, Facebook suggests that the drift of information
technology is now toward the phenomenological elimination of the sense
of the electronic interface, of the text. Increasingly, perhaps, people will feel
that the gulf separating their “real” and their “textual” lives has disappeared;
the thoughts, moods, and impulses of our everyday existence will translate
so immediately into the electronic, textual digimodernist realm that we
will no longer be conscious of transference. It won’t be a question then
of oscillating between offline and online, but of hovering permanently
between those extremes. This conceivable development, which Facebook
foreshadows, would culminate in the emergence of a new kind of human,
one constituted in large part not by the “other” forms of being beloved of
science fiction (robots, etc.), but by digimodernist textuality itself. In this
dispensation, you are the text; the text is superseded.
5. Digimodernist Aesthetics

O Gods dethroned and deceased, cast forth, wiped out in a day!
From your wrath is the world released, redeemed from your chains, men say.
New Gods are crowned in the city; their flowers have broken your rods;
They are merciful, clothed with pity, the young compassionate Gods.
But for me their new device is barren, the days are bare;
Things long past over suffice, and men forgotten that were.
A. C. Swinburne, 18661

A metaphor for this chapter (in the unlikely event one is wanted) might be
a bridge linking those before and after it by a discussion of four digimod-
ernist textual themes common to Web 2.0 and to older forms. The ancient
distinction between creative and critical writing, hopelessly assaulted by
postmodernists and post-structuralists from Barthes to Baudrillard, is
indifferently obliterated by digimodernism: computer technology restruc-
tures the “text” however it positions itself in relation to the “world.” Conse-
quently this chapter can join, say, movies with blogs about them in a shared
historical tendency.
Seeking to summarize a few of the more salient changes associated with
digimodernism, this chapter could easily have been as long as the book it
appears in. The discussion may as a result sometimes seem abbreviated
but, like a real bridge, its full meaning only emerges in the territories it lies
between. It’s a Janus-faced chapter in a second sense, too. On one hand, the
evocations of the death of popular culture, the eclipse of the fictive “real,”
and the superannuation of irony paint a portrait of the disintegrating
embers of postmodernism. However, another story, describing the emer-
gence of a new aesthetics, can also be glimpsed: in the prevalence of the
traits of the children’s story, the hegemony of the apparently real, the spread of earnestness, and finally (and most speculatively) the turn toward an
endless narrative form. The mood here is therefore double: between disap-
pearance and birth, exhaustion and infancy.

From Popular Culture to Children’s Entertainment

On the way to work one morning you pass a cinema screening the latest
Hollywood blockbuster and turn in for a coffee at a bar where you hear a
recent number one song playing and see friends discussing last night’s
prime-time network comedy show. How do you feel about these texts?
Once upon a time, under the influence perhaps of Theodor Adorno’s
critique of the “culture industry,” you might have frowned in contempt:
these are mere factory-made products churned out in standardized form
with the economic intention of appropriating the wages of the exploited
masses and the political aim of ensuring the continued obedience of the
masses to the capitalist status quo. The film, song, and program are artisti-
cally worthless, you would have felt, pseudoart designed to induce a state
of docile passivity in their consumer. As in ancient Rome, bread and cir-
cuses are purveyed to the oppressed people, mindless amusements and
distractions designed to entrench their subservience. These texts do not
give consumers “what they want”; instead, the downtrodden are manipu-
lated into imagining they desire what it is politically expedient to give
them.2
At another time, you might have reacted much more positively: with an
ironic half-smile, perhaps. Instead of the “culture industry,” you may have
thought in terms of “popular culture.” You might have been influenced by
Jameson’s work on Hollywood in Signatures of the Visible, or by the famous
critiques by Umberto Eco of Casablanca and by Baudrillard of Disneyland,
or by the work of Bowie, Matt Groening, or the Coen brothers; you might
have had a taste for films like Diva or the music of Philip Glass, which fuse
“high” and “popular” cultural traits. Such texts can be sites of resistance to
and subversion of hegemonic forces. In any event, they are central to a cul-
ture defined as a media-saturated hyperreality, where electronic represen-
tations precede experience and determine perception. Neither negligible in
themselves nor simply a means to a political end, they are the beating heart
of our contemporary text-drenched reality-system.3
These two responses correlate roughly to a modernist and a postmod-
ernist view; digimodernism might throw up a third reaction, one directed
by the actual content of today’s popular film, TV, and music. This would be
sympathetic to Adorno’s charges of worthlessness, standardization, and
anti-Enlightenment, but unable to retrieve his sense of a gulf between
“high” and “low” culture (though the ironic smile may have frozen into
a rictus). To describe a complex development simply, what was once
“popular culture” now consists overwhelmingly of a form of children’s
entertainment, to the degree that it has become difficult, if not impossible,
to make the sort of assumptions regarding its significance that became
commonplace in the heyday of Tarantino, Blur, Nirvana, and The Simpsons.
This argument does not depend, as might be imagined, on sustaining
a clear definition of and strong distinctions between labels such as “mass”
or “popular” or “high” culture (notoriously tricky ones). This is because the
shift I am tracing has occurred historically within each medium, and the
dimension of it that concerns me here is the new one, multifaceted enough
in itself. Throughout this section, I therefore sidestep the elusiveness
of “popular” and use it to denote an immediate cultural profitability (quan-
tifiable by accountants) together with a certain social pervasiveness. My
discussion does assume that it is ahistorical to deny en bloc the status of
“high” culture to texts produced or diffused by means of electricity, and
incoherent to use this criterion to distribute them according to a “low/
popular” and “high” duality. In short, classifying Trout Mask Replica and
The Dating Game as “popular culture” and Mrs Dalloway as “high culture”
turns electricity into an irrational fetish, traduces all extant senses of “pop-
ular,” and ignores what must be crucial to the distinction, these texts’ com-
plexity as signifying systems and their various sociological functions.
Most conspicuously, perhaps, American popular cinema and, by almost-
necessary extension, world popular cinema have become a subdomain of
children’s stories. Of the ninety movies appearing on the lists of the top ten
grossing films worldwide every year from 1999 to 2007, forty-five, or
exactly half, are children’s fictions. They include the top four in 2001 and
2004, the top six in 2007, and seven of the top ten in 2006 and 2007; they
encompass five Harry Potter episodes, three installments each of Star Wars,
Pirates of the Caribbean, Spider-Man, X-Men, and Lord of the Rings, and
seventeen cartoons including six by Pixar and four by DreamWorks.4 They
share certain features on the level of their content that can be taken as
characteristic of children’s stories, and have been since the category first
emerged in the mid-nineteenth century, although, for important reasons
that I’ll return to later, this category is neither a watertight nor an unchang-
ing one. Broadly speaking, children’s stories tend to emphasize or rely on:

- the foregrounding of children’s experiences, especially (but not only) when the infant has been separated from the structured stability of its habitual family unit and finds itself between semi-independence and a succession of protective and/or guiding surrogate-parental figures;
- a predemocratic conception of society: tales of kings, queens, princes, princesses; of self-appointed vigilantes operating outside of the police force and unanswerable to an elected government; of crypto-fascistic and sub-Nietzschean “super heroes” or “supermen”;
- similarly, the romanticization of vanished and violently sociopathic figures (dinosaurs, knights, pirates, cowboys) and the archaic in general (myths, mummies);
- preliterate, that is, visual modes of storytelling (cartoons, comics, videogames);
- an elision of the question of reproduction; sexualized love may be permitted, but in sublimated form;
- an elision of the world of work;
- a reflexive and yet inconsistent readiness to dispense with all known laws of nature and science (anthropomorphized animals, invented species, impossible ballistics, light-speed travel, outsize creatures, etc.); and
- a diminished interest in psychological plausibility or in interiority in general.

The list is not exhaustive, but it highlights the continuities between the
recognizable literary category of the children’s story and the dominant
form of contemporary American popular cinema. This argument may
seem old hat: since 1977 and the influential success of Star Wars, voices
have frequently been raised denouncing George Lucas and also Steven
Spielberg for “infantilizing” a Hollywood that subsequently turned its back
on complex and troubling social critique in favor of flashy simplicities for
the kiddies. But over the past ten years this development has taken on a
new character, one that renders such an indictment out of date. It’s clear,
already, that the traits I listed have become the default setting in terms of
content of all American popular cinema; they have spread right across the
board; perhaps eighty of the ninety highest-grossing movies referred to
above are mired in them. There’s a recurring tendency, for instance, to
fantasy, or to innocently juvenile sources of humor, or to pseudomythical
mumbo-jumbo; there’s a parallel erasure of adult experiences or actors
aged over thirty-five, and a marginalization of genres (war, musicals,
drama) that adults like—tellingly, the “woman’s picture” has given way to
the “chick flick.” In place of adaptations of Broadway plays or contempo-
rary literary novels, we get films made from comic books (Blade, etc.) or
videogames (Final Fantasy, etc.) or theme park rides (Pirates of the
Caribbean) or toys (Transformers); harder to measure, but arguably per-
ceptible too, is an emptying out from scripts of the cultural and human
knowledge accreted by age. While it is the most profitable of children’s
story movies that get the most flak from the middle-aged and older, many
are not especially successful; indeed, like westerns and musicals in the
1950s, they dominate popular cinema not just as dollar grabbers but sim-
ply as a proliferation of individual titles, whose infantilized content has
become its staple narratological diet. All told, though, the children’s story
movie is unimaginably lucrative, and doubtless the majority of viewing
hours spent in the world’s theaters is now devoted to gazing at the cine-
matic equivalent of Punch and Judy, conkers, and picture books.
Yet this development is not just to be condemned. The quality of these
pictures runs from terrible through bad and good to terrific: on one side
you have the bludgeoning stupidity of Transformers; on the other, the first
two Harry Potter films, superbly realized and satisfying popular entertain-
ments. Many of the Pixar and DreamWorks cartoons on the top-grossing
lists are masterpieces of their kind whose success indicates only a collective
good taste. It is oversimplistic to reduce these films to the ruination of
cinema. It is also wrong to accuse them of killing “serious” moviemaking:
taken individually, any one of them is the modern successor to the John
Wayne western or the MGM musical, forms made obsolete by postcolo-
nialism, feminism, and rock music; it’s spurious to suppose they’ve dis-
placed Taxi Driver or The Godfather.
Above all, while these films draw on the content of the children’s story,
many of them are shot with the speed, allusiveness, and impact of movies
for adults. Stylistically, these are grown-ups’ movies; their infantilism is
purely narratological (they’re not just “films for kids”); they’re too noisy,
dazzling, and confusing for very young brains to take in, while full-blown
children’s stories are visually too dreary, too slow and obvious, for anyone
other than children. As a result, movies like Transformers, The Dark
Knight, or Peter Jackson’s King Kong receive certificates preventing
preteens from seeing them. As with the jokes and references for adults
in post-Toy Story cartoons, what is going on here is a refurbishment of
children’s stories as material for the entertainment of young adults. Its core
consumers therefore show a giveaway state of semiaggressive denial of their
favorites’ infantilism, especially clear in adult Star Wars fans condemning
Ewoks or Jar Jar Binks. For the industry, this reconceptualization enables it
to amass sufficient numbers of customers to turn a profit on hugely expen-
sive investments (everyone either is or has been a child; it’s the sole true
universal), and it makes popular cinema seem young and interesting to a
new generation distracted by new technologies. But there is more, much
more to say here, and it will be said in both this chapter and the next.

As for the new redefinition of popular music as songs for children, here
is a list of artists, almost all of them purveyors of the kind of anodyne,
industrialized pap Adorno would have recognized: Backstreet Boys,
B*witched, Blue, Boyzone, Busted, Girls Aloud, Hear’Say, McFly, Kylie
Minogue, N-Sync, New Kids on the Block, S Club 7, Britney Spears, Spice
Girls, Steps, Take That, Westlife (and so on, and so on). The relationship of
songs for children to rock and pop has long been an awkward one. Prior to
Bob Dylan’s embrace in 1965 of electric music, pop was uncomplicatedly a
form for people too young to vote, and disparaged by almost everyone else.
In the Beatles’ movie A Hard Day’s Night, filmed in the spring of 1964, they
are shown playing exclusively to audiences aged sixteen or under; at one
point Ringo, wandering by a canal, strikes up a friendship of equals with an
eight-year-old. This reflected the contemporary cultural understanding of
pop, even if the Fabs’ songs at this time reveal an intriguing tension between
a monosyllabic and sexless childishness in their lyrics, and a sense in their
music of a grace and creative potential held in check. This applied also
to Phil Spector’s early “symphonies for the kids,” which married almost
Wagnerian musical ambitions with the lyrical experiences of (junior) high-
school students. Dylan was to inject politics, social criticism, drugs, poetry,
and late modernism into popular song, while the Rolling Stones brought
sex; suddenly pop’s demographic was caught up in a rush to artistic and
personal “maturity.” When in 1967 Scott McKenzie invoked “a whole gen-
eration/With a new explanation,” it had a clear upper age limit but, pro-
vided the explanation was accepted, not a lower one; “young people” could
then be seen as opposed en bloc to the squares, warmongers, and reaction-
aries in an idealistic and lifestyle-driven unity.
In the early 1970s a bifurcation occurred: younger listeners embraced
the Partridge Family, the Osmonds, Slade, or the Bay City Rollers, dismissed
as inferior pop junk by serious-minded “art rock” aficionados who extolled
the contrasting merits of Led Zeppelin, Genesis, Pink Floyd, and the like.
Between the two floated David Bowie, a figure several years ahead of his
time, whose true importance became apparent when, in the aftermath of
punk’s quest to revivify popular music as a mode of youthful self-expression,
a search began in Britain for a form of pop that would be both genuinely
widely appreciated and socially and politically radical. At various times
it briefly seemed that Adam and the Ants, Scritti Politti, Aztec Camera,
Culture Club, or Frankie Goes to Hollywood might play such a role; it was
the era of Ian Penman’s “War on Pop” article for the NME and of Simon
Frith’s book Art into Pop, both of which aimed to describe and to bring
about the kind of pop that was supposedly needed: one that was fun, imme-
diate, sexy, cool, but also intelligent, literate, politicized, and socially pro-
gressive. In its marriage of the market and disruption, this music would
take its place within the postmodern cultural-dominant. Pop’s moribun-
dity after Live Aid put such hopes on hold, though they flickered into life
again when the Stone Roses and Happy Mondays appeared on Top of the
Pops in 1990, and were resuscitated anew by the cultural pretensions, fame,
and competing aesthetics of Suede, Blur, and Oasis. If their death can be
given a date it might be January 20, 1997, when Blur released “Beetlebum,”
a single that declared their abandonment of mass market pop as a vehicle
for artistic expression. This notion had long separated British music
from American; if the United States could produce such artists it couldn’t
make them popular (though Madonna came closest), and its mainstream
remained dominated from the 1970s on by an ideology of authenticity and
musical tradition incommensurate with the throwaway, commercialized
smartness and ironic experimentation of post-Bowie pop.
In short, children’s song runs throughout the history of rock/pop as the
brutally marginalized antibody of the “real thing” or as the potential for a
postmodern reconciliation of art and commerce. Whether despised or
expropriated it remained indestructible, though, and as Britpop faded it
reemerged in a musical landscape now cleared of postmodern theory. The
Spice Girls’ first album Spice (1996) is as good an example as any of the
type. Its opening track “Wannabe” begins with a desperate clamoring for
attention followed by a bathetic failure to say anything of note that will
be familiar to anyone who has spent time with a five-year-old; its message,
that a prospective lover must fall in with the female’s friends, reflects a
prepubescent valuation of same-sex friendship (which “never ends”). The
track “Mama” is cloyingly infantile, while “2 Become 1” presents a vision of
sexual love so sublimated it can pass as an account of intense emotional
closeness as much as of carnal mingling. When the Spice Girls extolled
“girl power” it was not generally understood that the first word of the
slogan was to be taken in its primary sense, a confusion it shared with the
expression “boy band.”
In 2000 the coveted Christmas number one spot in Britain was fought
over by “Can We Fix It?”, the theme tune to a TV cartoon for preschoolers,
and Westlife’s “What Makes a Man,” an antiseptic “love” song for ten-year-
olds. If the former is better, relatively speaking, it’s partly because the latter
is sunk in denial of its infantile status: the singers emote like Sinatra on a
record bought solely from the salaries of people who would never choose
to play it. This form of denial is endemic to the genre: one of the Spice Girls
allegedly complained that it was tough during gigs, as they threw them-
selves about the stage doing their exhausting dance routines, to look out at
rows of sleeping children. Evidently she did not draw from this an accurate
assessment of their work: children’s song constantly seeks the kudos of
“real” pop’s past, of the Rat Pack, the Beatles, and Bowie. Consequently
there always seems to be a singer trying to pull off the professional matura-
tion the Beatles negotiated in 1965–66 from children’s entertainers to fully
formed stars: one moment a Britney Spears is in her school uniform and
dancing past her teacher, the next she is desperately trying to invent sex
in music.
In the 2000s, truly popular songs, ones that please (or even interest) a
reasonably wide cross-section of the public, have become rare. The default
setting of what calls itself pop is dedicated instead to selling a peculiarly
idealized version of young adult sexuality to girls not yet wearing a bra.5
It reflects the pedophilic nature of contemporary consumer culture, which
perpetually desires—in fashion, movies, TV, adverts, the Internet, songs—
to sexualize children. This is the version of popular music to which this
decade’s TV talent shows are in thrall, such as The X Factor and Pop Idol and
American Idol. The qualification for judges such as Simon Cowell, Simon
Fuller, and Louis Walsh is to have been successful with past children’s
song acts (respectively, Five, the Spice Girls, and Boyzone). Unconsciously
recuperating the 1970s’ contempt of rock aficionados for “manufactured
groups,” these shows, in their pure digimodernism, enable their audience
to manufacture its own stars. Given the choice, it’s children’s entertainers
they prefer to fabricate.

On TV, the multiplication since the 1980s of satellite, cable, and digital
channels has denationalized the medium: rather than make programs
ostensibly aimed at all age groups, classes, and tastes, channels have, on one
hand, chosen to provide only one form of content (music videos, films,
documentaries, etc.) or, on the other, to target only one kind of viewer.
The latter has led to a profusion of children’s and “youth” channels, from
CBeebies and Nickelodeon for the smaller ones to (in Britain) BBC3,
Sky1, ITV2, Channel 4, E4, and Virgin 1. The essential point here is that
there is little or no equivalent targeting of any other age group: the few chan-
nels with a remit for the “older” viewer fulfill it by rerunning ancient shows
and movies, so that contemporary program-making is understood over-
whelmingly as the provision of material for schoolchildren. “Youth” chan-
nels implicitly aim, it is true, at a 13–30 age range but, as the 18–30 segment
watches less TV than any other, their viewer tends to be at the younger end,
dragging their programs with it (more on that later). To this torrent of teen
entertainment needs to be added the specific content of the movies and
music channels, which can be deduced from the previous discussions.
An alien watching Britain’s television today would conclude that the
country’s demography had not changed since the Middle Ages. In fact, in
a social shift that will only intensify, Britons under eighteen were outnum-
bered for the first time in 2008 by those over sixty, at whom scarcely
a program (and certainly not a channel) is aimed. The controllers of such
channels tend to justify their narrowness with a rock ’n’ roll rhetoric
emphasizing “rebellion” and “generational conflict,” but the real cause is
brutally economic: in a country awash with credit card debt young people
are easier to part from their cash, and so reaching them that much more
lucrative for advertisers and, by extension, the channels themselves (so
much for Woodstock).6 Whether actual teenagers prefer to watch this
programming is moot, though such qualitative reflections are unimportant
for advertisers eyeing only ratings. High-school TV—for so it can be
labeled—foregrounds stories about petulant but glamorous teenagers
and their interesting confusions; it also favors fresh-faced twentysome-
things, fantasy witchcraft, space opera, “knowing” cartoons, “surreal” or
cruel comedy, and the latest blockbuster movies; conversely, it has no inter-
est in work, politics, or anything much beyond peer sociability, sex, and
consumerism.
As with the spread of traits from children’s stories across popular cin-
ema, the impact of high-school TV has been felt throughout the medium.
Older female presenters are pensioned off while men desperately try to act
a generation younger; news programs and documentaries are drained of
information and jacked up with pseudodrama (the “dumbing down” debate
is in truth mostly about infantilization); knowledge of the past is assumed
to be zero outside of GCSE staples like Hitler and Henry VIII; and interest
in “culture” is reduced to new movies and bands. The shift becomes starkly
apparent by juxtaposing two British sketch shows, the sophistication, com-
passion, and subtlety of The Fast Show (1994–97) and the simple-minded
obnoxiousness of Little Britain (2003–06); a similar change has taken place
in American cartoons from The Simpsons (1989– ) through South Park
(1997– ) to Family Guy (1999–2003, 2005– ).
It’s also visible in the evolution of three New York “zeitgeist” shows.
When Friends began in 1994 it sought the hip, knowing, and contemporary
territory mapped out since 1989 by Seinfeld, a program it became (misguid-
edly) fashionable at one time to describe as “postmodern.” Friends anchored
its six characters in their era: Ross, born in the late 1960s, refers in season
3 (1997) to a video he had made of the hostages returning from Iran (in
1981) and the last episode of M*A*S*H (1983), two events that had marked
his youth. This kind of cultural reference, frequent early on, was later
excised by writers who presumably realized, as the century ended, that
their core audience wasn’t even born in 1983. This gradual uprooting of
the friends from their context was reinforced over the years by a shift in the
characters and plotlines away from realism and subtlety toward infantilism
and simplicity. Each character is given a catchphrase and reduced to the
status of caricature; Joey regresses to almost preschool levels of ignorance
and naivety; Monica turns into a cartoon monstrosity, half-harridan, half-
OCD sufferer (we’re supposed to find both hilarious); and the shallow,
petulant, immature Rachel becomes the show’s normative moral center,
her “weird” girlfriends and “admiring” male friends objectified around her.
The ensemble structure, offbeat comedic rhythms, and observation of a
generation are jettisoned for predictability and story lines that belong
in a schoolyard, such as jealous fights and tearful reconciliations between
the girls. In an episode from season 7 (2000), Ross and Chandler venge-
fully reveal each other’s teenage secrets to Monica; when Chandler tries to
respond at one point Ross unironically crows in an exultant, sing-song
voice, “Wha-tever dude, you kissed a guy!” forgetting, perhaps, that he’s
actually a professor.7 Later seasons also favored flashback episodes to when
the characters were young, in the now-absurd 1980s.
By the end the cast members looked about thirty-eight and acted about
sixteen, a bizarre composite also inherent to the four women of Sex and
the City (1998–2004), who have the interesting and well-paid jobs and
consumer lifestyles of their own age group, and the sexual attractiveness
and wardrobes of females young enough to be their daughters. They could
therefore become objects of fantasy identification for their (junior) high-
school viewers as well as for their peers; in one episode Carrie meets some
prepubescent fans who treat her writings as a lifestyle guide and is, disin-
genuously, appalled. Seinfeld, in which early middle-aged characters like
Kramer and George dressed, spoke, felt, and behaved pretty much their
age, had been left far behind.

This tendency is apparent too in digital media. Videogames drawing on
movies noticeably rely almost solely on boys’ genres: sword and sorcery,
martial arts, space opera, action movies, super heroes, and so on (they
don’t make videogames resembling anything by Kieślowski or Wong
Kar-wai). Web 2.0 also favors under-eighteens, not only on chat rooms but
also on Wikipedia where the most abstruse of topics finds itself hijacked by
children’s entertainment. The article on Magi, for instance, begins with
Zoroastrianism, Herodotus, Axel Schuessler, and Bhavishya Purana before
coming, in defiance of all protocols for the editing of encyclopedias, to:

The Magi are the three super-computers, Melchior, Balthasar and
Casper, that appear in the anime series Neon Genesis Evangelion . . .
In the game Chrono Trigger, the Gurus of Life, Time, and Reason
are named Melchior, Belthazar, and Gaspar. Magus is also the name
of Frog’s Arch Nemesis . . .
In the game Warhammer 40,000, the term Magos is used to describe
a high ranking official of the Adeptus Mechanicus, a para-religious
cult dedicated to technology . . .
The guards of Imhotep’s tomb in The Mummy and The Mummy
Returns are actually an ancient group of people called the Medjay, but
people often mistake them for the Magi.
In the visual novel Fate/Stay Night by Type-Moon, characters who
are proficient in sorcery often refer to themselves as magi.8

This is placed under the rubric of “popular culture.” While Adorno would
bridle at the latter term, the former seems especially unjustified to me: in
reality they’re just a subsection of that sliver of electronic textuality beloved
of sixteen-year-olds.
Why has this happened? First, it’s important to be clear about the dispa-
rate forms of this shift. Popular film has embraced children’s stories reshot
for young adults; popular music has become the semisublimated packag-
ing of adult sexuality for young children; popular TV has increasingly
chased after the 13–18 age group either through content about that demo-
graphic or by dragging toward it material whose focus lies elsewhere. Videogames
and Web 2.0 show a similar bias, though they are too recent to have under-
gone a historical transformation. Some of the reasons, alluded to already,
are specific to the evolution of each medium. You can’t ignore more general
social changes either, from the greater readiness of parents to select family
outings with their smallest children in mind to the heightened indepen-
dence of young teenagers. More broadly still, it can be argued that society
has been infantilized, particularly through a consumerism that fetishizes
spending and sees work as an irrelevant burden: the sports team or the
shopping mania of some men and women, both consumerist, can be linked
to a failure of maturation, an economic refusal to outgrow childhood pre-
tending, play, and dressing up. Assuming ignorance of the law, govern-
ments oblige public places to post signs prohibiting smoking (though not
yet ones telling you not to murder); public bathrooms remind you to “now
wash your hands”; trains advise passengers not to forget their belongings
as though too young to travel alone. A whole book could be devoted to
such phenomena.
More relevantly here, the shift is both symptom and result of the
eclipse of postmodernism. If it’s old hat to make cultural material from
postmodernism or from the forms it itself made obsolete, then the void has
to be filled with something else; and it is precisely postmodernism’s take
on such materials that has become out of date. The notion of “popular
culture”—of texts so complex they deserve the name “culture” and addressed
to such a wide demographic they merit being called “popular”—is integral
to a postmodernity that defined itself as the aftermath of the avant-garde.
I would argue that film, music, and TV decreasingly justify the label of
“mass” or “popular” culture. Such a category is becoming a thing of the
past; the texts I’ve been exploring were made for and are enjoyed by niches,
and they have no wider resonance or influence—they’re ghettoized.
What’s mistaken today for “popular culture” is in fact a narrowly focused
children’s entertainment plus “noise”: relentless comment-cum-promotion
from surrounding media outlets (newspapers, magazines, the Internet,
etc.). This energetic support system conceals the fact that, for instance,
fewer people will watch Transformers than the BBC’s religious affairs
program Songs of Praise (average viewer age over sixty). Transformers is
“noisy”; Songs of Praise is “quiet”; neither is “popular.”
It was possible to speak of popular culture in the days of Madonna or
Seinfeld, and inevitable in the age of the Beatles or The Sound of Music or
I Love Lucy. Today, the pressure of cultural fragmentation has made this
terminology inadequate, and the decline in the numbers of people going to
the movies, buying music, or watching TV is partly attributable to this
closing in of cultural content. Twelve-year-olds can (and did, in the 1930s
in cinema, the 1970s in TV) enjoy what fifty-year-olds like, but the latter
will not embrace what the former rave about. Postmodernism, of course,
delighted in the end of cultural universalism, ironically saluting with one
hand what would cut off its other. Economic forces did much of the rest;
rock music’s hegemonic ethos, which will be discussed in Chapter 6, also
rushed in to succeed the crippled king.
Writing this, part of me warned that these arguments were unoriginal,
that “everyone knows this.” I carried on in part because, while journalists
and media professionals (or former professionals) frequently make similar
points, they are not recognized at all by two critical cultural arbiters: the
producers (TV channels, film studios, record labels, etc.) and academia. It’s
not in the commercial interests of the former to acknowledge such devel-
opments, since to do so would block off the adult income stream and dis-
credit the product in the eyes of younger customers; and so—as happened
while I was writing this—Boyzone play an evening gig at a racecourse near
Oxford to an audience all old enough to have spent the day gambling on
horses: to admit to the nature of what they sang and how (and normally to
whom) would have lost them that booking. Academia faces other obsta-
cles, less tangible. It took universities a long time to engage seriously with
texts dependent on mechanical and electronic reproduction, and post-
modern cultural theory, which provided a prism through which they could
be theorized and understood, allowed them also to be newly valorized.
In the Cambridge Companion to Postmodernism, published in 2004, Cathe-
rine Constable devotes six pages to a glowing review of John Woo’s Face/
Off, a cornucopia of cinematic sadism and slaughter that is appalling in
just about every way a narrative can be.9 For such a study, or those made
of the high-school TV show Buffy the Vampire Slayer, this shift raises
two questions. If the prism becomes obsolete, what becomes of the valori-
zations it generated? And, if “popular culture” is no more, what happens
to “high culture” and the postmodern interplay of the two? That Adorno
has recently come back into theoretical fashion may be telling in this
regard.
However, there is a different way of interpreting this shift. As well as
instantiating the retreat of postmodernism, it can be seen as the troubled
gestation of a new form of narrative, as symptoms of a restructuration
of narrative for a digimodernist textuality. This fascinating possibility will
be explored in segmented form across this chapter and the next, but some
initial points can be made here. In the 1970s or 1980s it was common
for thirteen- or fourteen-year-olds to read adult popular fiction, like Isaac
Asimov, Georgette Heyer, Frederick Forsyth, or Agatha Christie. In the
wake of the crossover success of J. K. Rowling’s Harry Potter novels
(1997–2007), for a time children’s novels became the popular fiction of the
world. Thirty percent of the sales of the first three books were to and for
readers who were thirty-five or older,10 and they topped the bestseller lists
of both adults’ and children’s fiction. Philip Pullman’s His Dark Materials
trilogy (1995–2000) and Mark Haddon’s The Curious Incident of the Dog
in the Night-Time (2003) achieved the same commercially successful and
unprecedented reformulation of the relationship of children’s literature to
adults’. All are children’s stories by the traits I outlined above; and yet these
characteristics are not set in stone, nor is the distinction between child
and adult texts a black and white one. Usborne Books publish versions for
six-year-olds of Jason and the Golden Fleece, the Arthurian sagas, Robinson
Crusoe, and Gulliver’s Travels, all written before the category of “children’s
fiction” was invented and all considered great “adult” literature. It is, I think,
to texts such as these that Rowling and Pullman look.
Rowling’s first novels in the series are postmodernist pastiches of
previously existing children’s narratives, scraps stitched together under
her new generic hybrid: the semirealist English boarding school fiction
(Tom Brown’s Schooldays, Enid Blyton’s Malory Towers and St. Clare’s, etc.)
with the fantasy tradition (wizards, dragons, unicorns, magic potions, etc.).
The characters spring from two sources: Anthony Buckeridge’s 1950s–60s’
Jennings novels about an adventurous prep schoolboy and his deferential
best friend; and Blyton’s Famous Five books, where roaming kids investi-
gate shady dealings—Hermione is a composite of George (independent,
skilful, robust) and Anne (diligent, feminine, anxious) for a postfeminist
age. It is, indeed, always the late 1950s/early 1960s here: the children travel
by steam train and Ford Anglia, sit silent and fearful in classroom rows,
and receive letters from home. But it is also the past of all children’s stories:
when, at the end of Harry Potter and the Sorcerer’s Stone (1997), the gigan-
tic dog is sent to sleep by playing it music, thereby accessing the epony-
mous treasure, Rowling winks to Jack at the top of his beanstalk playing the
harp to knock out the giant and steal his goose that lays golden eggs. Like
works of late postmodernism such as Chicken Run or Shrek, the Sorcerer’s
Stone and the Chamber of Secrets read like parties held by a clever and witty hostess
where previous texts can frolic, Billy Bunter with Jason and the Argonauts
(who also do the creature/music/treasure trick) and Swallows and Amazons
with Ali Baba (who also unlocked doors with Latinate gibberish).
The later, longer novels attempt, unsuccessfully I think, a move away
from pastiche and irony to a self-sustaining mythological world that can
be seen as tentatively digimodernist. The backward-looking familiarity
of the shorter, earlier works, which made them amusing and exciting but
also immediately nostalgic, ebbs away; the stories also become engulfed in
their other digimodernist innovation, the seven-book series (see the final
section of this chapter), again, I feel, ultimately unsatisfactorily. Because they
were so much of their time, it’ll be interesting to see how they survive (I’m
not an optimist).
Pullman’s His Dark Materials, however, breaks with the postmodernist
burden. While there is no doubt that Pullman anchors his trilogy in the
traits of children’s stories I sketched above, he refurbishes them by justify-
ing their use through adult science: the many-worlds interpretation of
quantum mechanics, underwritten by chaos theory applied to biology (by
which reruns of evolution would produce vast developmental differences).
In other words, the books’ generic infantilism is reinvented as a fictional
exploration of ideas associated with Hugh Everett or Stephen Jay Gould.
Validated by the adult world of advanced science, the trilogy counts among
its intertexts two pieces of adult science fiction, Keith Roberts’s Pavane
(1968) and Kingsley Amis’s The Alteration (1976), which also posited a
twentieth century derived from a sixteenth where the Reformation (and so
the Renaissance, the scientific revolution, industrialization, and European
colonialism) was rolled back or never happened or turned out very differ-
ently. Powered by adult thought and an adult richness of expression, the
trilogy addresses mighty issues: the structure of the self, the form of the
cosmos. Finally, the distinction between children’s and adults’ stories dis-
appears: in terms that Homer, Defoe, or Swift might have recognized (and
theirs are the precursors to Lyra’s adventures), this is just literature. (This is
not to set Pullman at their level necessarily.)
Like Gulliver’s Travels, His Dark Materials is a work of lacerating social
and moral criticism couched in the mode of exciting adventure. Bolvangar,
where children are violently deprived by the Church of their “daemons” or
souls, is a critique of religious education as a process of dehumanization
driven by a fear of sex (or original sin, or “Dust”). A remote set of buildings
run by strict adults where children are drilled and cowed, exercised and fed
in packs, it resembles a (satirized) boarding school; lost in frozen wastes
where no birds or animals come, and full of kidnapped people terrified
of the nameless and “scientific” cruelties to which they are systematically
subjected, Bolvangar also evokes Auschwitz. The assault on religious edu-
cation that Pullman buries in The Golden Compass is a devastating one,
and pure adult fiction: intertexts might include Ken Kesey’s One Flew over
the Cuckoo’s Nest.
His Dark Materials strikes me as unlike anything written to that point; it
seems to seek to renew fiction itself out of a wholly new engagement with
children’s literature (whether it succeeds is another matter). For every echo
of children’s stories it strikes (the Pied Piper of Hamelin hovers over the
activities of the Gobblers), it shows adult traces too (Abraham’s sacrifice of
Isaac at the climax of The Golden Compass). It too clearly builds toward
self-sustaining mythology; conversely, The Amber Spyglass was short-listed
for the (adult) Booker prize as, simply, a novel. The essential point here is
that, being the first texts we encounter, children’s literature trains us to
read, to evaluate and make sense of adult literature: it teaches us what
literature is. What seems to be happening in the hands of Rowling and
Pullman is an attempt to redefine the nature of narrative and the novel by
going back to their sources: to the first writers of both (Homer, Defoe, Swift),
to the first-encountered examples of both (children’s fiction). And this is
symptomatic of something more seismic.
Briefly, a parallel case can be made for Peter Jackson’s Lord of the Rings
film trilogy (2001–03), equally unprecedented and a renewal of cinema
from out of the traits of children’s stories I listed above. Jackson’s fusion of
CGI and real settings, of virtuoso camera work and thrilling music is an
overwhelming and seminal achievement. It too embeds a shift from adven-
ture to mythology, and consequently toward narratives that break out from
both realism’s bankruptcy and postmodernism’s antirealist impasse.

The Rise of the Apparently Real

The apparently real, one of digimodernism’s recurrent aesthetic traits, is so
diametrically opposed to the “real” of postmodernism that at first glance it
can be mistaken for a simple and violent reaction against it. Postmodern-
ism’s real is a subtle, sophisticated quantity; that of digimodernism is so
straightforward it almost defies description. The former is found especially
in a small number of advanced texts; the latter is ubiquitous, a consensus,
populist, compensating for any philosophical infirmity with a cultural-
historical dominance that sweeps all before it. And yet there are also signs
that the apparently real is beginning to develop its own forms of complexity.
For postmodernism, there is no given reality “out there.” According
to Baudrillard: “[t]he great event of this period, the great trauma, is this
decline of strong referentials, these death pangs of the real and of the ratio-
nal that open onto an age of simulation.”11 The real is, at best, a social con-
struct, a convention agreed in a certain way in a certain culture at a certain
time, varying historically with no version able to claim a privileged status.
Invented, the real is a fiction, inflected by preceding fictions; if the real is
something we make up, it has also been made up by others before us.
In Cindy Sherman’s celebrated series of photographs Untitled Film Stills
(1977–80), a solitary woman appears in a variety of urban settings in
what seem to be images from 1950s–60s’ movies: this one surely is from
Hitchcock, that one must be Godard, doubtless this other something by
Antonioni. But which movies? You can’t quite remember . . . Of course, this
woman, variously dressed, wigged, and made up, immersed in her narra-
tives of anxiety and ennui, alienation and off-screen perversity, is always
Sherman herself; the photos can be seen as self-portraits of unreal selves.
The films don’t exist; the “real” here is a movie, and not even a “real movie”
at that. The photos are fictions, or, rather, they are fictive fictions, invented
fragments of what would be, if they existed, inventions. The plates of the
real shift; “[t]here are so many levels of artifice” here, as Sherman herself
says, and what is finally represented is the act itself of representing a woman,
or a woman’s historicized act of self-presentation, in an ontological hall
of mirrors redeemed by Sherman’s wit, her subtlety, and exhilarating
feminism.12
As a result, to believe in a reality “out there” becomes a form of paranoia,
the unwarranted ascription of meanings to a universe that cannot bear
their load. Oliver Stone’s film about the Kennedy assassination, JFK (1991),
mixes historical footage with fictional material shot thirty years later to
propose a welter of conspiracy theories explaining what “really” happened
in November 1963. If the textual real is a mishmash of manufactured film
sources, all equal, the functioning of the “real world” is inevitably going to
wind up seeming overdetermined and paranoid. Pynchon’s The Crying of
Lot 49 (1965) follows Oedipa Maas’s quest, similar in some respects to that
of Stone’s Jim Garrison, to uncover the “truth” about what appear to be
secret activities cascading through American life. She finally arrives at four
possible conclusions: that there really is a conspiracy out there, or that she
is hallucinating one, or that a plot has been mounted against her involving
forgery, actors, and constant surveillance, or that she is imagining such a
plot.13 Pynchon doesn’t resolve these multiple and incompatible versions of
the “real.” Other postmodernist novels and films, like The Magus, Money,
The Truman Show, and The Matrix, would also dramatize fabricated reali-
ties involving professional actors and round-the-clock surveillance, and
yielding similar interpretive options.
The aesthetic of the apparently real seems to present no such predi-
cament. It proffers what seems to be real . . . and that is all there is to it.
The apparently real comes without self-consciousness, without irony or
self-interrogation, and without signaling itself to the reader or viewer.
Consequently, for anyone used to the refinement of postmodernism, the
apparently real may seem intolerably “stupid”: since the ontology of such
texts seems to “go without saying,” more astute minds may think they cry
out for demystification, for a critique deconstructing their assumptions.
In fact, the apparently real is impervious to such responses. While it’s
true that a minimal acquaintance with textual practice will show up how
the material of the apparently real has been edited, manipulated, shaped
by unseen hands, somehow as an aesthetic it has already subsumed such an
awareness. Indeed, though paradoxically and problematically, it seems to
believe it has surmounted Sherman’s and Pynchon’s concerns, perhaps
considering them sterile or passé. In 2007 it emerged that a number of
apparently real British TV shows had in fact undergone devious trickery
at the hands of their production companies or broadcasters. Newspapers
reported this as “scandal,” the supposed betrayal of their audiences, while
TV insiders explained that this aesthetic’s reality was only apparent, as its
name suggested, not absolute; viewers, unfazed, carried on watching them.
The apparently real is, then, the outcome of a silent negotiation between
viewer and screen: we know it’s not totally genuine, but if it utterly seems to
be, then we will take it as such.
In truth, apparently real TV, such as docusoaps and reality TV, has to be
considered “real” to a decisive extent to be worth spending time on. Its
interest derives from its reality; reject the latter and you lose the former.
The reality in question is narrowly material: these are genuine events
experienced by genuine people; these are actual emotions felt by actual
people. It’s a shallow, trivial reality, the zero degree of the real: the mere
absence of obvious lying; hence the importance of “appearance” within the
aesthetic, of visible seeming. This supremacy of the visual makes the aes-
thetic’s natural environment television, film, and the Internet; the triumph
of appearance carries it beyond the true/false dichotomy and the wrought
“fictiveness” of Weir or the Wachowski brothers.
The difference between the docusoap and reality TV, genres born in the
1990s, is not clear-cut, nor is it significant here. (Reality TV is sometimes
distinguished by its celebrity participants or its pseudoscientific premises.)
The child of the traditional documentary, the docusoap inherits all the
truth of a form once defined in opposition to TV fiction (sitcoms, drama);
to this it splices, in accordance with its name, the stuff of soap, of
“ordinary” life—these are true accounts, then, of everyday experience.
People are filmed at work, on vacation or at home doing nothing very
special; everything that is most recognizably stressful or tedious about con-
temporary life—learning to drive, getting married, renovating or cleaning
or buying houses, checking in at airports, disciplining small children—is
foregrounded. These semiuniversal (hence “ordinary,” that is, “real”) situa-
tions are portrayed from the perspective of “ordinary” people, the suppos-
edly humdrum individuals embroiled in them. This personalization and
apparent intimacy are intended to convey an interior reality corresponding
to the banally genuine exterior.
In either case, the digimodernism of reality TV and the docusoap is
clear: the participants improvise the immediate material. Such shows create
structures and manage recording processes around essentially extempo-
rized content. They present haphazard material, captured and molded by a
etc.) monopolizes the creative roles; apparently real TV hands over the
etc.) monopolizes the creative roles; apparently real TV hands over the
writing and direction—the fabrication of dialogue, the choice and sequenc-
ing of actions—to the wit, moods, and duties of the people taking part.
As the production company don’t back off completely, the “reality” can
only be apparent (what would’ve happened had they not been there?14);
and yet the direction in which the content of the show will move genuinely
does become haphazard in a manner similar to the openness of a Web 2.0
text.
Web 2.0 depends so critically on the apparently real that it gives a name
(“trolls”) to those who reject it. Wikipedia, message boards, and social net-
working sites clearly require, in order to function at all, a level of sincerity
in their users (impossible to measure objectively). Writing what you don’t
believe or know to be untrue defeats the object of these sites. The appar-
ently real is prevalent on amateur YouTube clips, and underpins blogs:
“Honest blog writing is universally cited [sic] as a requirement of the
genre . . . all bloggers demand attempted truthfulness at all times.”15 Indeed,
newspapers, in a familiar move, have highlighted the “scandal” of the
“sinister” machinations of businesses or institutions to pass themselves
off online as “real” (or “viral”). The exception to this reliance might be
chat rooms, where fictive selves wander free, but even they have a pressure
toward encounters in the “real” world that imposes on participants a
permanent engagement with the appearance of their authenticity. In the
world of the performing arts, David Blaine’s shift from “conjurer” of fabri-
cated, “magical” realities to the subject of apparently real feats of physical
endurance is emblematic of the spirit of the times.
The apparently real may be thought such a naïve and simple-minded
aesthetic that it vitiates any text it dominates, and examples of this can be
found. Jackass, in both its TV and film formats, deploys the aesthetic as
a kind of inverted pornography: instead of young people performing plea-
surable acts for the (erotic) delight of watchers, Jackass has them perform
agonizing ones for the (comedic) pleasure of its viewers. To gain any enjoy-
ment from watching it’s necessary to believe in the reality of its set-pieces;
moreover, it’s probably essential to feel that this reality outweighs any other
consideration. At one point in Jackass: The Movie (2002) a cameraman
genuinely throws up on-screen; the guys roar with laughter, doubtless
because their aesthetic creed states that any actual, filmed physical suffer-
ing must be hilarious. This is the apparently real as personal degradation.
Indeed, the aesthetic has often been exploited to record the harassment
of members of the public; along with Jackass and myriad prank shows,
perpetrators of “happy slapping” attacks, where cell phones are used to film
actual assaults on people for the later amusement of viewers, are also fond
of this. The apparently real can in such cases become no better than a
guarantee of suffering.
More rewardingly, I can think of at least three masterpieces of the
apparently real. One of them, Daniel Myrick and Eduardo Sánchez’s film
The Blair Witch Project (1999), appeared so early—only weeks after The
Matrix—it was probably conceived by its makers as postmodernist horror
in the style of Scream: explicitly cine-literate and self-reflexive, it fore-
grounds its own (ostensible) making like a filmic Beaubourg and, with
interpretation of its main events radically undecidable, privileges instead
its acts of representation, its shooting. Shifts between color and black and
white constantly remind us that what we are seeing is a created text. Yet its
sense of the real is Janus-faced: made for an initial outlay of $22,000,
its marketing was orchestrated for free on the Internet by means of planted
speculation that the events it shows “really happened,” while an alleged
“documentary” on the events (also by Myrick and Sánchez) was screened
on the Sci-Fi channel. The film itself begins with the caption: “In October
of 1994, three student filmmakers disappeared in the woods near
Burkittsville, Maryland while shooting a documentary. A year later their
footage was found”—and supposedly pieced together by the directors—
so the film passes itself off throughout as real. In consequence it offers no
explanation for what happens to the students, though lots of suggestions,
and it stops rather than ending; when I first saw it just after release its
famously devastating final shot was followed by darkness, silence, and the
lights of the theater coming up . . . that was where the tape had run out.
As with amateur YouTube clips, docusoaps, and reality TV, the apparent
reality of the footage is conveyed by its awkwardness in comparison to
Hollywood technique: blurred images, wonky framing, self-consciously
wooden “acting” early on (things get more raw in the woods), natural light-
ing, choppy editing, periods of total darkness, handheld camera shake, dis-
torted angles, underwritten “character,” inarticulate “dialogue,” and so on.
The students film in happier times a couple of staged scenes for their docu-
mentary, which become a benchmark of professionalized “fakery” against
which their amateur “truth” seems even truer. Apparent reality is so textu-
ally embedded in The Blair Witch Project it survives on to the DVD, where
a deleted scene is labeled “newly discovered footage.” Yet the film isn’t
a hoax that you can “see through.” Instead, its narrative concerns the
apparent emergence into reality of what had previously been considered
“legends” and “stories”; it depicts the gradual passage of what the students
are investigating from the status of “tale” to bizarre and enigmatic truth.
As a result the film’s dominant motif is the ambiguous appearance of
“reality” itself. Hence the suspended ontology of the final shot, explicable
but impossible, intelligible but imponderable. The film therefore holds on
extratextually (in its marketing, packaging, etc.) to an apparent reality its
own textuality has generated.
This in turn derives from the circumstances of the film’s shooting.
Heather, Josh, and Michael (really their names) really did get lost hiking
in some woods, and were harassed and scared at night (by Myrick and
Sánchez); they improvised the dialogue as though in reality, genuinely
carried the equipment and shot nearly all the footage (later really edited
by the directors); they were given less and less food during the eight days
they were out there to incite genuine discord among them. The effect, in
short, was to underpin the film’s textual apparent reality with the shoot’s
near-reality.
As for Ricky Gervais and Stephen Merchant’s TV series The Office
(2001–03), Ben Walters rightly traces its aesthetic to two forms of televisual
storytelling increasingly in vogue since the 1990s: naturalism (The Royle
Family, the Alan Partridge vehicles) and vérité (ER, The Larry Sanders
Show). Counterparts of the docusoap and reality TV, both bore witness to
the growing importance of the narratological “real” in TV fiction without
directly addressing the issue to any significant extent. The Office owes much
to the aesthetic of the docusoap; it looks like a TV program about everyday
life in a dreary workplace, intimately shot, and its final episodes draw on
the idea that the earlier ones have now been aired, such that new characters
recognize David Brent as “that awful boss” from the BBC2 show.
What distinguishes The Office from any of its influences, however, is its
use of a technique by which characters’ eyes frequently move toward the
filming lens, but not “as an echoing exercise in postmodern referentiality.”16
Instead, these eye movements, which can be voluntary or involuntary, open
or furtive, and in their duration range from almost-imperceptible flickers
through glances to actual looks, constitute the camera as an implicit, silent
character. In short, they characterize the camera; or rather, as we never see
or hear the show’s (fictional) makers, they characterize and fictionalize
you, the viewer. Tim looks to you appealingly, as an ally in his war of intel-
ligence and sensitivity against Gareth’s stupidity and boorishness; Brent
looks to you deludedly, as an “admiring” audience for his supposed toler-
ance and comedic brilliance; myriad characters look toward you embar-
rassedly, in shared solidarity or even guilt, as the implicated witness of the
cringe-making mess that they themselves are unwillingly part of. Each of
these glances attributes to you a character, a personality, a certain level
of sophistication and social awareness, a certain set of post-PC values, or
a certain opinion of what is happening. Never before on TV had characters
looked to the viewer in such a way as to invent him or her both as a physi-
cal presence and as an individual; and the implied viewer’s personality,
background, and social values are decisively outlined by the timing and the
nature of these glances—for instance, the viewer is, or you are, constituted
as left-liberal by the behavior of the characters toward you as undeniably as
Gareth is made to be right-wing. You are written, fictively, as the show’s
normative but compromised moral center, with views close to those of Tim
(but without his self-loathing) or Neil (without his bland acceptance of
capitalist brutalism). This isn’t to say that “we identify” with characters;
it’s not “we” whom the show produces, it’s you, and it produces you as a
character catapulted into the scenes, planted—like some hologrammatic
projected presence—within them. This is why the show can be so excruci-
ating: because we feel what’s happening as though we’re present at it, as if
it’s actually occurring—objectively, it’s no more excruciating than any other
successful sitcom.
The Office is therefore structurally unique: while the circumstances of
its filming were entirely fictive (unlike The Blair Witch Project and Borat),
it appears to be real less through its handheld cameras, low-key acting, and
semiplotlessness (its techniques and content) than through the role it
invents for its viewer, or seems to. In cultural-historical terms Gervais and
Merchant subsequently went backwards: Extras (2005–07) really is an
echoing exercise in postmodern referentiality, a play of media signifiers
and irony on a flattened surface that critics appreciated but not the public.
The Office, by contrast, opens up an additional comedic dimension: while
keeping its viewer actually at bay (there’s no breaking of the fourth wall, no
awareness or disruption of the ontological gulf separating viewer and
events), it gives him or her a wholly implicit identity and intense, but actu-
ally illusory, “direct” experience of the action, as though s/he were really
there.
The third masterpiece, Sacha Baron Cohen’s Borat: Cultural Learnings of
America for Make Benefit Glorious Nation of Kazakhstan (2006), is another
complexly woven tissue of apparent realities. Baron Cohen had first
appeared as Ali G, a white West Londoner in love with African-American
hip-hop and a mythical Jamaican outlaw lifestyle: “For real” was his—very
ironic—motto of approbation. Even on his own fictive terms Ali G was
bogus; feeling slighted he would ask: “Is it cos I is black?” as if in denial of
the actual color of his skin. Though some critics charged Baron Cohen
with a racist lampooning of black culture, Ali G mocked instead a white
appropriation of “blackness” as old as, and to a great extent identical with,
rock ‘n’ roll. The character can be read as a satire on the fictions, imperson-
ations, and cultural mystification that have long underpinned the recep-
tion of American youth culture; Ali G was to Staines what Mick Jagger
had been to Dartford (both inevitably wound up in the United States).
In his openly filmed debates with middle-aged representatives of official
institutions or the bien pensant liberal orthodoxy, he would appear as a
fictive invention, they as themselves (a polarity integral to Baron Cohen’s
humor, though alien to postmodernist theories of self). In these discus-
sions he would push as far toward the margins of ignorance, stupidity, sex-
ism, homophobia, and criminality as he could get away with; misidentifying
his persona as apparent reality, his guests, though ever more affronted, let
him do so.
Borat, however, gave Baron Cohen a more dangerous and relevant
target for his satirical venom: the United States itself or, more precisely, that
side of the United States that had repeatedly led it into military action in
the Middle East (these really are “cultural learnings of America”). Borat the
character was also born on British television but found his true purpose
the other side of the Atlantic. In the film he is presented from the outset
as the fictive embodiment of the most insultingly regressive stereotypes
about the Middle East, in order to draw from the Americans he meets the
expression of those actual prejudices of theirs that underscored the war
in Iraq. “Kazakhstan” here is no more than a lightning rod, deliberately
chosen as an almost-unknown (in the eyes of his targets) but vaguely
Middle Eastern piece of land (as one of the film’s writers noted, real Kazakhs
look nothing like Borat). Officials from Kazakhstan reacted with fury to
the film, castigating it as lies and abuse; in doing so they were responding
to one level of its apparent reality without recognizing the subtlety with
which Baron Cohen deployed it. I don’t think for a moment that Baron
Cohen had any interest in “genuine” Kazakhstan: it’s a fictive construct
that the movie depicts, a spurious racist prejudice designed to elicit the real
racist prejudices of genuine Americans. It is then a bold and politically
radical piece of work, a devastating assault on actual American ignorant
primitivism that conceals its anger and brilliance behind an entirely bogus
presentation of invented Kazakh ignorant primitivism. A polemical study
of one aspect of contemporary Western orientalism, Borat unmasks
through its fictions the true system of values—the anti-Semitism, the
assumed cultural superiority, the bloodlust, the fear of the other, the paro-
chialism, the naivety, the certainty, the unthinking patriotism, above all
perhaps, and most disturbingly, the blind and empty desire to “help”—
which makes possible, even inevitable, American attempts to control, colo-
nize, and “save” countries like Iraq. The film gives us, then, a fictitious self
from a spurious country in real situations with actual people, playing at
bogus clowning in order to dramatize some—alleged—historico-political
truths.
There are three concomitant observations that can be made about the
textual functions of the apparently real: its deployment of a (pseudo)sci-
entific discourse; its engulfing of the self (“addictiveness”); and its immer-
sion in the present.
The postmodernist real favored a rhetoric of the literary: since the real
was a fiction it made sense to read, to decipher it; similarly, it was concep-
tualized as written, created as an aesthetic object. The literary became the
metaphorical model for interpretation through the text’s supposedly fictive
ontological status. The digimodernist turn toward a scientific discourse-
repertoire is audible in the evening highlights shows during a run of Big
Brother, where clips frequently start with a voice-over solemnly intoning
something like: “Day forty-seven in the Big Brother house” or “11.07 p.m.
Dale, Bubble, and Mikey are in the bedroom. It is forty-three minutes since
the argument in the kitchen.” This is the discourse of laboratory research,
where records of results are kept carefully documenting dates, times, places,
and the identities of participants. The function of this log-keeping is con-
firmed by Big Brother’s use of a resident academic psychologist whose role
is to interpret the program’s human interactions as though they formed
part of some experiment s/he was conducting. Elements of the show’s for-
mat, such as the isolation and continuous observation of the subjects being
studied, do indeed suggest a putative experiment. Other docusoaps and
reality TV shows have adopted this research-lab structure, adding to the
isolation and surveillance a third essential feature, the introduction, whereby
a foreign body is placed inside the observed environment to see what abre-
actions (explosions? assimilations?) would ensue. Wife Swap (2003– ) is
perhaps the most successful of such programs, and ends each time with
an analysis of “results” as if a genuine experiment had taken place, leading
to an advance in human understanding. Provided it was alien to its new
surroundings anything could be introduced anywhere, with “interestingly”
unpredictable and filmable consequences; and so classical musicians were
trained to perform as club DJs, regular families were inserted into the
lifestyle of the Edwardians, and TV professionals dressed and ate as if in
the 1660s.17
Though such shows adopted some of the methods and the language
of anthropological or historical or sociopsychological investigations, it’s
unlikely that any finally made a contribution to knowledge. By the stan-
dards of actual scientists, the “experiments” were inadequately prepared
(insufficient samples, contamination of participants, no control group, etc.),
while some of the “experts” interpreting the “results” seemed of dubious
academic authority. In That’ll Teach ’Em (2003), a documentary series
made by Channel 4, a group of high-achieving teenagers was placed in an
isolated house and subjected to the practices of a 1950s’ private school:
heavy uniforms, draconian discipline, period English food, daily organized
sports, separation of the sexes, science practicals for the boys (stinks and
bangs) and home economics for the girls (cooking), ferocious exams, and
so on. They were filmed for a month and at the end the “results” studied:
the boys had fallen in love with science, they all hated the food and the
uniform, each had lost on average about seven pounds in weight, they
seemed happier and more natural, they had mostly failed the exams, and
so on. Though fascinating and suggestive in itself, the show did not, as
educationalists hastened to explain, actually produce any usable research
findings: the discourse and rhetoric of the scientific experiment had been
only that.
The number and variety of programs during the 2000s ringing changes
on the tropes of the experiment (isolation, observation, introduction,
results, experts) have been so vast that sometimes viewers might have felt
like apprentice anthropologists or psychologists themselves. If occasionally
the rhetoric seemed a fig leaf for voyeurism and trash TV, the producers
of such shows would defend them as offering “insight” into, for example,
“gender differences,” stealing the language of academics filling out an appli-
cation for funding for their research. More elaborate uses of the apparently
real would turn these tropes inside out. The stunts shown in Jackass or
mounted by David Blaine could be read as grotesque parodies of medical
research; Borat, as its subtitle makes clear, is a work of pseudoanthropo-
logy; the disappeared filmmakers were engaged on a university research
project.
Moral panic has also surrounded the digimodernist text’s alleged addic-
tiveness. It is commonly reported, both by researchers and the mass media,
that such digimodernist forms as text messaging, e-mail, chat rooms,
videogames, reality and participatory TV, and the Internet in general have
addictive properties. It is, however, problematic to describe any form of
text as addictive since it produces no heightened physical reaction (unlike
drugs) and is rarely a trigger for intense emotion (unlike gambling); much
digimodernist text may actually induce a sense of monotony. However,
the keyboard trance is a recognizable phenomenon, whereby users click
half-bored and semihypnotized endlessly from electronic page to electronic
page, to no visible end. The digimodernist text does seem to possess the
property of overwhelming the individual’s sense of temporal proportion or
boundaries; it can engulf the player or user or viewer, who experiences
a loss of will, a psychological need for textual engagement that exceeds
any realistic duration or rational purpose. Digimodernist texts can be hard
to break off from; they seem to impose a kind of personal imperialism, an
outflanking of all other demands on time and self. This derives from their
apparent or experiential reality: combining manual action with optical
and auditory perception, such a text overpowers all competing sources of
the real.
There are two possible explanations for this: first, that our seeming
impotence before the digimodernist text stems from its novelty and our
consequent inexperience and incapacity to control the (semi)unknown; or
second, that the digimodernist text truly affords an intensity of “reality”
which is greater and more engulfing than any other, including unmediated
experience. Evidence is conflicting, and it may be too soon to say.
Finally, digimodernism’s sense of cultural time also differs from that of
postmodernism. Delighting in the quotation, the pastiche, and the hybrid-
ization of earlier texts, postmodernist culture was often backward-looking;
historiographic metafictions such as Julian Barnes’s Flaubert’s Parrot, John
Fowles’s The French Lieutenant’s Woman, and A. S. Byatt’s Possession
explored their very contemporary attitudes through an encounter with the
textual past. Postmodernism also emphasized a new sense of history as
constructed in the present, and, in novels like Toni Morrison’s Beloved or
Graham Swift’s Waterland, a sense of the past as a haunting of the present.
The apparently real and digimodernism are by contrast lost in the here and
now, swamped in the textual present; they know nothing of the cultural
past and have no historical sense. The difference is clear in cinema: where
Baudrillard or Jameson identified a depthless “nostalgia for a lost referen-
tial” in 1970s’ films like American Graffiti and Barry Lyndon,18 digimod-
ernist historical movies like The Mummy, Pirates of the Caribbean, and
King Kong make no effort to reproduce the manners and mores of the past.
Instead, their actors behave like people from the 2000s, clad in vintage
clothing and rushing through their CGI-saturated story. All attempts at
mimicking past human behavior are given up by a digimodernism which
assumes, in TV costume dramas like Rome (2005–07) and The Tudors
(2007– ), that people have always talked, moved, and acted pretty much as
they do today, and have always had today’s social attitudes (equality for
women, sexual outspokenness, racial tolerance). In short, digimodernism
is, as the debate on addictiveness confirms, the state of being engulfed by
the present real, so much so it has no room for anything beyond; what is, is
all there is.
The apparently real also has a wider context, of course, evident in chang-
ing social notions of the textual hero. Classical Hollywood fashioned the
“star,” the impossibly glamorous, absolutely remote, and seemingly perfect
figure produced by and identical with its movies. By contrast, infused
with a tarnished romanticism, post-1960s rock culture foregrounded the
artist-hero, the on-the-edge voice of a generation grafted into his audience’s
context yet far more insightful and brilliant than you or me. The contem-
porary notion of the “celebrity” is something else again. Its distinctive
feature isn’t that so many people portrayed as famous are almost completely
unknown—an effect of the collapse of “popular culture” into niches—but
the virulence and loathing, the spitefulness of the discourse surrounding
them. Celebrity magazines and TV programs picture famous women
with their hair all messy, their makeup undone, their cellulite on show, or
their grotesque weight gain (or loss) to the fore; lovingly dramatized are
their relationship hells, their eating disorders, their career meltdowns,
and their fashion disasters. You’d think the readers or viewers had a
personal vendetta against them. What’s happening is that the assumed
“realities” of the female reader/viewer (her supposedly actual anxieties) are
projected as the apparent reality of the famous female; it’s a globalized,
textual version of a malicious idea of woman-to-woman gossip. In conse-
quence, this discourse strips the “celebrity” of everything but her fame:
rather than see her as competent in some sense (talented at acting or sing-
ing, physically beautiful, etc.), she is constructed as exactly the same as
anyone else, except famous. This is a prevalent coding: the aesthetic of the
apparently real is a textual expression of the social death of competence.

From Irony to Earnestness

Ihab Hassan placed “irony” in his famous column of terms representative
of postmodernism (he opposed it to modernism’s “metaphysics”).19 Accord-
ing to the title of Stuart Sim’s Irony and Crisis: A Critical History of Post-
modern Culture (2002), it may be considered one of postmodernism’s most
characteristic traits.20 Irony is also key to the postmodernist philosophy
of Richard Rorty, where it inherits an intellectual landscape denuded of
foundations or metanarratives. Most trenchantly, Stanley Aronowitz states:
“Postmodernism is nothing if not ironic; its entire enterprise is to decon-
struct the solemnity of high modernism.”21 For Gilbert Adair, more prolix:

As the Modern Movement sputtered out in a series of increasingly
marginalized spasms of avant-gardism . . . what was sought was an
escape route out of the impasse. And this was found in a knowing
retrieval of the past . . . making these strategies operative a second
time around (the postmodernist always rings twice, as you might say)
by inserting them within ironic, if not entirely ironic, quotation
marks.22

In the wake of 9/11, some voices in America called for what would come to
be known as the “new sincerity,” defined by Wikipedia as: “the name of
several loosely related cultural or philosophical movements following
postmodernism . . . It is generally agreed that the principal impetus towards
the creation of these movements was the September 11th attacks, and the
ensuing national outpouring of emotion, both of which seemed to run
against the generally ironic grain of postmodernism.”23 There was a politi-
cal subtext to this, understandable after such a trauma, in that sincerity has
traditionally been identified as a typically American trait; to have more of
it is to reinforce Americanness. On the Côte d’Azur in Lawrence Kasdan’s
film French Kiss (1995), Meg Ryan’s character exclaims that, while the local
women may be mistresses of guile and ambiguity, “I cannot do it, OK?
Happy—smile. Sad—frown. Use the corresponding face for the corre-
sponding emotion.”24 This distinction between American naturalness and
straightforwardness, and European sophistication and game-playing, is at
least as old as Henry James. Sincerity is here rooted in notions of New
World innocence and childlike uncontamination as much as it underpins
the curious British belief that Americans don’t get irony and the French
conviction, expressed by Baudrillard, among others, that Americans are
typically naïve.
However, “new sincerity,” at least in such terms (and to the degree
that you trust Wikipedia), has been made redundant by an international
digimodernist earnestness that wipes out postmodernism’s irony and pre-
dates the attacks on the World Trade Center. While sincerity is a value, a
conscious moral choice reassuringly (in troubled times) under the control
and will of a speaker, digimodernist earnestness, like postmodernist irony,
has deep roots in contemporary culture. It can therefore seem a compulsive
mode, involuntarily swamping its speaker. Digimodernist earnestness, as
far as a cultural mode can be, is necessary, that is, a sociohistorical expres-
sion, not a personal preference. It cannot be called for or promoted as
it’s already here, and right at the heart of our culture. The following extract,
for instance, comes from a 1999 movie that made almost a billion dollars
worldwide; it’s spoken in a toneless voice, unmodulated and flat but exud-
ing gravitas:

PALPATINE: There is no civility, only politics. The Republic is
not what it once was. The Senate is full of greedy,
squabbling delegates. There is no interest in the
common good. I must be frank, your Majesty. There is
little chance the Senate will act on the invasion.
AMIDALA: Chancellor Valorum seems to think there is hope.
PALPATINE: If I may say so, your Majesty, the Chancellor has little
real power. He is mired by [sic] baseless accusations of
corruption. The bureaucrats are in charge now.
AMIDALA: What options have we?
PALPATINE: Our best choice would be to push for the election of a
stronger supreme chancellor, one who could control
the bureaucrats, and give us justice. You could call for
a vote of no confidence in Chancellor Valorum.
AMIDALA: He has been our strongest supporter.
PALPATINE: Our only other choice would be to submit a plea to the
courts.
AMIDALA: The courts take even longer to decide things than the
Senate.25

The Phantom Menace is one of a group of digimodernist films that can
be called mythico-political: the Lord of the Rings, the Matrix, and the Star
Wars prequel trilogies, Gladiator, Troy, Alexander, and so on. Such films
foreground the stuff of politics: executive decision-making, government,
administration, parliament, councils and votes, armies and warfare, taxa-
tion, alliances, separatist movements and rebellions. But they do not depict
this as politics is familiar to us, as a wearying but unavoidable game of
horse-trading and palm-greasing and vapid posturing and big swinging
dick machismo, as, at best, the “art of the possible.” Instead, politics is por-
trayed as a matter of vague but profound gravitas, of weighty consideration,
of deep solemnity and eternal values. It’s essentially a child’s conception of
politics, reduced to an air of ineffable importance and emptied of content,
voided too of adult psychology. It’s deprived of sex (power sure isn’t an
aphrodisiac here; more like an inhibitor) and, tellingly, of the petulance and
pettiness and cruelty that make unassailable adults sometimes resemble
kids. Instead, a child’s eye view of grown-ups’ supposed infinite strength,
obscure seriousness, and unknowable remoteness prevails.
Earnestness in cinema is also conveyed by wise old sages like Gandalf,
Dumbledore, Obi-Wan, Yoda, and Xavier, objectified quantities of immea-
surable and ancient experience, who intone imposing but hollow nostra like
“with great power comes great responsibility”; it’s embodied by messianic
figures like Neo and Frodo, and narrativized by portentous battles by an
Anakin or Harry with the “dark side.” It’s visible too in the shift from
the postmodern camp, irony, and depthlessness of the 1960s’ TV shows
Batman and Spider-Man to their more recent cinematic versions. The
Spider-Man trilogy starring Tobey Maguire is especially rich in earnestness:
its first installment (2002) ends with the hero musing, “This is my gift.
My curse,”26 and almost all of the second (2004) is taken up by the angst,
hand-wringing, and sulky self-communing of the three solemn leads. It’s
shallow and narcissistic, and so tediously transitional, but what it really
isn’t, is ironic.
It mustn’t be concluded from this that earnestness is merely humorless-
ness. It’s true in general that earnestness, especially when so labeled, will
have an unattractive image: it suggests a very unsexy and exaggerated
pseudograndeur that frankly needs to chill and lighten up; “irony” had
sounded knowledgeable (or “knowing”), cool, hip, undeceived, in control
and skating pleasurably over the surface of things. In cinema earnestness
does derive frequently from the attempt to shoot children’s material for
young adults. But, more interestingly, it also stems from the shift of cinema
toward mythological subjects or toward ancient-historical or apocalyptic
scenarios. This, as I explore in the next chapter, is partly due to what CGI
can give cinema, the reality-systems beyond our naked-eye universe that it
dramatizes convincingly. But it is equally linked to an evolution in narra-
tive after postmodernism, away from the realist/antirealist impasse toward
a mythopoeic form more reminiscent of medieval storytelling. This is a
fascinating and as yet embryonic shift, and the overblown or heavy absur-
dities of earnestness in films like X-Men, The Chronicles of Narnia, or The
Golden Compass, where the fates of civilizations are at stake but never felt
to be, are a very early—and wholly inadequate (but then all babies start
with faltering, falling steps)—symptom of it.
Earnestness in contemporary pop derives from a parallel disjuncture
between adult material and childish consumer. In reality TV and the docu-
soap I find a different cause: the absence of critique, of critical intelligence.
This is paradoxical, since these are top-heavy forms with a crushing weight
of authorial directedness: a voice-over tells you how to interpret what
you’re seeing, an “expert” is on hand to tell you what it all means—it’s
infantilized. But the experts are frequently pseudoauthorities (semiquali-
fied members of academically marginal disciplines), or lecturers from
the “soft sciences” sweetening and dumbing down their insights from the
social to the superficial. In Wife Swap, for instance, the evident differences
in status or values would seem to provoke an understanding based on
ideological or Marxian or historicist or even Freudian terms, which would
broaden the discussion, give it an abstract and thereby general significance.
Instead, the different ways of raising a family are portrayed as “lifestyle
choices,” as though these people had made up their equally valid beliefs,
habits, and practices in a void. British TV also favors docusoaps about
families moving abroad, usually to Spain, France, or Italy, like Channel 4’s
No Going Back (2002–04) or the BBC’s Living in the Sun (2007– ). Such a
premise invites a welter of social, political, historical, and moral consider-
ations to do with imperialism, appropriation, assimilation, identity, global-
ization, and so on. Ignoring all such general issues, these programs focus
instead on the consumerist viewpoint (the acquisition of “property,” deal-
ing with paid workmen, etc.). They ask no questions of expatriation, set it
in no contexts; they are rigorously complacent and ignorant shows, which
cannot see beyond what is thought to be good spending. Totally deprived
of any analytical perspective on what they present, they cannot but be ear-
nest (though not humorless).
Many videogames embrace the aesthetics of the recent movies described
above, and while I don’t find Web 2.0 textually earnest, the engulfing (not
“addictive”) effects of these texts, the impact of the keyboard trance, evacuate
the ambivalence and alertness of irony, and the acute consciousness
of critical intelligence. The pasty-faced, glass-eyed stare associated with
engagement with such texts has been assimilated to that of the zombie by
the film Shaun of the Dead (2004). So many of the elements of the digimod-
ernist text explored so far in this chapter—the pseudoscientific framing,
the absence of historical consciousness (however problematic)—are con-
densed into earnestness, which is not exactly seriousness. Seriousness
might be thought a rational response to the problems of our time, the
faraway wars and too-near terrorism, the economic upheavals and social
estrangement, but earnestness is a depoliticized, indeed desocialized qual-
ity; it’s a cultural turn toward the mythic, the consumerist, the electronic-
textual. In truly objective terms, earnestness is excessive; it’s really a
discursive effect, something that emerges from changes in cultural content
inflected by dominant personal values. But it is no use in the “real world,”
though embraced notably, from the turn of the century on, by both Tony
Blair and George W. Bush. Their rhetoric became, after 9/11, mythicized
and emptied of all critical intelligence (especially of hard facts). Their
apocalyptic-Manichean visions, their contempt for actuality, and their
complacency made them digimodernist politicians in the worst possible
sense.
A culminating question would seem to be prompted here: is the readerly
state of digimodernism one of credulity? This would appear to have been
a recurring theme: the naivety of Wikipedia, the simplicities of children’s
entertainment, the irrationality of earnestness, the belief in apparent reali-
ties, the falsity of pseudoscience, the loss of control of engulfment—none
of these suggests a sophisticated reader/viewer. Postmodernism prided
itself on the “media literacy,” the smartness of its ironic, knowing textual
recipient, who could identify the quotations, the sources, and allusions,
who was aware of the conventions and practices of textual production, who
could piece together the discontinuous fragments and appreciate depth-
lessness as a positive quality. This kind of cultural consumer was once
essential to interpret movies like Robert Altman’s The Player (1992), the
Schwarzenegger vehicle Last Action Hero (1993), or Tarantino’s Pulp
Fiction (1994), or to enjoy The Simpsons’ textual riches. In an era inflected
by the themes so far, though, such a person is unnecessary, even obsolete.
S/he may have been succeeded by the “universe geek,” who knows every
single detail about a fictional reality-system. It’s hard to mourn the death
of this postmodern textual consumer: just as wall-to-wall irony got tire-
some and restrictive, so perma-knowingness wound up looking self-satis-
fied and vacuous.
There is another way of seeing all these traits: not in terms of regression
from sophistication (not as “credulity,” or “infantilism,” etc.), but in terms
of breakup and re-formation. It can be argued that a new, though in many
ways old, form of narrative is percolating through our culture. This is
canceling out the certainties of a previous generation and making us all
children of the text; it is so new that all our responses to it are so far naïve
and unformed, half-sized, a reaching beyond our grasp. This is speculation,
and the last section of the chapter will indulge in some.

The Birth of the Endless Narrative

Of all the pages and arguments making up this book, those in this section
are the ones I feel most uncertain about. It’s a risk I’m willing to take because
the issue, however much I may misunderstand it, fascinates me. But
although a book like this unavoidably posits its author as a fount, if not of
wisdom, then of belief, here I grope in the dark, the points are indistinct to
me, and this may even be a nighttime of my own making. Perhaps I might
say: this is an argument that could be put forth by somebody unknown,
which I have imagined and am quoting with all due detachment.
In the first chapter of Mimesis (1953) Erich Auerbach famously studied
the different modes of representing reality in two ancient texts. Homer’s
poems, he concludes, are characterized by “fully externalized description,
uniform illumination, uninterrupted connection, free expression, all events
in the foreground, displaying unmistakable meanings, few elements of
historical development and of psychological perspective.”27 In other words,
everything in them is exteriorized, clarified, expressed, everything connects
explicitly and openly, the characters do not change, and the poems “con-
ceal nothing, they contain no teaching and no secret second meaning.”28
On the other hand, the Old Testament is characterized by “certain parts
brought into high relief, others left obscure, abruptness, suggestive influ-
ence of the unexpressed, ‘background’ quality, multiplicity of meanings
and the need for interpretation, universal-historical claims, development of
the concept of the historically becoming, and preoccupation with the prob-
lematic”; that is, a narrative that is discontinuous, suggestive, in need of
interpretation, claiming “truth,” showing characters changing, allusive, and
irreducible.29 When I first read these lists of hallmarks, as a twentieth-cen-
tury literature specialist my response was that the former reminded me of
Tolkien’s Lord of the Rings; and the latter, the modern novel.
Can a hypothesis be floated here, extrapolating from Auerbach’s
distinction? That somewhere in the eighteenth century the “biblical” mode
gained, especially through the burgeoning form of the novel, cultural
supremacy over the Homeric. The latter would henceforth be thought fit
for children (the Arthurian sagas, the 1001 Nights, etc.), to be outgrown
and given up for the biblical which would seem more sophisticated, more
modern, more complex and true. This shift would suit a society reared
since the Reformation on biblical stories (Pilgrim’s Progress was influential
here); the development of character would suit a world built by empiri-
cism, individualism, and mercantilism; the focus on historicity, obscurity,
and interpretation would mesh with a rationalist, scientific mind-set; the
biblical emphasis on intrafamilial and social problematics would appeal to
a rising bourgeoisie for whom the elite conflicts of Homeric heroes seemed
akin to yesterday’s feudalism. The Homeric mode would be relegated (not
extirpated); nineteenth-century realism and its twentieth-century crisis,
renewal, and repudiation would all be worked out in terms of the biblical
mode.
The case of Tolkien then struck me as intriguing. For most twentieth-
century scholars, Lord of the Rings was the last taboo text. Malcolm
Bradbury’s history of the British novel from 1878 to 2001 doesn’t mention
it at all until, in 1997, it wins a public poll of the books of the century, and
he then pigeonholes it as “elaborate learned children’s fantasy.”30 Germaine
Greer once said she had nightmares in which Lord of the Rings would
turn out to be the most important novel of her time. And yet scholars of
medieval literature loved it. Could it then be that two incompatible and
distinct forms of fiction, of storytelling, were juxtaposed: that in fact Lord
of the Rings was not a ghost haunting “true literature” or stuff for kids,
but a masterpiece of a mode of storytelling increasingly and aggressively
marginalized by the monopolistic impulse of another? In this case, you
could not in all fairness judge an example of one mode by the criteria of the
other. Everything I had criticized in Tolkien—Frodo’s lack of depth, the
simplicity of the plot, the absence of ambiguity and obscurity to interpret,
the embrace of ruling-class experience as the only valid kind—was not
actually a fault at all, but my misapprehension of a hallmark as a shortcom-
ing, like condemning beef for being a really bad dessert. Suddenly it seemed
possible that there were two parallel versions of postwar English literature,
incommensurate, belonging to separate modes with distinct rules and
conventions; suddenly, Lord of the Rings was for me what it had always
been for medievalists, narratologically and aesthetically validated. That
didn’t make it necessarily good: it might be a poor example of its type; but
its type had a right to exist.
A second hypothesis could be erected on top of the first. Might the
Homeric/Tolkienesque mode of storytelling have gradually returned since
the 1970s to its former place as the narratological dominant? Or, if this was
overstating matters, could it have emerged from its post-Victorian eclipse
as something disdained or “primitive”; might it have reached, at least in
some quarters, a much higher degree of acceptance and popularity than
had once been the case? The origin of this reflection was again indirectly
Auerbach. In contrasting the two modes he focuses on two examples: on
one side, he considers Odysseus’s return to Ithaca and the moment at
which, his scar becoming visible, the narrative breaks off for a leisurely
account of how the hero received this wound; on the other side, he studies
Genesis 22, the story of Abraham’s sacrifice of Isaac. The former destroys
suspense, the latter is unbearably suspenseful; the one opens itself up,
suspends its story to expand toward an addition that may seem needless
but is nevertheless enriching; the other is economical and unilinear, abso-
lutely oriented on its problematic, relentless in its absorbed drive toward its
own resolution. As Auerbach notes, the Homeric aesthetic does require that
kind of “interlude”; the point is that you could retell the story very coher-
ently without mentioning it. The Odyssey is “endless” in the sense that it
can be renarrated by selecting and reorganizing or shortening or extending
its components: these are detachable, can be recombined (as by Joyce and
Kubrick), and vary in importance. In this way it seemed that Tolkien’s
equivalent to Odysseus’s scar might be Tom Bombadil, a figure met by
the four hobbits in The Fellowship of the Ring in a lengthy section entirely
omitted by Peter Jackson’s film version. To have included him would have
been enriching but was not necessary; it was an episode rather than one of
the subplots regularly cut from literary adaptations, although endlessness
is not reducible to the episodic. Such a narrative is stitched together out of
repeatedly appended bits and pieces: it’s limited really by the fatigue of the
reader/listener, and it’s telling that Tolkien himself felt his 1,500-page story
“is too short.”31 You could indeed just keep adding more. The beginning
and the end are largely set in stone, but how much of the middle you’d want
and which episodes are down really to the skillfulness of the storyteller and
the tastes of his/her readers or listeners.
The ostensible content of this form is today found particularly, of course,
in narrative-heavy videogames such as World of Warcraft or The Elder
Scrolls, which draw heavily on post-Tolkien imagery, and where the player
him/herself plays the role of the storyteller, reshaping the given fictive
materials in a distinctive (hopefully skillful) way. These thoughts may seem
to gather up the threads of this chapter: the shift in status of the Homeric/
Tolkienesque mode is consonant with the move to the cultural center
ground of the traits of the children’s story; this mode is earnest, not ironic;
and its creation of an autonomous reality-system frequently relies on
pseudoscientific discourses, notably historical, geographical, and anthro-
pological/zoological. Alison McMahan has identified “a new umbrella
categorization system” of American film narrative blending myth, fairy
tale, drama, and what she calls the pataphysical film, claiming that “[t]his
system . . . applies to every film coming out of Hollywood today.”32 Both
the name and the nature of McMahan’s “pataphysical film” strike me as
problematic. However, the fusion of myth (yielding endlessness), fairy tale
(children’s story), and drama, both in American films like the Matrix
trilogy and internationally with Ang Lee’s Crouching Tiger, Hidden Dragon
or Zhang Yimou’s House of Flying Daggers, represents cinema’s response
to the retreat of postmodernism. Realism is superannuated, postmodern
antirealism is bankrupt; here lies a solution, a way out of the impasse. It’s
a better option than the “wistful return[s] to realism” suggested by various
literary critics as the aftermath of postmodernism, such as “dirty realism,”
“deep realism,” “spectacle realism,” “fiduciary realism,” and “hysterical
realism.”33 Such terms are likely to be fully intelligible only to other critics;
in the not negligible world where narrative is embraced solely for pleasure,
more radical developments are underway. The Tolkienesque in my after-
Auerbach sense is prevalent in videogames, in Hollywood (as content), and
in popular fiction (Germaine Greer may by now be having nightmares
about posterity’s take on Terry Pratchett’s Discworld). Yet endlessness as
a digimodernist textual-narrative characteristic does not mean only the
spread of neo- or pseudomedieval storytelling modes and content.
Indeed, the thrust of this—still hypothetical—argument runs in the
opposite direction: it is our new taste for endlessness in fiction that has
created a demand for the Homeric/Tolkienesque. Another layer of possible
argument here: at the time of the invention of cinema, contemporary nar-
rative was almost exclusively structured in one of two ways (essentially
the same in singular and plural quantities): as a once-and-for-all unique
account of events and characters (Jude the Obscure); or in terms of the
format serial, in which many of the same characters would recur from story
to story doing pretty much the same things in altered circumstances, never
ageing, scarcely developing, and barely if at all remembering or showing
awareness of their own past adventures (the Sherlock Holmes stories).
Cinema inherited these possibilities, giving us Citizen Kane and, in the
format serial, the Thin Man or Charlie Chan movies, among others. TV
inherited them from cinema: in the 1960s or 70s, for instance, TV fiction
favored either the one-off film like Cathy Come Home or the format serial
like Fawlty Towers and Starsky and Hutch. (Mini-series, like Roots, were
extended one-off films.) And yet TV carried within itself from its inception
the germ of endlessness, also found on the radio: the soap. Mocked and
marginalized, the endless soap was placed socioculturally relative to the
finite TV narrative as Tolkien had been to “literary” fiction.
Digimodernist narrative, it can be asserted, favors the endless. This sug-
gests that, in this hypothetical argument, endlessness is the fictional form
of onwardness. By “endlessness” here I don’t mean, of course, that the story
literally goes on forever: each narrative has in practice a finite number of
words, scenes, or episodes. Instead, I am using it as the highly simplified
catchall for a variety of similar and overlapping narrative forms, all of
which open the storytelling up internally and estrange it from its supposed
destiny. Instances are listed below in no particular order:

a narrative that is ostensibly complete in itself but capable also of
endless additions, extensions, reorderings, reassemblies, all of which
yield a new sense of the “whole” while the “whole” is never definitively
established due to the narrative’s internal rhythms (the Star Wars
franchise);
a narrative form so open and haphazard in detail it resembles the subjec-
tively endless flux of life and unfolds as though it were (reality TV);
a narrative form established so as to go on in principle forever, capable
of being halted only by external interference (a TV executive’s decision)
and not by anything intrinsic to the story (soaps);
a narrative form that mixes completeness on the episodic level with a
carryover of a certain quantity of material into succeeding episodes, so
that characters “remember” and act on a restricted amount of their past,
and age as in real time, giving to a very long fictional series the sense of
a single continuous shape constantly fractured and depleted (Friends,
The West Wing, Sex and the City, etc.);
a narrative form based on modes drawn from ancient or medieval oral
legend, and therefore immensely long, heroic, and externalized, struc-
tured by the regular opening and closing of episodes within the whole,
usually the creation and temporary resolution of threats to the hero(es)
(Lord of the Rings, the Harry Potter sequence, His Dark Materials, etc.).

In Britain, most people born since about 1980 experience narrative pri-
marily as endless in these senses. Whatever its (debatable) aesthetic merits,
this storytelling mode has become dominant for the generation that grew
up into digimodernism.
When I first saw what was then called Star Wars in 1977, I assumed it
was a one-off: after all, it ended with the total annihilation of the enemy
(though I was vaguely aware Darth Vader had escaped). I can’t remember
when I heard that there would be a “sequel,” but I do distinctly recall read-
ing around then that the movie I’d enjoyed would be the first in a series of
nine. This soon proved unfounded: there would be only three . . . Putting
to one side the issue of how and when Lucas conceptualized his project, the
point here is rather the project’s very elasticity: it could be and has been
expanded endlessly. So doing, the story is not “completed,” not even today,
perhaps not in my lifetime: it’s extended, broadened, renewed, in principle
forever. The prequel trilogy had to be fiddled with to get it to mesh with the
originals (McGregor had to imitate Guinness’s voice, Portman to be coiffed
like her “daughter”) but, more subtly, the originals changed shape too
under its retrospective influence. Their titles were reworked into chapter
headings (no independent story would be as feebly named as A New Hope);
Palpatine’s absence in episode four suddenly seemed a gap in the narrative.
The six films cohered only by reimagining the last three episodes as the
continuing story of Anakin, which they clearly weren’t, causing relative
disaffection toward the prequel trilogy among many adult fans of the origi-
nals. Endlessness means not only the scope for repeated addenda, but the
resultant reshufflings and reorderings of the “whole,” while each
bit is discretely detachable and of varying quality—the story can be reorga-
nized, rethought, reedited. And beyond the films come the books, the
videogames . . . This narrative form is so reminiscent of myth or ethnonar-
rative it’s necessary to stress an obvious difference: the mode of the Star
Wars or the Matrix “universes” is not oral; it’s electronic-digital. Moreover,
the multiple and social authorship of a Beowulf runs up against the copy-
right and franchising of today’s texts. If (broadly speaking) the narratologi-
cally “ancient” or “medieval” is reinscribed in our culture, its mode of
diffusion is lost: authorship and textual sociality function differently in our
time, and all passes via digitization.
Star Wars is really one twelve-hour film (at least). Endless narrative,
as its name suggests, is liable to be very long, and it’s indicative of contem-
porary taste that recent movie versions of the Titanic disaster or of King
Kong last twice the duration of their 1950s’ or 1930s’ forerunners. Such
extendedness in turn suggests endlessness as its narrative structuring prin-
ciple, and makes its implied reader/viewer the fan-geek, who has the time
and inclination to learn the infinite details of this fictive universe. As con-
tinuing narratives The Matrix is seven hours long, Lord of the Rings ten,
while Pirates of the Caribbean, a sixteen-minute theme park ride, lasts
461 minutes as a story (with more to come). It achieves this expansion
by mechanically opening and closing its story (escape-capture-escape-
capture) and nonchalantly producing new tasks for the protagonists to
accomplish and new mythic items to do battle with. Immensely long nar-
ratives should historically come as a surprise: it was once assumed that
increasing demands on free time would inevitably make stories shorter
and shorter (“Ken Russell, when asked why he had shifted over into MTV,
prophesied that in the twenty-first century no fiction film would last longer
than fifteen minutes”).34 Compressed and tightened since the passing of
the Victorian age, by the 1930s most British novels, literary and popular,
tended to come in under 300 pages. The aptly named Big Read, however,
a 2003 BBC TV poll of the citizens’ favorite fictions, seemed almost to set
400 pages as a minimum: Tolkien and Pullman featured among the wordy
classics of Austen, the Brontës, Dickens, and Hardy, many of the shorter
books dating back to the now-anomalous mid-twentieth century. I am of
course concentrating here on popular taste, exemplified by the tens of
millions of copies sold and lovingly devoured of Pratchett’s 36-novel Disc-
world series and, even more notably, the 3,000 or so pages of the Harry
Potter sequence.
The success of Rowling’s creation lies above all in its narratological
onwardness. The books are entertaining, but no internal factor, no invention
of content or style explains such an amazing triumph. By making each
novel the latest installment in an overarching narrative, Rowling innova-
tively generated such a thirst for the next one it would sell tens of millions
of copies—uniquely—on the day of publication. Adults would fight over
them in Asian airport bookstores; French readers unable to wait for trans-
lation would make bestsellers of their English originals. Such unprece-
dented behavior, and the enthusiasm and fascination the novels provoked
during their production between 1997 and 2007, was prompted over-
whelmingly by Rowling’s never-before-seen use of digimodernist endless-
ness. On the arguable cusp of shifts in narrative taste she designed a set of
stories in which elements of format serial (carryover of characters and set-
tings, parallel instances of danger obliterated and tasks accomplished) feed
into an onward battle between good and evil. It’s often noted that readers
awaited the latest installment of a Dickens novel with a similar feverishness.
But the resemblance is social, not textual: they were one-off narratives.
Rowling’s endlessness is characterized not by mere length (a symptom) or
continuity (a corollary), but by the opening and closing of a narrative
within a broader fictive arc and an extended temporality, the creation of
semi-independent stories within an ongoing quasi-endlessness (though
Harry’s adventures are in fact foreclosed by the limits of an English second-
ary school education).
On TV the role of narratological pioneer was played by soaps, which
first adopted this pattern of opening and closing story lines within a
global onwardness. Twenty-five years ago (if memory serves) the four
British terrestrial channels aired about ten hours of soaps a week; today
the figure, across five channels, is closer to ten hours a day. This insatiable
growth is divided among three principal groups: teenager-oriented and
often Australian stories set in leafy suburbia (Neighbours, Hollyoaks); gritty
English melodramas focused on pubs (Coronation Street, EastEnders); and
workplace-centered continuing narratives, which take occasional breaks
(Casualty, The Bill). British terrestrial TV schedules are dominated by a
dozen such shows, all thriving and totaling between them perhaps 20,000
episodes and a couple of centuries of production. Jeering at them, so popu-
lar in the age of Dallas (1978–91) and Dynasty (1981–89), is today compli-
cated: they are no longer marginalized; they are structurally the essence of
TV drama. The Simpsons, so postmodern and self-conscious and anchored
in 1950s–60s’ sitcoms, would joke about the artifice by which the events of
any one episode don’t connect to any other. Growing to independence and
leaving behind earlier cultural models, TV, a constant, rolling medium like
radio, has increasingly sidelined the format serial in favor of continuing
narrative. Beginning with Hill Street Blues (1981–87), modern flagship fic-
tions such as ER (1994–2009), The West Wing (1999–2006), The Sopranos
(1999–2007), Sex and the City, and Lost (2004– ) have been structured by
an opening/closing episodic form within an ongoing framework. Such
stories require some memory (spawning the fan-geek) and, while highly
plotted locally, are not oriented toward any “final” goal. Soaps are distin-
guished from them by their content or production values, not their tempo-
rality. Sex and the City and Friends may have stopped by pairing off their
principal female with her long-term man, but they didn’t “conclude” that
way; under endlessness the last bit has no special weight, just as nobody
cares that the Canterbury Tales are actually unfinished. This shift in the
focus of interest from the overall arc to the minute-by-minute detail may
help explain the fantastic popularity since 1995 of Jane Austen (ceaseless
adaptations, reworkings, biopics, etc.), whose total narrative structures are
generic, predictable, and banal (girl meets boy) but whose every page is
intricate, subtle, and fascinating.

Endlessness clearly has nothing in common with the “open” narrative
beloved of postmodernism and post-structuralism, which eschewed
“closure” solely on the level of its interpretation and not in terms of its
material extension. There isn’t space here to explore properly the question
of the link between endlessness and narrative content, which sometimes
seems to revivify ethnonarrative tropes and sometimes to renew realism
but transcends both. Instead, I’ll conclude by considering an example
of digimodernist endlessness in action, in part to dispel any impression
I’ve given of asserting an historical circularity “taking us back” to ancient
or medieval narrative forms. Digimodernist endlessness derives its possi-
ble existence from old forerunners, but its shape and detail emerge from
the social, cultural, and technological specificity of the electronic-digital
world. It’s distinctly new (assuming that this hypothetical, quoted argument
can be shown to hold water).
In “The One with the Breast Milk” (1995) there are, as is usually the
case in Friends, three plot strands.35 In one, Monica goes shopping with
Ross’s new girlfriend behind Rachel’s back, incurring her double jealousy;
in another Joey is threatened by the success of a new cologne salesman
at work; in another Ross is intimidated by the thought of tasting Carol’s
breast milk. All three interwoven stories are resolved within the episode,
respectively: Monica and Rachel are reconciled, Joey sees off the competi-
tor, and Ross tastes the milk. But although each story is introduced, devel-
oped, and completed inside twenty-two minutes, understanding its full
significance is impossible without reference to much that has happened
before then: the back story of Ross’s long unrequited love for Rachel,
recently discovered by the latter who is now in ironically unrequited love
with him; the back story of Joey’s faltering acting career, which necessitates
a day job; the back story of Ross’s divorce from the now-lesbian Carol,
which constructs his relationship with her as inevitable sexual humiliation.
Indeed, the episode contains implicit content from almost all of the previ-
ous twenty-five episodes. For a new viewer, this isn’t the confusion that
comes from unfamiliarity with character and relationship; indeed, know-
ing that Monica and Ross are siblings or that Ross is a professor doesn’t
take you very far. It’s a lack that can only be fully restored by watching the
show from its start. Consequently, seeing any one Friends episode enriches
your understanding of all those you’ve seen before, regardless of the order
you come to them in, while you can also follow any episode in isolation
from the 235 others.
For a long time the scope of Friends lay within the lyrics of its jangly
theme tune: the disappointments of early adulthood, a “joke” job, no
money, an abortive love life, and consolation for this from friends. All three
plot strands illustrate these themes. Focusing on failed progression, on
stunted developments, the show avoided any threatening changes: charac-
ters got jobs but not promotions requiring relocation abroad; they got
married but were immediately divorced. Five or six seasons in, and as
the characters moved into their thirties, the writers began to relax these
constraints in the interests of verisimilitude but still found ways of reintro-
ducing the past, by, for instance, bringing back ex-partners from several
seasons earlier to add complexity and spice to wedding preparations.
Throughout, then, the present remains populated with the past, and it also
flows forward into the future. Ten years after first viewing “The One with
the Breast Milk,” it’s easy to think of the nourished baby growing up into a
child who will play practical jokes on Rachel, or to ponder the fact that
Rachel will one day work in the department store (called here “her house
of worship”) Monica shops in, or to recall the interminable saga of Ross
and Rachel’s on-off relationship, the recurring motif of Ross’s sexual humil-
iation, and the absurd ignominies of so many of Joey’s acting jobs (the
cologne standoff, to underline what he should be doing, is a pastiche of a
Western). So the episode is (1) complete in itself, (2) dependent on a flow
of information from past episodes, and (3) locked in to much that will
ensue for as long as the series will run, but—crucially—as repetition, not as
an elaboration forward and leaving behind; or, rather, as variations within
a field of action to be traversed in all directions but never abandoned.
As a result of this triple temporality, you could (1) watch only this episode
and enjoy it for what you think it is, (2) insist on seeing all twenty-five
episodes before it and enjoy it as the growth outward from their previous
content, like reading chapter twenty-six of a new novel, or (3) watch every
one of the other 235 episodes without ever realizing you’d missed this one
(unlike a novel). It’s a multiple, complex interweaving of time schemes
suited both to fans and to occasional viewers, by which episodes can be
seen in any order but gain from being watched sequentially (they none-
theless appear to start in medias res—there’s no immediate continuity).
The story opens and closes, opens and closes, on many levels and at many
varying speeds.
6
Digimodernist Culture

[L]iterature, Richard said, describes a descent. First, gods. Then demigods. Then epic became
tragedy: failed kings, failed heroes. Then the gentry. Then the middle class and its mercantile
dreams. Then it was about you—Gina, Gilda: social realism. Then it was about them: lowlife.
Villains. The ironic age. And he was saying, Richard was saying: now what? Literature, for
a while, can be about us (nodding resignedly at Gwyn): about writers. But that won’t last
long. How do we burst clear of all this?
Martin Amis, 19951

First published in 1989, Steven Connor’s Postmodernist Culture: An Intro-
duction to Theories of the Contemporary became a primer in the subject for
a generation of students. It includes consecutive chapters devoted to post-
modernism in architecture and the visual arts, in literature, in performance,
in TV, video, and film, and in popular culture (rock music, fashion).2 This
is such a logical way of exploring postmodernism that Connor was to
draw on it fifteen years later for an edited book on the same subject for
Cambridge University Press, with chapters on film, literature, art, and
performance.3 A cultural-dominant will by definition spread across artistic
fields, and a survey of recent developments in those where it prevails will
organize salient points about its character.
This chapter is structured in a similar way to undertake a parallel analy-
sis of its cultural-dominant successor, but with two variants. First, there are
some differences of field. Connor included architecture, art, video, fashion,
and performance because postmodernism, in Jameson’s words, “is essen-
tially a visual culture,”4 but I do not; I include videogames and give Web 2.0
its own separate chapter because digimodernism favors the optical only in
conjunction with the manual/digital; and I include radio and music because
of a similar shift away from the “spectacle” society announced by Debord
and later picked up by assorted postmodernists. Second, each section of
this chapter deals with a crisis in its medium. This stems from the fact that
early digimodernism manifests itself above all as a rupture in conceptions
of textuality. Connor worked on the (reasonable) assumption that he could
examine the meanings and strategies of postmodernism within a continu-
ous form of material textuality; although the nature of each medium’s
surface altered with postmodernism—new concerns, styles, modes—he
was able to take for granted that the underlying structural principles of
textuality remained the same. But digimodernism is nothing if not the
redefinition of this inheritance. Consequently, each medium is embroiled
in its own violent shift: these are tales of new contents and new techniques,
of disenchanted critics, embattled theorists, and disoriented consumers,
but above all of survival in a new textual landscape.

Videogames

It’s amusing to play with the idea that certain cultural forms lie at the very
heart of certain cultural movements, embodying, in some sense, their most
emblematic characteristics. For modernism it might have been cinema,
newly invented; though Michael Wood has warned against such an identi-
fication on the grounds that silent films overwhelmingly anchored them-
selves in traditional narrative modes, anyone seeking a quick and strong
sense of what European modernism was about could do worse than watch
such studies of the machine, the city, dislocation, and anxiety as Sunrise or
The Man with a Movie Camera. It can be argued too that the format of the
mass-distribution daily newspaper, equally new, lies behind Ulysses: Joyce’s
novel, also a kind of encyclopedia of one day, comprises a sequence of
disparate forms of writing oriented on a major city and, through mise en
abyme, uses journalism and advertising as motifs (similar points can be
made about The Waste Land). As for postmodernism, its sense of the
swamping influence of the “spectacle” and the precession of the image
owed much to the spread of television; its delight in mixed registers, tones,
and genres suggests the experience of channel-hopping across blurred and
mingled fragments of myriad cultural discourses. Also central to postmod-
ernism, it could be said in the same spirit, was the recent invention of the
theme park, the acme of the simulacrum.
Whatever the validity of these identifications, we can say that, for
digimodernism, the role of formal exemplum is taken by the videogame
(hence its primary position in this chapter).5 It could be objected that
videogames predate the arrival of digimodernism by a couple of decades,
but so technologically did cinema and television anticipate their synecdochal
wholes. It’s more relevant that only around the turn of the millennium
did videogames reach the level of textual sophistication and cultural sig-
nificance first attributable to cinema in the mid-1910s and to TV in the
early 1960s. Digimodernism therefore inherits all the academic disdain
and social marginalization intrinsic to videogames since their emergence.
Yet the figure of the computer game player, fingers and thumbs frenetically
pushing on a keypad so as to shift a persona through a developing,
mutating narrative landscape, engaging with a textuality that s/he physi-
cally brings—to a degree—into existence, engulfing him or herself in
a haphazard, onward fictive universe which exists solely through that
immersion—this is to a great extent the figure of digimodernism itself.
And as computer games have spread in their appeal across age-ranges,
classes, and genders, they have become a synecdoche for an entire new
form of cultural-dominant.
However, to this claim too it might be objected that videogames cannot
be considered properly as texts, which would surely be necessary for them
to be central to a digimodernism itself exemplified by a new form of
textuality. After all, Scrabble or baccarat or checkers have never been
classed as texts, and they share with videogames not only a subcategorical
term (board games, card games) but the whole ludic vocabulary: players,
rules, winning, losing. If videogames are texts, they will be the first of their
type to achieve the status; it’s possible on occasion to read individual
instances of game-playing textually, like the Fischer-Spassky world chess
championship of 1972, but not the game itself.
The problem of whether videogames are texts is linked to the debate,
increasingly heard since the late 1990s, over whether they are “art.” Some of
them, it has been asserted, show at least as much narrative richness and
complexity, detail, beauty and scope of imagination, and subtlety and
power of emotional effect as their contemporary film and literary counter-
parts; if the latter are art then so are videogames; and it is the games that are
held to instantiate these qualities, not one specific example of their playing.
(However badly I play such a game it will still be art, just as Guernica
remains art whether you think it so or not, or so the argument would go.)
Within academic studies of videogames there’s a split between those who
prefer to see them ludically and those who would see them narratologi-
cally: the latter, for instance, study Lara Croft as a “character” as they might
read Ally McBeal or Thelma and Louise or Bridget Jones; the former would
all but reduce Croft to the boot you might go round the Monopoly board
“as,” the object permitting the player’s entry into and movements around
the ludic universe. Some videogames, like the many versions of chess or
golf available, are electronic adaptations of existing games or sports; others,
like Peter Jackson’s King Kong or Spider-Man 2, are electronic versions of
existing narratives, especially movies. This tension is so integral to video-
games it has marked them since their inception: while Pong redesigned
table tennis, Asteroids was intended to echo currently popular narratives
(the original Star Wars trilogy, Close Encounters, etc.). Over the years, mov-
ies and videogames have converged on occasion almost to the point of
fusion (though only within a very narrow set of low movie genres).
This resource-stripping doesn’t, however, establish videogames as an art
in their own right. Moreover, the lists of qualities (complexity, subtlety,
etc.) commonly ascribed by enthusiasts to certain games are clearly para-
sitic on existing conceptions of art. Yet the one thing you would expect
of a new form of art would be its separateness from older ones, just as you
wouldn’t expect a baby to look exactly like its mother—you’d anticipate a
redistribution of family traits. Books with titles such as Video Game Art
reduce the form to visual imagery, which they study as one might analyze
a film’s cinematography.6 Yet a landscape that you play through and charac-
ters you play off are entirely distinct from (though not wholly different
than) landscapes and characters that you watch. One might just as rele-
vantly study the rendering of some carved chess figures: their beauty would
be real, and interesting, but the pieces wouldn’t derive their meaning from
it. The visual imagery of videogames resembles in its functionality the
look of a building (a game’s “architecture”), but you don’t play buildings
either. And whether videogames are art or not, or texts, you definitely
play them.7
This issue can be resolved, I think, through what I take to be, functionally,
the rupturing novelty of videogames: their grammatical reliance on super-
subjectivity. All games are subjectivist in their basic operation because “I”
play them: I am physically involved in the actions and deliberations, the
incidents and maneuvers of play. While gaming can be watched it’s clear
that any audience is peripheral and insignificant; all that matters is the
playing self (in theater and movies if nobody watches there is no perfor-
mance). Subjectivity in a traditional game is literal: it’s really you who win
and lose (the source of games’ emotional pull) even if it’s mediated through
inanimate objects like pieces, tokens, an iron or ship; and this is carried
over into the fundamental structure of videogames, their ludic heart. Yet
the subjectivity that videogames allow is actually a super-subjectivity.
Super-subjectivity can take many gaming forms (I give here only the
briefest of sketches). A player’s self can map on to many game selves: in a
soccer game, s/he can incarnate all eleven members of their team plus the
coach during one matchup alone, plus all the players and coaches of all of
the other teams during a single session of play; in ten hours a player might
map him or herself on to hundreds of different selves. (Pathologically this
one-to-many correspondence can be considered as latently schizophrenic.)
Conversely, the game self mapped on to may be a single fictive individual,
that is, a character, with a name, history, traits, feelings, social place, and so
on, though set in a universe where “selfhood” is invested with personal
power(s) or an ego-emphatic lifestyle impossible in the real world. Playing
as such a character you really are him/her (pathologically, this is narcis-
sism) and you inherit all his/her enhanced rights, strengths or invulnera-
bility, diminished responsibilities and eliminated needs or weaknesses.
Indeed, whether equipped with a personality or not, the game self assumed
by a player may possess a subjectivity more extreme, forceful, or immune
than the player’s own: s/he can be killed many times, or can slaughter
with impunity, or drive cars at 200 mph and step unscathed from infinite
appalling crashes, and so on. Knowing no fear, stripped of the consider-
ation of consequences, this subjectivity seems heroic, mythical, legendary;
knowing no shame or guilt, no psychological attachment to the past or the
external world, it has the pathology of the psychopath. Alternatively, the
player’s self may map on to anthropomorphized creatures that retain
the (disavowed) consciousness of humans while furnishing a whole new
set of qualities and powers. Such a player remains him/herself, only much
more so.
By super-subjectivity, you play through your gaming self or selves: you
play, then, as yourself (it’s you whose game ends when all your lives have
gone) but vastly inflated. The process of self-identification that is involved
owes something to the ways in which readers and viewers identify with
characters in fiction, but the textual universe of games gives it a distinctive
ontology. In gaming, you can often switch the object-self of your super-
subjectivity from one instant to the next, whereas film or literary identifi-
cations tend to be deeper and more inflexible; and while the latter rely on
an optional self-recognition by which the character is felt to be “just like
me” or to embody “my values,” gaming super-subjectivity enforces self-
identification at a grammatical level: either you identify yourself thus, or
you don’t play the game. These structural considerations are rendered more
complex again by multiplayer action, whereby your super-subjectivity
interacts with, is thwarted by, or joins forces with somebody else’s.
As videogames have developed so far, super-subjectivity seems their
most essential feature. It also distinguishes them clearly from other forms
of game. More could be said here: how super-subjectivity correlates to
a gaming super-objectivity (mighty enemies, indestructible machinery,
cosmic stakes, overwhelming landscapes); how, in a structurally identical
process, the game produces its own “mastery” through practice and repeti-
tion, leaving little to externals like training or innate talent (cf. chess). It
should be apparent even from this outline that super-subjectivity heals the
supposed split between videogames’ ludic functionality and their narrative
content; they are then seen as a ground-breaking amalgam of the two. Not
every videogame has a narrative, of course, but it is in the union of the
ludic and the narratological that a dominant segment of them can be
read as digimodernist texts. Moreover, such games become texts solely
within a digimodernist reconceptualization of textuality: producing
meaning by their use, they are onward, haphazard, digital/manual, con-
sumer-productive, usually anonymously authored (these days), and eva-
nescent in the sense of being permanently superseded (if only within their
own franchise). Videogames can be what no prior ludic form could, a text,
uniquely because they exemplify a digimodernism characterized by a new
formulation of textuality. This relationship is of course reciprocal: in a
world where videogames were impossible, digimodernism would be incon-
ceivable too. In line with this wider context, videogames see their funda-
mental textual function, super-subjectivity, echoed across the cultural
landscape: it stands easily alongside Wikipedia’s nonobjective expertise,
message boards’ illusory community, chat rooms’ extended and fictive
selves, blogs’ instantaneously globalized intimacy, and the offer made
universally by digimodernist TV or radio to “write” or “produce” national
entertainment shows.
However, while super-subjectivity unifies the ludic and the narratologi-
cal in one gesture, and thereby establishes certain videogames as texts, it
may prevent them from qualifying as art. Nabokov’s description of the
latter is compelling: “Beauty plus pity—that is the closest we can get to a
definition of art.”8 Many have argued that videogames provide beauty or
its aesthetic correlates; the other term, though, is categorically different,
essentially moral. This is not to call computer games necessarily “pitiless,”
though a telling proportion do valorize ruthlessness, icy inhumanity, a
state of implacable brutalization. Such a contingent tendency to indiffer-
ence toward “them” is a by-product of the grammar of super-subjectivity.
An extended, expanded, overfurnished, hyperequipped “I” will struggle of
necessity to convey compassion for “us,” to express pity for the human con-
dition, because both pronouns occupy the domain of the first person: they
are incommensurable within the same logical-grammatical space. Pity in
art derives from the universality of loss, from inexorable ageing and
tarnishing and forgetting and wearying, from the inescapable mortality
of self and loved ones: “Where there is beauty there is pity for the simple
reason that beauty must die: beauty always dies, the manner dies with the
matter, the world dies with the individual.”9 A similar problem seems to be
faced by digital art, that is, art produced either using or within digital tech-
nologies: often visually arresting and intellectually interesting, it tends to
feel shallow for the same moral reason.10 The digimodernist crisis of textu-
ality alluded to above is in both videogames and digital art already (though
not insuperably) apparent: the two cultural modes most integrally reliant
on digital technology struggle to constitute themselves art.

Film

If videogames face challenges, that is nothing compared to what some see
as the contemporary condition of cinema. In 2005, Dustin Hoffman
described film culture as “in the craphouse,”11 an evacuated and posterior
state echoed by Peter Greenaway two years later:

If you shoot a dinosaur in the brain on Monday, its tail is still
waggling on Friday. Cinema is brain dead . . . Cinema’s death date
was 31 September 1983, when the remote-control zapper was intro-
duced to the living room, because now cinema has to be interactive,
multi-media art . . . [US video artist] Bill Viola is worth 10 Martin
Scorseses. Scorsese is old-fashioned and is making the same films
that D. W. Griffiths was making early last century . . . We’re obliged to
look at new media . . . it’s exciting and stimulating, and I believe we
will have an interactive cinema . . . Cinema is dead.12

I don’t know what exactly provoked these reflections, and Greenaway is in
part being playful (as his versions of prehistory and the calendar attest).
There has never been a dearth of bad movies; so what fundamentally or
structurally can have changed since the heyday of these senior citizens?
Mark Cousins’ The Story of Film (2004) suggests a recent watershed in cin-
ema history. He divides the medium into three periods, 1895–1928
(“Silent”), 1928–90 (“Sound”), and 1990–present (“Digital”), and subtitles
the last: “Computerization takes cinema beyond photography. A global art
form discovers new possibilities.”13 It is most likely this that Hoffman and
Greenaway were evoking. There are two ways of considering the impact of
“computerization” or digital technology on film: superficially in terms of
CGI (computer-generated imagery) or digital experimentation; and more
broadly as a putative redefinition of the nature or the “possibilities” of cin-
ema itself. It can indeed be argued that this coming of a third age of film,
dated by Cousins between the release of James Cameron’s The Abyss (1989)
and his Terminator 2: Judgment Day (1991), has reshaped irrevocably the
aesthetic category of filmic value. When in 1962 Sight and Sound magazine
polled international critics on the greatest films ever made, their top ten
included a movie released only two years before; in a similar poll in 2002
the most recent entry was almost thirty years old.14 It may be that this
reveals the terminal decline of cinema. It is also possible though that it
indicates the creative bankruptcy of a certain kind of filmic achievement.
We’ll try to glimpse its successor.
It’s been claimed that on the canvas of Les demoiselles d’Avignon you can
actually see the fault-line, the graphic moment at which Picasso crossed
over from figurative painting to abstraction, the great artistic revolution of
the first years of the twentieth century. A group of films likewise manifest
the cinematic shift of that century’s end and beyond: they display a tangible
boundary line separating what film had been up to that point from the new
possibilities. Looking at some of them will open up the question of what
computer-generated imagery has brought to movies in terms of their con-
tent, style, and form.
Steven Spielberg’s Jurassic Park (1993), the first mainstream movie to
rely centrally on CGI, uses it, of course, to bring its dinosaurs to life. In its
premise, this is a piece of popular postmodernism. It’s a contemporary
palimpsest of the Frankenstein myth by which advanced but lethally
overconfident scientists find a way to revivify the dead, inadvertently
wreaking havoc (DNA and chaos theory stand in for Shelley’s interest in
chemistry and her Romanticist worldview). Focused on a theme park filled
with Baudrillard’s simulacra, copies from a lost original, it was to prompt
(though this hasn’t been recognized) a literary postmodernist treatment of
these motifs, Julian Barnes’s theme park fable England, England (1998), in
which a satirized Baudrillard-substitute appears. The first half of the movie
unfolds, as such stories do, through mise en abyme, depicting the creation
of aesthetic objects or experiences which then achieve autonomy and turn
on their creator. There’s nothing in Spielberg’s premise that isn’t a confla-
tion of postmodernist pastiche with a strand of postmodernist theory.
Early on a character observes that in the park dinosaurs and humans,
separated by millions of years of evolution, have been flung together. This
duality corresponds to that of CGI and previously existing cinema; the for-
mer gives us the dinosaurs, the latter the reality we already knew. So CGI
brings into filmic being the other, something from another world, another
time; the monstrous, the impossible, what lies outside our observable
world. This was inevitable, since conventional cameras had for a hundred
years been able to capture the naked-eye universe, that medium range of
vision which rests only on the here and now. Everything that belongs
beyond this circle is provided by CGI. Already in Terminator 2 CGI had
made material, not elements of the distant past, but fragments of the future,
projected into our present. Watching Spielberg’s movie it must have
occurred to some studio executives that all cinema’s historical monsters
could be resuscitated by CGI, and so came Roland Emmerich’s Godzilla
(1998) and Peter Jackson’s King Kong (2005). The second half of Jurassic
Park is both cinematically and ecologically controlled by the dinosaurs,
who are finally glimpsed roaring triumphantly while a banner about them
ruling the world floats symbolically down. CGI has been flung up against
conventional filmmaking, and prevailed; it is also, in this incarnation, won-
drous, truly magical, jaw-droppingly so. Its reliance on CGI is, of course,
one reason why popular cinema has undergone infantilization: CGI-domi-
nated movies look like a child’s magic show, a firework display, a kiddies’
theme park; the Spielberg-like entrepreneur introduces two under-twelves
to the scientists with an allusion to his “target audience.”
A similar line separates the postmodern from the CGI in Emmerich’s
Independence Day (1996). This is a blend of commerce and subversion,
a megabucks blockbuster and brainless neo-con flag-waver that weirdly
and transgressively climaxes with an act of world-saving anal rape. It
depicts the arrival of the imperialistic aliens in media terms, as they hijack
and cause the malfunction of satellites and, through them, television sets;
the long preamble of warnings and chaos is mostly conveyed via TV broad-
casts watched by captivated crowds, a national and international outbreak
of transmitting and staring that establishes the events as essentially spec-
tacular. This apocalypse will only be televised. Constructed from the modes
of viewing and passivity, directed also at the spaceships themselves, the
script calls up a host of movie allusions to It Came from Outer Space, E.T.,
2001: A Space Odyssey, and Close Encounters, among others. The whole is a
palimpsest of The War of the Worlds with Wells’s deadly bacteria wittily
replaced by a fatal computer “virus.”
Nevertheless, in its exact middle and at its conclusion Independence Day
crosses the line. It’s vital here that, contrary to most of its sources, it has
presented the aliens as psychotically malevolent; the only human response
can be to annihilate them first, so two dogfight sequences are played out.
Cinematically they differ radically in their mise-en-scène from the rest of
the film: the concern with representation, spectacle, and watching is sup-
planted suddenly and violently by an immediate and visceral engagement;
the screen is filled with fizzing lights and careering craft, the humans dodge
and fire, spin and attack the enemy in fast, kinetic, and material involve-
ment with a digitized world. It doesn’t look “real” in either conventional or
postmodern terms; it looks like a computer game, a more sophisticated
Space Invaders. The enemy whizzes brightly at and around you while you
try to avoid being hit and fire madly back in a survivalist blur. The change
of aesthetic is temporary but absolute; the US President, until then just
another viewer/broadcaster, is transformed into a fighter pilot.
This use of CGI to make film resemble preexisting videogames has
become, of course, widespread: among the many examples of games made
into movies are Final Fantasy (2001, 2005), Tomb Raider (2001, 2003),
Resident Evil (2002, 2004, 2007), and Doom (2005). It can even be argued
that the videogame has replaced theater as cinema’s other. From its incep-
tion film struggled to distinguish itself from the mere recording of what
belonged on a stage; in One A.M. (1916) Chaplin played a drunk returning
home late at night, and the camera sat before his supposed living room
following his misadventures from middle distance like a spectator in the
front row at the music hall. The maturation of cinema required it to find its
autonomy, to shrug off this dependence, this mechanical reproduction of
theatrical performance; it never entirely succeeded either, as the movement
of actors, directors, and writers between the two suggests. CGI cinema
arguably replaces that ambiguous reliance on photographed theater by a
new frère-ennemi: the fabrication of reality-systems and human experience
by computers, the precise area of expertise of the videogame. Consequently,
while bad 1930s’ movies look literally “stagey,” many CGI pictures look
slightly “unreal,” that is, insubstantially computerized. In the latter the
videogame is never far away; or, rather, it is never further away than theater
and vaudeville were from Chaplin or Welles, Renoir or the Marx brothers.15
This shift lies at the root of much contemporary complaint about cinema;
but in principle it reorients film, it does not destroy it.
Stephen Sommers’ The Mummy (1999) regenerates a CGI-made ancient
Egyptian and his dead world’s practices, and an early 1930s’ horror movie.
This double revivification makes for the presence throughout of two lan-
guages, two epochs, but also two aesthetics and tones. On one side you have
some very self-aware, self-mocking, and depthless Middle Eastern hokum,
which knows itself to be the latest in a low tradition already remodeled in
postmodern terms by Raiders of the Lost Ark (1981). Filled with inverted
commas, it’s a self-parody that surfs its intertexts. Ranged against all this is
the CGI world of the dead with its digimodernist earnestness and mythol-
ogies, its unquestioning embrace of remote abstract values, its sacrifice and
eternity and tragedy, and its apocalyptic destructiveness: flights of locusts,
plagues of beetles, murderous towering sandwalls—epic forms of slaugh-
ter. Once again, the CGI domain is “evil” without ours really being “good,”
since the latter has junked, in its ironic reflexivity, all moral dichotomies.
Instead, the postmodernism here defangs the horror, and the CGI invigo-
rates, with unexpected reciprocity, the film’s weary postmodern strategies.
If The Mummy runs these discourses together, Jackson’s King Kong, another
remake of an early 1930s’ movie, gives us an hour of popular postmodern-
ism engulfed in its second hour by rampaging digimodernism: the journey
to Skull Island is all knowingness, mise en abyme, transtextuality, and cine-
literacy, then the film is taken over by the mythology and devastating vio-
lence of the ape and the dinosaurs. By now the former seemed gratuitous,
mere forelock-tugging to obsolete film school theory. By contrast, Guill-
ermo del Toro’s Pan’s Labyrinth (2006), which juxtaposes a similar “real”
period (the early 1940s) with a CGI realm of fairy tale and horror, omits
entirely the postmodern as presumably superfluous. Its intertwining of
ontological levels is richer and more suggestive than that of many more
commercial movies; at the same time, and not unlike Jackson’s Lord of the
Rings trilogy, it excitingly opens up new narrative possibilities through a
redeployment of the traits of the children’s story.
In the Harry Potter series (2001– ) the line divides the Muggle from the
magic world: every episode ritualistically reestablishes the former so that
the joyous crossing into the latter can ensue (we’re never rid of the vile
Dursleys). This traversal is so important it’s multiply enacted: by traveling
diagonally (to Diagon Alley), by accessing a fractional railway platform, by
voyaging on extinct means of transportation—it’s wondrous and symbolic.
Once in Hogwarts, the magical dimensions of boarding school life emerge
from the familiar routines, calendar, and characters for which fictive fore-
runners have long prepared us (they play sport in houses, not cricket but
its magical near-homophone). One kind of reality is conveyed by conven-
tional means, the other by CGI: corridors, detentions, headmasters, and
janitors by one method, swirling staircases, talking centaurs, and moving
paintings by the other. In the Chronicles of Narnia series (2005– ), the fault-
line is materialized, from the original children’s novels, as the back of a
wardrobe; but it also marks a different form of narrative, as nonsignifying
and temporal history passes into meaning-rich and eternal allegory.
Many critics struggle with CGI cinema. Those weaned on 1960s’ mod-
ernism lament the loss of a distinctive authorial “vision” and the lack of
philosophical or political engagement; those raised on 1980s’ postmodern-
ism bemoan the absence of irony, depthlessness, or disruption. These have
been swallowed up by CGI cinema’s self-sustaining ontology and its extra-
materiality, by its redistribution of the creative burden and its taste for
earnestness and myth. Critics who read films in terms of genre also have
problems with CGI. Its heavy presence bleeds out the differences between
many traditional genres: forms such as fantasy, adventure, the creature
feature, sci-fi, the disaster movie, the historical movie, space opera, sword
and sandals and sword and sorcery become increasingly homogenized,
leading to the emergence of something we can call CGI cinema. In such
films, with their restructuring or debilitation of genre, three recurring nar-
rative typologies indicate the textual impact of computerization.
The first is the apocalypticist movie. CGI is excellent for the ending of
worlds and the aftermath of their destruction, for the spectacular oblitera-
tion and the display of the cadaver of the reality-systems we know. You
could separate this into two halves: on one hand, the use of CGI to create
world-devastating effects, like the tornadoes of Jan de Bont’s Twister (1996)
or the megatsunami that wipes out the Eastern seaboard at the end of
Mimi Leder’s Deep Impact (1998) or the asteroid showers of Michael Bay’s
Armageddon (1998) or the aliens ripping up New England in Spielberg’s
The War of the Worlds (2005); and, on the other hand, the evocation of
wrecked and ruined worlds, their familiar landmarks (especially the Statue
of Liberty) symbolically overthrown. The apocalypticist use of CGI permits
the making of seemingly “engaged” movies dealing with climate change,
rendering global cooling in Emmerich’s The Day after Tomorrow (2004),
for instance, a vision of overpowering desolation; it also encourages the
dramatization of scenes foreshadowing or echoing the streets of New York
on 9/11, as in Independence Day and Matt Reeves’s apparently real Twin
Towers allegory Cloverfield (2008).
CGI-apocalypticism focuses on destruction rather than violence. In this
it differs from late postmodernist-auteurist films such as Oliver Stone’s
Natural Born Killers (1994), Michael Haneke’s Funny Games (1997), and
Fernando Meirelles and Kátia Lund’s City of God (2002), which dwelt with
detailed relish on the gloating cruelty of humans. By contrast, here we get
a deluge of impersonal wrecking, of smashing, crushing, erupting, explod-
ing, and so on, and we get uncountable brutal deaths; but not the docu-
mentation of human sadism of a Tarantino. If some see CGI as cinema’s
death-knell, this is an upside that I’m grateful for. CGI-apocalypticism
foregrounds victimhood; it’s paranoiac. Movies with all this destruction
tend to turn into survivalist epics where people interminably scream and
try to escape from destroyers of worlds (aliens, apes, cold, etc.). They come
to seem very puny faced with these overwhelming, overpowering sources
of annihilation, and their stories shrink emotionally with them. Such films
set up the personal or political, human triggers for the arrival of their CGI,
and they spend time establishing human relationships and predicaments to
be traced through the subsequent CGI-driven bombardment. In practice,
the latter swamps the former: we wind up caring about and believing in
nothing else. So an ironic doubling occurs: while the characters run around
trying not to be obliterated by the CGI-made killers, their actors are textu-
ally erased by them.
As Spielberg showed, the second major narrative use of CGI is to
revivify the past (rather than blot out the present): the CGI-historical movie.
It revitalizes vanished places, ruined buildings, lost worlds. As this is, by
the nature and name of the technology, a visual resurrection, it works most
effectively among civilizations whose written histories have come down to
us but whose visible sites have been half-erased (though some residual trace,
to signal the very act of reconstitution, is essential). Beginning with Ridley
Scott’s Gladiator (2000), CGI-history brings us the ancient and medieval
world. Gladiator rebuilds imperial Rome and restores the Coliseum to its
pristine entirety: this is CGI as the work of a sort of architectural heritage
trust renewing the past’s pure look. In the same vein, CGI operates as a sort
of historical reenactment society busily restaging past battles. There’s much
to learn here about the close-quarter combat that disappeared forever in
1914–18; again and again a sword is raised only to chop down at its enemy
while seething masses hack and slice each other in the background. CGI
can recreate the decapitation and the impaling and the disemboweling
attendant on the sword, spear, and lance with startling truthfulness. It can
also present an ancient or medieval army of tens of thousands of men
standing on a plain, CGI-made extras producing a sight not seen with such
verisimilitude in living memory. CGI evokes the actual scale and horror of
such warfare; if it suggests these subjects to filmmakers, it also suffuses
them with powerful conviction. The same can be said of the gladiatorial
scenes fought in the Coliseum under Scott’s direction.
This is (alpha) male history; this is (great) man’s historiography, all
emperors and generals and warlords, kings and captains, spurting blood
for their noble causes. It’s the sort of history, indeed, that the ancients and
medievals wrote of themselves: of power and battles, coups and scheming,
armies, wars, and conquests. It’s not the history of the poor, the weak, or
even the female. CGI, as did these men, builds dazzling cities, assembles and
launches beautiful and vast fleets, unleashes infinite and phallic-thrusting
battalions. These films play with the boundary between the homosocial
and the homoerotic, as in Oliver Stone’s Alexander (2004), whose CGI
magnificently restores ancient Babylon. Historically Alexander is empty
(you’d learn more from a 250-word encyclopedia entry), and driven instead,
like all these films, by values that no one believes in today, like honor, glory,
filial duty, and ancestral pride. Since the characters don’t seem genuinely
motivated by them either, nothing seems at stake. Lost values are not
recalled by CGI.
Alexander, very uncomfortably, resembles neo-con propaganda for the
Anglo-American war in Iraq. A light-skinned “hero” supposedly guided by
the father of Western philosophy heads east to conquer and subjugate the
lands of dark-skinned, illiterate, and overemotional “barbarians,” bringing
them freedom and good government. I suspect this message is inadvertent;
it’s the ham-fisted coincidence of a hero-making digimodernist cinema
with the poisoned grand narratives of digimodernist politics.16 This context
also dominates Scott’s Kingdom of Heaven (2005), which restores twelfth-
century Jerusalem (another ancient city) and reenacts a phase of the
Crusades. The film exists largely to denounce Christian-Muslim enmity as
driven by religious fanaticism, venality, and bloodlust: it’s a present-day
argument in old clothes.
The third strand, CGI-myth, often blends with CGI-history when it is
not colonized by nonexistent species or impossible physics. All the above
applies just as well to Wolfgang Petersen’s Troy (2004), for instance, which
every reviewer compared to Alexander. CGI-myth favors, in the image of
Achilles and Hector, the legendary, the heroic, the superhuman: the three
X-Men films (2000, 2003, 2006), miniaturizing the traditional/CGI cinema
duality, foreground characters mostly conventional but with added stupen-
dous powers, to the point where you can see the ordinary extend into
the extraordinary and (implicitly) older cinema expand into newer. The
same is true of the Fantastic Four films (2005, 2007). In this area, however,
two specific CGI-mythologies dominate in the form of entire and self-
sustaining reality-systems.
The first, George Lucas’s Star Wars prequel trilogy (1999–2005), was of
course part of an already-existing franchise. Indeed, Lucas was good
enough to give us a postmodernist Star Wars trilogy (1977–83) and then,
twenty years later, a digimodernist one. The former is depthless and likeable
adventure, all flash and buzz and no substance, aimed very straightforwardly
at children; it’s a retro-extravaganza of rehashed cinema including British
1950s’ World War II movies, Kurosawa, Triumph of the Will, long-gone
Saturday morning serials, old sci-fi B-movies, The Searchers, Tracy/Hepburn
comedies, and so on. Its incommensurable ingredients made its narrative
messy and its tone uneven and awkward; composed of scraps of texts from
decades earlier, this trilogy was literally from “long ago,” and it looks worn,
instantly familiar. The prequel trilogy, however, gleams brightly in high
definition, and its intertexts are its generic peers. The story of Anakin has
a mythic sophistication, resonance, and fascination; the prequel films
build inexorably to the awesome grandeur of the second half of Revenge of
the Sith, all epic doom and unimaginable fates, vast destinies and impon-
derable moral complexions. The climaxes are apocalyptic showdowns to
determine the outcome of galaxies, not a shooting contest to knock out an
enemy stronghold. In all, Lucas shifts from adventure to mythology, from
simple dangers and the daredevilry of a good-looking boy to the complex
histories of regimes and the rise and fall of empires. The prequel trilogy
does sometimes slip into pseudovideogame, as at the climax of Attack of
the Clones and the first half of Revenge of the Sith, where you’re in among
slaughtering and slaughtered digital creatures in worlds that don’t exist. Yet
even that failing only exposes the way in which the two trilogies exemplify
incompatibly distinct aesthetics: the mechanical Star Wars, and the com-
puterized one.
Peter Jackson’s Lord of the Rings trilogy (2001–03), the second dominant
CGI-mythology, is an extraordinary achievement. It’s impossible in a few
sentences to do justice to this symphonic, epic, overwhelming piece (really
one long film, a unity like its original): beyond irony and outside postmod-
ernism, it’s exalting, addictive, engulfing, and it seems to reach for and
achieve implications and responses that film had not previously known or
sought. The CGI here is integrally embedded in the filmmaking, one with
both the style and the substance; it doesn’t extend anything or replace or
update; it’s indistinguishable from the text’s conception and execution,
ubiquitous and near-invisible. The first fully fledged masterpiece of digi-
modernist and CGI cinema, it could not have been made with such imagi-
native scope and persuasiveness without computerization.
Temporally, it works on at least three levels: as a dramatic evocation
of World War II (though Tolkien resisted this allegory); as an eternal medi-
tation on the human condition, the Ring and Sauron standing at different
times for power, sin, God, sex, the sun, death, and desire; and as a medieval
pagan/Christian return to Romance or oral storytelling modes. Reawaken-
ing the roots of English and European literature, it appears to reorient
our sense of life: silently suspending the need for propagation and work, it
relates being to the infinite, to evil, community, mortality, fear, to friend-
ship and love and joy; to geography and history and posterity, to moun-
tains, trees, animals, ecologies. All of Gaia mobilizes against Mordor in a
oneness that feels Shakespearean, ancient, pagan, and Green. The film is
primal: it suggests existence when purified of the banal minutiae of human
relationships (the anti-Austen). Here’s the rub. Tolkien’s elimination of
sex and labor made his text taboo within an academia used to realism and
its challenges and crises. Yet nothing seems to be missing here: it’s not a
consolation world for misfits unable to deal with this one, but a concen-
trated and ennobled experience of its own. This experience is in turn easy
to criticize: it’s for elites and men only, focused on travel and war rather
than anything resembling ordinary life. The trilogy has been condemned
as a simple-minded and pointlessly elaborate bubble floating weightlessly
far from the known world. Yet do texts have to be about our world—about
language—about themselves? Ultimately you can judge them only on the
richness and power of their aesthetic achievement. I feel too (though this
will be sacrilege to some) that Jackson as a filmmaker commanding cam-
eras and editing is a much greater master of his creative tools than Tolkien
was as a user of words. His trilogy feels revolutionary, is visually exciting
and beautiful, sonically mesmerizing, massively detailed and vastly wide-
ranging, and gripping over nine hours. A postmodern favorite, Singin’ in
the Rain featured in the 2002 Sight and Sound top ten films of all time: it
mixes high and low culture, foregrounds the processes of its medium, is
ironic, allusive, and parodic. Lord of the Rings may replace it one day, but
not yet I fear.
Implicit in Lord of the Rings’ distribution of creative achievement is
CGI cinema’s redefinition of filmic value. It downplays character (the inte-
riorities of Frodo, Neo, and Wolverine could be written on the back of one
postage stamp); equally, it doesn’t give much to actors to do (but the truly
terrible work of some, like Christensen or Farrell, is due not to technology
but to the casting of pretty youths for the children’s story movie market).
Moreover, CGI cinema is not plot-driven in the conventional sense. The
first half of Francis Lawrence’s I Am Legend (2007), for instance, conveys a
vision of a postapocalyptic New York that is overwhelming: the familiar
city empty of people, its streets, along which deer hurtle, lined with rusting
cars and overgrown with grass, dirt and dust everywhere, the only sound
birdsong—it’s deeply affecting (if conceptually familiar). Yet there is no
plot (it finally splices one together from old zombie movies). It isn’t that all
the invention has gone into the visuals; rather, a conceptualization of cin-
ema dominated by CGI may feature dramatic moments but not narrative
complexity or momentum. On the whole, though, the major cause of the
artistic failure of many CGI movies is their reliance on the most sterile
kind of infantilism, and nothing in the technology itself. Most conspicu-
ously lacking from such films, for those used to an older aesthetic, is con-
temporary social relevance, a critical engagement with the world beyond
the theater. Instead, the ontology of CGI cinema is largely self-sustaining
and self-generated; this is not a cinema that “reflects on” or “represents”
either the outside world or other texts; it’s a cinema of immanence.
These three strands are not “genres”: they’re just the kind of narrative
stuff which CGI suggests to the minds of directors. All three may turn up in
the same movie, as in Emmerich’s 10,000 BC (2008) which includes pseu-
dohistory with resurrected extinct animals, a mythic quest, and the destruc-
tion of an unnamed civilization. Debates about which one a given film
“belongs to” are misconceived. In any case, the role of digitization in movies
may already have passed on to a new phase. The controls are being reversed.
The use of CGI to remake old films, improve existing effects, renew genres
or extend our naked-eye universe was never going to satisfy some people;
and a handful of more recent movies relegate conventional filmmaking
methods and content to that which is fed into a dominant digital mode, pro-
ducing a deliberately nonrealistic, computerized cinema. Robert Zemeckis’s
Beowulf (2007) is of course a work of CGI-myth, one of many accounts of
Anglo-Saxon or Celtic legends in the past decade or so. It’s made with the
use of “performance capture” technology by which Anthony Hopkins, John
Malkovich, and Angelina Jolie speak their lines, emote, gesture, and move
conventionally before a camera, and the record of their impersonations of a
fictive self, subsequently amplified and embellished, is sent through a com-
puter to arrive on a movie theater screen in digitized form, pure synthesis.
They act in a specially prepared and empty space: there is neither studio set
nor location, dispensing with the photographed objective world essential to
live action cinema since its invention. Beowulf is not quite successful but it
nevertheless represents, not futile stylization, but a potentially seismic shift
in the aesthetic structures of cinema.
Similarly, movies have been made since 2004 using “digital backlot”
techniques, which produce a visual style as peculiar, powerful, and exciting
as any yet seen in cinema. “Real” actors play conventionally in and among
a computer-generated “physical” world that often threatens to engulf them
completely. Here CGI is not a tool to enhance photography; it’s primordial
in all its conscious and overpowering artifice. An early example, Kerry
Conran’s Sky Captain and the World of Tomorrow (2004), is vitiated by a
ridiculous script, confused direction, inadequate acting, and a misguided
jumble of interwar textual allusions, but its retro-futurist look is often
terrific. Zack Snyder’s 300 (2007) is CGI-history (it reenacts the Battle
of Thermopylae) with its homoeroticism and uneasy neo-con utility. An
adaptation of a graphic novel, it deliberately looks “drawn” and artificial.
There is no attempt here to fuse the CGI with the naked-eye universe;
cinema itself is surrendered to its digitization. Much of 300 is absolutely
fascinating cinematically and suggests a brave new world of filmmaking of
whose topography I don’t pretend to know anything. The challenge though
will be to find subjects as original, rich, and arresting as the mise-en-scène
itself. As for digital “rotoscoping,” where actors are filmed conventionally
and the footage then painted over by animators as in Richard Linklater’s
Waking Life (2001) and A Scanner Darkly (2006), the technology suits
themes of derealization and identity loss and the evocation of dreamstates
by looking what it is, both real and invented. Beyond this, it’s hard to see
what narrative applications such insanely labor-intensive work could have.

What, in summation, does CGI bring to films? I’ve already suggested that
it brings what we cannot see with our own eyes or with existing technology
(telescopes, microscopes): the noncontemporary, the nonexistent, the
nonscientific. On one level it can be argued that CGI has added nothing
new, since apocalypse, history, and myth are, as subjects, almost as old as
cinema itself. The real revolution is ontological. CGI embodies neither the
contents of the mind nor of the world, neither idea nor substance. To over-
simplify, traditional cinema lay between two poles: at one extreme, you
could station a camera somewhere, record what happened in front
of it, and relay the images via a projector on to a screen, as the Lumière
brothers did in the 1890s to depict workers leaving a factory or a train
arriving in a station; at the other extreme, you could make films out of your
imagination, like Méliès in 1902 when he pictured a rocket ship landing in
the eye of a moon recreated as a human face. These two poles can be defined
platonically as thought versus actuality, the mind against the world, inven-
tion versus fact. In practice, all films (including these) blend the two: “The
ability of a shot to be about both what it objectively photographs—what is
in front of the camera—and about the subjectivity of its maker explains
the alluring dualism at the heart of cinema.”17 In 1967 you might have
watched both Disney’s The Jungle Book (cartoon, pure imagination) and
Andy Warhol’s Empire (what a camera placed before a skyscraper hap-
pened to record). All cinema can be—or could be—seen as the outcome of
a negotiation between the two poles: the interior and the exterior, the
dreamed-up and the already existing.
In CGI cinema a third element is added to this ontological structure.
Seemingly “natural” images, apparently of the world, are yet immaterial,
insubstantial; and yet they are not just expressions of thought either, not
just products of imagination, intention, or invention. CGI lies closer to the
“world” of the Lumières, but it’s not our world; and, although consciously
manipulated, it’s not reducible to the contents of the filmmaker’s head
either. Such images are the actual material stuff of the movie, but are not in
themselves any such thing. Charles Foster Kane is framed and lit so that his
creator can imply things about him; but the tens of thousands of creatures
awaiting the Battle at Helm’s Deep aren’t an expressive tool, they’re genu-
inely there . . . except they aren’t: they’re digitized. It’s in the aftermath of
films like 300 and phenomena like Gollum, where digitization breaks free
of mere “special effects” to become a film’s conceptual and aesthetic point
of departure, that a separate and new level of cinematic ontology has
become identifiable.
Outgrowing its earlier role as a supplier of striking images, digitization
has restructured the reality of film. Its importance is reflected in a plethora
of “CGI narratives” without actual computerization, from Chuck Russell’s
The Scorpion King (2002, a spin-off from The Mummy, which restores
Gomorrah) to Kevin Reynolds’ Tristan and Isolde (2006); and in culturally
esoteric CGI movies like Ang Lee’s Crouching Tiger, Hidden Dragon (2000),
or Zhang Yimou’s House of Flying Daggers (2004). Furthermore, it can
be argued that this destabilization of film’s mind/world duality (ever com-
promised) has been countered immediately by the appearance of a
fourth element (or axis). Fiction/fact, a reformulation of the duality,
reworked the Lumières’ material actuality as the documentary film of
a Flaherty or Jennings, which ostensibly provided a celluloid record of the
lives of people remote to the viewer. The assumption of objectivity did
not withstand the challenge of 1960s’ postmodernism, however, and the
form increasingly incorporated the figure of the director as a factor in its
content.
In recent years a new factual genre has been noisily inaugurated: the essay.
A filmmaker has a thesis, a strong and definite opinion; s/he marshals
evidence for it, collects and shapes different kinds of visual material sup-
porting his/her case. I’m thinking of films like Morgan Spurlock’s Super
Size Me (2004), Michael Moore’s Fahrenheit 9/11 (2004), Davis Guggenheim and Al Gore’s An
Inconvenient Truth (2006), Robert Greenwald’s Wal-Mart: The High Cost of
Low Price (2005), and Kirby Dick’s This Film Is Not Yet Rated (2006). There’s
no attempt here at documenting people’s objective lives, skewed or not by
the presence of the lens. Instead, these movies seek to explore and establish
a preconceived viewpoint, their production’s raison d’être. This is not a
“scandalous” betrayal of the documentary ethic, but a new and perfectly
valid approach to factual cinema. Rather than fetishize “what happens out
there” (with or without their intervention), such filmmakers begin with
a thought, an argument, and research and present imagery and data con-
firming it. (Super Size Me, whose apparently real aesthetic underpins a
pseudoscientific “experiment” discourse seen in Chapter 5, is typical: if
Spurlock had thrived eating fast food he wouldn’t have had a movie—he’d
have had an advert.) It’s anachronistic though that such films are nomi-
nated for Best Documentary Oscars alongside very different kinds of movie
like Jeffrey Blitz’s Spellbound (2002) or Alex Gibney’s Enron: The Smartest
Guys in the Room (2005). The essay is formally distinct from a documen-
tary: it’s a thesis not a portrait, op. ed. not reportage, and a key instance of
the digimodernist aesthetic of the apparently real.
Of the four axes of digimodernist cinema the CGI movie, the essay, and
the documentary are in robust health, while the cartoon, the film of pure
imagination, was never stronger, as I’ve discussed elsewhere. Pixar is the
world’s leading studio, and successful international cartoons over the last
decade or so have been legion: Hayao Miyazaki’s Spirited Away (2001) and
Howl’s Moving Castle (2004), Marjane Satrapi and Vincent Paronnaud’s
Persepolis (2007), Sylvain Chomet’s Les Triplettes de Belleville (2003), and
Ari Folman’s Waltz with Bashir (2008), among others.
What’s struggling is the film that until very recently seemed to epito-
mize the art of cinema: the personal, distinctive authorial vision or critique
of the material, social world. This encounter of a certain mind (individual,
characteristic, skeptical, politicized) with a certain actuality (often violent
or sexual, harsh or disturbing) enjoys critical prestige: it’s the cinema of
Eisenstein, Welles, Hitchcock, Godard, Antonioni, Kubrick. It’s bound up
with the notion of the auteur, first described in 1950s’ France but as old
as movies (Griffith, Scorsese, Greenaway himself) and probably indestruc-
tible even by digimodernism. Contemporary films that position themselves
within this conception of (art) cinema, however, appear out of date and
sterile, echoes from another era. When Sofia Coppola plays out the
climax to Lost in Translation (2003) with the Jesus and Mary Chain’s “Just
Like Honey” or soundtracks the royal court of Versailles with Bow Wow
Wow in Marie Antoinette (2006), she seeks the individual distinctiveness
of the 1960s’ auteur: Godard gave his guerillas the titles of his favorite
movies as codenames in Week End (1967), Fellini filmed his dreams, fears,
and memories; and so Coppola, a fan of postpunk rock music, puts it in her
pictures whether it belongs there or not (it really doesn’t). The theme of
Lost in Translation—a weary, lonely but strangely attractive middle-aged
man connects with a clever, beautiful but lonely young woman—is a virtual
parody of French auteurist cinema of the 1970s; similarly, the film’s casual
anti-Japanese racism bespeaks the time when American and European
fears of the mighty yen were rampant (e.g., Blade Runner’s Nipponized
Los Angeles [1982]). Lost in Translation is the auteur film as nostalgia for
the auteur film. So is Michael Winterbottom’s 9 Songs (2004) which seeks
to fuse the revolutionary energy of rock music with sexual explicitness
and psychological claustrophobia in a strained recreation of films such as
Performance (1970), Last Tango in Paris (1973), and Ai No Corrida (1976).
It’s vitiated by the dull mediocrity of its songs, the joylessness of its sex, its
reactionary assumption of the male gaze, and an overriding feeling of
belatedness. Just as CGI movies don’t do sadism, digimodernist cinema has
no programmatic use for sexual explicitness (cf. earnestness, infantilism);
it’s a hallmark of an eclipsed cinematic modernism.
Canny auteurs have turned their attention instead away from the
exhausted values of the 1960s/70s toward digitization itself. Cousins, who
has the traditional anti-Anglophone bias of the “serious” British film critic,
thinks the Frodo trilogy “added nothing to the schemas of the movies” and
that if America “raced into the future of cinema technology . . . others . . .
thought through the implications of the new technology more rigorously.”18
For him, Alexander Sokurov’s Russian Ark (2002) “shows that, far from
being at an end, the history of this great art form is only beginning.”19
Sokurov’s film comprises a single unbroken ninety-minute shot, never
before feasible, recorded directly on to a computer hard drive embedded in
the camera and so bypassing film and tape. The infinitely gazing and mov-
ing camera journeys through the Hermitage’s endless rooms, past its art-
works and around its history; the “tourism” of the spatial premise and the
“passing” of time match conceptually the ever-rolling cinematic eye. Yet the
grammar/rhetoric problem rears its head again: the digital means of expres-
sion are awesome, miraculous; what’s expressed with them seems pointless
and shallow, pretty only because of its reverential treatment of its location,
and uninterestingly conventional in its nineteenth-century worldview.
This tension or slippage between ground-breaking digital filmmaking
and inadequate content recurs among contemporary auteurs. Lars von
Trier’s The Boss of it All (2006), while taking up some of the Dogme tech-
niques explored in Chapter 1, is the first film made using “automavision”:
the cinematographer chooses the best possible fixed position for the
camera, then a computer program randomly makes it tilt, pan, zoom, and
so on, producing off-kilter, irrational, and uneven framing and exposure.
With the sound recorded in similar fashion, the digital actually becomes,
to a considerable extent, the “author” of the film text. This destabilizing
rudderlessness suits the film's satire on the flight from corporate responsibility, but
it’s hard to see automavision being more widely useful. In another directo-
rial self-erasure, Abbas Kiarostami “filmed” Ten (2002) using two small
digital cameras fixed to the dashboard of a moving car and pointing at its
driver and passenger. Returning to an aesthetic not far from the Lumières,
the film exists in the apparently real gap between fiction and documentary,
like an auteurist Borat indicting a country not for its insularity but for its
sexism; however, the material feels as constricted and relentless as the tech-
nique. The best of such films is, I think, Mike Figgis’s Timecode (2000),
a digimodernist masterpiece in spite of its unsatisfactory resolution. It’s
a twenty-first-century 8½: composed of four simultaneously filmed ninety-
minute digital shots each occupying one quarter of the screen, it follows a
moment in the interconnected lives of four people and their overlapping
acquaintances as they work in a film production company and see their
private lives unravel. It resembles a cubist cinema, allowing the viewer to
apprehend the action in all its multilocational and polyrelational parts
at the same time; its reality and meaning are expanded exponentially by
this renunciation of cross-cutting in favor of continuity, contiguity, and
simultaneity. Faced with four competing visual narratives, the viewer
makes up his or her own optical content and textual experience, switching
from camera to camera in uneasy freedom seeking clues. It’s an astonish-
ingly bold concept, which redefines the fundamental grammar of cinema.
A spatial and temporal shift is implicit here. When Michelangelo
Antonioni’s L’avventura, released in 1960, appeared two years later on the
Sight and Sound critics’ poll, it owed its place not only to the originality
of its filmmaking but also to its recognizable deployment of modernist
forms explored by literature decades earlier. A character in 8½ (1963)
asserts that “cinema is irredeemably fifty years behind all the other arts.”20
Digimodernist cinema, for better or worse, reflects American technologi-
cal leadership21 but, more importantly, is located near the forefront of the
arts. Cutting-edge texts are supposed above all to challenge old-fashioned
critics and audiences, but technological innovation can also bemuse those
creators, of whom there seems no shortage, unequal to the artistic possibil-
ities on offer. There are now no models or signposts to follow and plenty of
wrong turnings ahead. Beyond such considerations lurk darker economic
worries, the prevalence of digital text theft (downloads) and shrinking
theater audiences; the era of the Internet may reinvent cinema as a domes-
tic cultural mode.

Television

This reorientation of cinema is reflected in changes undergone by televi-
sion, though not in equivalent ways. We saw in Chapter 5 how schedules
have been redrawn by the spread of genres virtually unknown a generation
ago like docusoaps and reality TV, by teen-oriented programming and by
the relentless expansionism of continuing narrative drama. In the next chap-
ter we will glance too at the new and swamping forms of consumerist TV.
To these generic changes can be added the new content stemming
indirectly from the post-1980s’ multiplication of channels (via cable and
satellite as much as digital technology, but all will soon be subsumed into
“digital TV” when the analogue signal is extinguished). Since TV is expen-
sive to produce, and populations have not increased at the same pace as
channels (fortunately for women), contemporary TV shows are almost
invariably watched by far fewer people, and therefore made for much less
money (advertisers pay less to reach smaller audiences), than in the past.
Almost every program, regardless of the noise made about it elsewhere in
the media, is today a niche text. The need for cheapness is everywhere
apparent textually: it’s evinced by the spread of compendium shows made
up largely of thematically linked fragments of archive footage (top 100s, etc.)
commented on by inexpensive publicity-seeking minor entertainers, and
filling up hours of the schedule; by shows where people sit around talking
interminably in studios (TV that in fact visually resembles most radio); by
higher rates of repeats, with the same program aired perhaps four times in
a week, the same film shown twice a day, and long-running series screened
in episode order from beginning to end on a permanent loop; by, indeed,
whole channels of regurgitated programs. One particular expedient is the
show filmed on the hoof, on the street or in somebody’s home, say, with no
script or rehearsals or studio time or fixed camera setups (all costly); this
fashionably approximates digimodernist haphazardness (as “liveness”) and
apparent reality (use of the “actual” public). The docusoap and reality TV
appeal to controllers as much for their budgets as for their modish success
with viewers. (The converse is the show made lavishly with funds derived
from overseas sales and therefore blandly internationalized, reassuringly
familiar, and expensively undemanding in tone.)
Above all, digimodernist TV is synonymous with the seeming end of the
era of pure spectacle. It composes, apparently, a “second wave” of televi-
sion, a new age in which the spectacular screen, at which a passive and cap-
tive audience once sat staring in silence, is no more. The VCR, videogames,
and Ceefax (and its like) long ago broke both the broadcasters’ imposition
of spectacle and their monopoly on the screen’s output; today, inversely,
TV programs in their turn swarm across other media from the Internet to
cell phones, a development called “convergence” by the industry whose
unifying impulse permits the regularity and systemicity of digimodernism.
In the postspectacle landscape, channels and shows appeared with which
you could physically engage, whose very purpose indeed was that bodily
engagement. Their pioneers, shopping channels, were not sources of images
like traditional TV, but existed solely to prompt digital or manual action,
the dialing of a phone number or going online, to stimulate physical (con-
sumerist) behavior by which the individual used the screen. Channels or
shows devoted to games such as roulette or bingo or “guessing the phrase”
set a lone presenter in a studio before a ludic platform, interacting with an invisible audience itself participating physically in the games at
home via Web site or phone. News programs solicited and transmitted
viewer-shot footage (by cell phone) of dramatic events and viewer-written
comment (by text or e-mail) on their bulletins. Viewers were enjoined to
“press the red button” to choose from a menu of options within and around
a given program; soccer fans were able, during live matches, to select
their own camera positions from which to watch the action, or to choose
to see repeated highlights instead, or to consult statistics, or to split the
screen between the full game and the performance of just one player. Not
all of these were successful: some were gimmicky, some were clearly near-
fraudulent, others futile or unwatchable. But all were postspectacular. All
presumed that the TV “viewer” (a first-wave term, construing television as
nothing but optical information) had given way to an agent, who watched,
behaved, did, and viewed in a digital/manual/optical spiral. This seemed to
take TV into a new dimension: postviewer, postscreen (connoting separa-
tion), postbroadcaster, postprogram.
The BBC’s A Picture of Britain (2005) was a flagship of such digimodern-
ist TV.22 It consisted of (1) a series of six BBC1 TV programs journeying
around the United Kingdom and discussing the influence of its landscape
on the country’s painters, writers, and composers, (2) a series of BBC4 pro-
grams showcasing photos of Britain taken by contemporary professionals
and encouraging the public to contribute their own, (3) a dedicated BBC
Web site for these uploaded amateur photos, (4) a parallel Tate Britain exhi-
bition of classical British landscape paintings, (5) a BBC radio program
about contemporary regional poetry; and more, including a coffee-table
book in the name of the BBC1 presenter with essays by Tate Britain cura-
tors. You could view it, act on it, interact with it. In the BBC’s own jargon,
this was not a “program” (antediluvian term) but a project, multiplatformed,
bundled, with content dispersed over aggregated niches, transmedia (almost
universal), and inter-referential. You could navigate around it, using it in
different ways as you felt inclined. The BBC was not its broadcaster but its
orchestrator; and not its scheduler either, since you could engage with the
project’s different facets at various times. And while the BBC1 programs
were sold on DVD as a unit, the complete cultural text here transcended
mere TV and could be mounted only once: it became an evanescent item,
a television-based single performance. This was a new cultural form, and a
new version of television.
Digimodernist TV clearly redefined the medium in the image of the
Internet, and away from its 1950s’ conceptualization as a kind of cinema or
vaudeville in your living room. It was startlingly and fascinatingly novel.
However, though wildly exciting for both young TV executives and media
analysts, it shouldn’t be overestimated. In theory, the shift from first- to
second-wave TV consisted of the death of the couch potato. Its successor
was a phoning, typing, filming, texting, double-clicking, button-pressing,
logging-on, photographing viewer-actor who “consumed” his/her programs
largely by positive physical acts or by contributing much of their material or
by moving away from the set toward other kinds of linked text. In practice,
however, second-wave TV coexisted alongside its surviving ancestor, both
as text and reception. Most viewers still slumped, slack-jawed and passive,
most nights in front of a screen pumping spectacle into their eyes. The craze
for shows enabling viewers to call or text in and “vote” or take part in com-
petitions also calmed significantly after a spate of corruption scandals. But
the fact that this revolution originated in the silent gravitational pull of the
Web and its reformulation of the self/screen interface suggested its perma-
nence, in some form. In particular, second-wave (or digimodernist) TV
came to be the focus of all new thinking, all creative energy and social
excitement around the medium. An example of this is the BBC’s My Family
(2000– ), an early evening serial sitcom structurally and formally identical
to shows from thirty years earlier, and well made and popular. Yet its social
and cultural impact was nil; it was unambiguously first-wave, predigimod-
ernist. Without achieving higher ratings, its antithesis, the acme of digi-
modernist TV and the most socioculturally dynamic, meaningful, and
influential show of its time was and is, of course, Big Brother (2000– ).

Big Brother was newer and more revolutionary than is generally recognized.
Its roots tend to be traced back to fly-on-the-wall documentaries like
An American Family, to the static cameras that endlessly filmed Andy
Warhol’s borderline exhibitionists, or the constant surveillance of the
eponymous program in The Truman Show. The radical innovation of
Big Brother was to combine its apparent reality and open-endedness with
mechanisms by which viewers could collectively dictate its creative direc-
tion, that is, make text or shape narrative. The evictions decided by public
vote (via SMS or phone call) are a plebiscite on the future development of
the show. The viewer becomes the source of textual content, part-author
of the narrative’s next steps.
To a degree, such a mechanism had already been seen, of course. Ayn
Rand’s play Night of January 16th (1934) enabled a jury composed of audi-
ence members to pass judgment following a scripted trial. Texts such as
The Unfortunates and Whose Line is It Anyway? gave their readers and
viewers responsibility for sequencing the text or supplying comedic sub-
jects, as we’ve seen. Such audience-authorship is, however, very restricted
in scope. It tends to the low level, perhaps conclusive (not productive),
permitted by an Author-God deigning momentarily to suspend her/his
omnipotence and to desist for a second from creating. Accorded an instant’s
authorship, the audience resembles a dog thrown a scrap by a glutton. Big
Brother, however, has no ostensible author, only a production company
(Endemol) and presenters. Textually the field is cleared of authorship
(almost). In this space the show dramatizes the process of narrative-making,
of text-building, in which the viewer participates. By being “real”—show-
ing “real” people just living with each other—and then creating and
provoking situations, scenes, events, Big Brother focuses on the shift from
the unmediated actual into representation and story, into structured and
recognizable narrative form. It uses three devices in particular to achieve
this:
(1) The “fictiveness” of its “cast.” The sixteen contestants for the British
Big Brother 9 who entered their house for the first time in June 2008 resem-
bled, as an ensemble, a fictional cast. They reminded me, for instance, of
the repertory company of the Agatha Christie whodunit, which repeatedly
dealt out a clergyman, a retired colonel, an ageing female tyrant, a young
woman gold-digger, and so on; and so BB9 included, as in so many previous
versions, a nice guy, a darkly handsome bestubbled slacker, a squealing,
shrieking, borderline-pathological female, a straitlaced square, an über-
camp gay man, and so on—the same types recycled, like most fictional
genres. They also recalled the casts of cartoons or pantomimes: simplistic,
hyped-up exteriorities, stereotypes, and caricatures with no shades of gray
or surprises, goodies and baddies to be booed or cheered. Something about
them suggested too Dickens’s or Fellini’s grotesques, what Forster called
“flat characters,” limited to a quirk or an ever-present oddity but with no
“rounded characters” to differ from. Again they seemed designed to echo
the casts of a Richard Curtis rom-com, including members of all ethnic
and sexual minorities, the disabled, and representatives of all corners of the
United Kingdom.
This reflection is paradoxical, since Big Brother is of course the supreme
example of reality TV; no matter how subtly planned and molded, it's script-
less and hence really not “fiction.” A precursor of this may be B. S. Johnson’s
notion of the autobiographical novel, the text with all the hallmarks of
fiction but composed of actual events. Big Brother, in many ways, gives
itself the trappings of dramatic fiction but is made up of real people doing
real things. To repeat: Big Brother is laid out as narrative-in-the-making to
encourage the viewer in as a privileged and decisive author of its events.
The kind of narrative most familiar to us is fiction; even factual stories
(history, anecdote) borrow its devices. Another instance of this is:
(2) The foregrounding of “tasks.” Every week the housemates are given a
task, prepare for it, perform it; success or failure determines in some wise
the shopping they are allowed to order for the week after. At the start of
week 6 in BB1 (2000) they receive:

their new task, described by Big Brother as “a test of memory and
reaction speed.” They have to record and memorize in sequence a
series of large photographs which will be held up over the garden
wall, all pictures of the ten original contestants plus the new arrival.
Every time a head appears a klaxon will sound, and the only time
they can guarantee there will be no heads is between 3 a.m. and 9 a.m.
They decide quite quickly to wager the full 50 percent of their shop-
ping money on the task.23

There are also mini-tasks. In week 2 that year, together with their main task
of having to memorize ten things about each other, “Big Brother sets them
a task. They must paint portraits of each other, and then mount them on
the wall as if they were in an art gallery.”24 Their shopping allowance does
not hinge on this. In week 3 their main task is a “cycling challenge,” their
mini-task is to “write, design and stage a play”; for the latter triumph they
are given a treat (a video).25 These tasks are, then, small-scale and playful,
perhaps involving elements of sport or the creative arts, or simple chal-
lenges of physical or mental dexterity. The particular ontology of Big
Brother, where the outside world is suspended, means they are often self-
referential. But, as can be seen, they have no intrinsic importance. They are
pretexts, goads to collaborations and fallings-out, incitements to bits of
action and therefore to feelings and interactions. They are fuel for narra-
tive: negligible in themselves, they stimulate instead interpersonal behav-
ior; and the provision or withholding of rewards furnishes a stake that
will induce drama and intensify their investment in the task.
Vladimir Propp, one of the earliest systematic analysts of fictional nar-
ratives, argued that fairy tales, the bedrock of Western literature, contain
recurring generic functions. Prominent among these is the “task” issued
to the hero, which might comprise ordeals or riddles or tests of strength or
endurance. The hero’s successful resolution of the task is rewarded with the
hand of the princess in marriage. In the 1960s Roland Barthes and Umberto
Eco drew on Propp’s work to find similar structures in Ian Fleming’s James
Bond novels (M as the task-giver, etc.). Big Brother’s deployment of tasks
and rewards reflects this narratological framework, but with a difference.
In fairy tales or Fleming the issuing of the task prompts an action (a quest,
an investigation) which subsumes it; it is perceived as a mission, and in this
form triggers the whole narrative with its accompanying interpersonal and
emotional content. In Big Brother, though, the task remains just a task.
Trivial and ludic, it’s just a device for getting at that same emotional and
interpersonal content. The tasks that prompt rich narrative variations for
Propp, Barthes, and Eco become here a skeletal means to a narrative
by-product.
The tasks are announced by a voice calling itself "Big Brother."
It addresses the housemates over the PA, but is localized in the diary room
where it speaks to one or a handful of them in a notably impersonal man-
ner. In this small enclosure the camera and so the viewer occupy the invisi-
ble position of “Big Brother”: the housemate speaks then to us, confiding
feelings or opinions or hearing instructions that emanate with deliberate
anonymity from where we are. We seem to be, or could be, this figure; in
any event, the show’s title, from Orwell’s “Big Brother is watching you,”
intimates that the viewer really is this person. And it’s s/he who dictates
and disseminates the narrative-inducing tasks, the situations and actions
to come. The housemates tell us what they think about the goings-on in the
house, which are in turn driven by:
(3) The artificial stimulus of “conflict.” From the start the “cast” appears
to have been chosen (and, one suspects, encouraged) to clash and bounce
off each other: some housemates are unbearably annoying, others are
acutely intolerant, incompatible extremes are hurled together; confined
and juxtaposed, they cannot but make narrative out of their conflicts and
turmoil. Again narrative theory lurks behind this: for years American
screenwriting gurus have asserted dogmatically (and exaggeratedly) that
story is propelled by conflict. And a battery of tricks is employed by the
producers to try to incite it, most melodramatically through the nomina-
tions for eviction process by which the housemates, already forced by their
seclusion and inactivity to bond, are compelled to secretly betray each
other, with often anguished results. Then there is the stream of minor
and quotidian provocations: the ceaseless supply of alcohol; the selection
of housemates with unsettled sexual preferences, of congenital flirts and
mouthy exhibitionists; the inclusion of nasty, bigoted, treacherous, tyran-
nical, and psychologically unstable housemates; the visually and ergonom-
ically dislocating “set”; the “imprisoning” and disciplining of housemates
for trivial misdeeds; and, throughout, the enforcement of a high level of
boredom from sensory deprivation, designed to enhance the contestants’
nervous tension. Such ploys function to whip undiluted reality (ordinari-
ness, people talking, eating, sleeping) into stories, events, dramas, narra-
tives you could recount to friends, with characters, with heroes and villains,
helpers and traitors. To create conflict was to fabricate narrative out of
the flux of the real; to move from dreary “stuff ” to structured, developing
story lines, to fictions. The genius of the program was to show this happen-
ing, and to draw the viewer into deciding how it happened.
The goading to conflict meant that every series involved some “scandal”
about fights, bigotry, loathsome behavior, and so on. Mistakenly, many
believed these were either “cynical” ruses to attract viewers or “shocking”
proof of the show’s degeneracy, when in fact they were integral to its tex-
tual identity. Conflict in Big Brother, like its tasks, was constrained by the
show’s self-enclosedness. In full-blown fiction conflict arises out of deep
sources such as sex or family relationships or travel or sociopolitical con-
texts. These were awkward or impossible for a group of idle young strang-
ers locked in a building; and so the best they could come up with was to fall
out. Finally, the show’s conflict was as shallow and banal as its tasks, another
mere means to an end. Yet this too was inescapable. In “real” fiction, tasks
and conflict are narrative material, substantial and far-reaching. Since Big
Brother was actually about narrative-making, it diminished them to pure
instrumentality; they wound up weightless, and finally insignificant.
So by the time the “viewer” reaches for his/her phone to vote, the dis-
tinctive textual apparatus surrounding this act lends it a special authorial
meaning. Narrative will come of it; it’s creative, story-making. The house-
mates have told Big Brother (told us) who they wish to nominate for
eviction; then the viewers select someone to kick out of the house. In this
instance the phoning viewer shifts slightly from the witness to narrative-
making to the narrative-maker in chief (there are, apparently, no others).
S/he determines who will be left in the house next week, and therefore
what sorts of interpersonal behaviors, what kinds of events and dramas
and scenes and dialogues will be played out then. True, the person removed
leaves genuinely, not just narratologically; but then the whole show is
predicated on the fuzziness of the line separating reality from story; and
calling up or texting to cast a vote, while an interference in someone’s life,
works primarily here to shape and direct the future development of a TV
program itself discreetly cloaked in the devices of fiction.
The audience’s authorship can then be described as productive, creative;
privileged; ungainsayable, absolute. The viewer here is the supreme autho-
rial figure, the eviction the supreme narrative act. And the ostensible
unformedness of the program, supposedly just a bunch of real people in
a house, chatting and stuff, only strengthens the viewer’s sense of his or
her determination of narrative material, whether justified (the evictions)
or illusory (the tasks). Moreover, in its collectivity and anonymity, this
is stress-free authorship: it might be fun, it’s easy and unburdened with
responsibility, and yet it’s really a text-making act.
At the end of each series, the viewers, by definition, have chosen their
favorite housemate. In Britain, creditably, this has enabled a TV audience
to display a tolerance in advance of its elites by, for instance, plumping for
a transsexual. Structurally and formally Big Brother is fascinating and rich,
and socially it has done much good. In detail, though, stretched over the
one thousand six hundred hours of any series, the show is all but unendur-
able: this is again the difference between grammar and rhetoric, textuality
and content, which inflected the chapter on Web 2.0, and for precisely
the same reason. The message board or blog, translated into television,
would be Big Brother. And yet the show’s dullness or mindlessness is inevi-
table, not a willful shortcoming as its vast array of hostile critics seem to
imagine. If a show is to document the transformation of reality into narra-
tive it’s going to need a large dose of the former, and on a twenty-four-hour
basis other people’s reality often is dull. I once watched, for two or three
minutes, someone lying on his bed regarded by a static camera using some
kind of night light to penetrate the darkness; occasionally he opened his
eyes and stared into space, occasionally he shut them. And this was the
highlights show. It felt like the death of television, to be honest. Anyone
who watches the round-the-clock coverage will have “enjoyed” seeing
yawning people slurping mugs of tea or disheveled people urinating, or
listening to conversations so desultory and vacuous you think you’re
going crazy. But you can’t see narrative come into being without seeing
the shape of things prior to that narrative; and although watching almost
anything involves, in a sense, watching a narrative unfurl, Big Brother inno-
vatively gets you the viewer in there to decide on the unfurling process.

By Big Brother I want to suggest, of course, a raft of apparently real drama
shaped every step of its way by a remotely voting public. However, digi-
modernist TV takes many forms. As long ago as the twentieth century, Ally
McBeal (1997–2002) used computer-generated effects expressionistically
to visualize and dramatize the heroine’s thoughts and feelings. Attracted to
a “yummy” man, her head transmogrifies into that of a panting dog. Imag-
ining herself with larger breasts, her bra cups swell in the mirror until a
strap pings. Opening the door complaining about a man to find him stand-
ing there, Ally’s burning embarrassment is conveyed by her facial skin
turning a hellish red and smoke coming out of her ears. Feeling humiliated
in a meeting she is reduced in her conference chair to preschool size, her
feet swinging; digitally altering Ally’s size within an unchanging environ-
ment precisely evokes the emotion she has elsewhere described. These
digital effects—brief, sharp, witty—were actually just another mode for
conveying interior states in film, and they slotted in alongside Ally’s voice-
overs (a technique several decades old) and the use of dramatized cutaways
(as ancient as cinema itself). It was a new weapon in a very established
armory, though one used by the series’ creator David E. Kelley with notable
panache. These effects are throwaway, quirky, and delightful, or seek inexo-
rably to delight: even when Ally hallucinates dancing babies as the expres-
sionistic rendering of her sense of her ticking biological clock they’re
still adorable, in no way a (modernist) image of torment; Ally dances with
them.
Conversely, the success of the Channel 4 comedy Peep Show (2003– )
rests on the use of digital technology to explore minds in the world. This
moved beyond a funky, revamped expressionism toward a kind of ironic
subjectivism. In its premise the show is unoriginal: twentysomething
same-sex friends share accommodation in the big city, go to work at low-
level jobs or harbor unrealistic “creative” dreams, pass through a series of
unstable and abortive sexual relationships, and otherwise relax with TV,
movies, and alcohol. The uniqueness of Peep Show is formal and technical:
David Mitchell and Robert Webb, the apartment-sharing leads, play many
of the scenes with lightweight digital cameras strapped to their foreheads,
so that the viewer sees much of what they do—running, drinking, writing,
kissing, urinating, half-inadvertently glancing down a girl’s top—through
their eyes. When characters speak to each other—and this is a show ori-
ented around one-to-one conversations—they talk to the camera as though
it were their interlocutor. Much of the comedy plays off the gulf between the
thoughts in the leads’ heads (voice-over) and their lives in the world, joined
by this use of first-person point of view. Mark (Mitchell) is internally filled
with self-hate and inchoate rage, but externally repressed and “nice”; Jez
(Webb) imagines himself cool and liberated but socially is hopeless and
hapless. Posing as a student to seduce a girl and asked the name of his tutor,
Mark’s head camera swings in panic across a college notice board as his
voice-over, impotently conscious of his desperate ludicrousness, thinks:
“Keyser Söze?”26 A recurring joke has Jez, much the sexually more success-
ful of the two, stop listening to women when they’ve been talking for a
few seconds: his rambling, priapic, and airheaded thoughts almost drown
out the girl’s voice while her pretty face gazes eagerly and unwittingly up at
him/us. The overfamiliarity of Peep Show’s objective situation permitted
a formally original exploration of subjective states in the world; while also
very funny (though an acquired taste), it evoked nuances of thought, feel-
ing, and character hardly before seen in TV and film.

Three other formally innovative shows betray the hallmarks of digimod-
ernist television more diffusely. ITV’s Who Wants to be a Millionaire?
(1998– ) reinvented the quiz program in subjectivist terms, giving an ancient
genre a shot in the arm without departing from its traditional appeal. While
viewers had long shouted out the answers from the safety of their couch,
quiz shows had tended to generate drama and tension by putting contes-
tants under pressure, often by making them compete against the clock.
Employing sepulchral lighting and menacing music to create atmosphere,
Millionaire’s producers slowed their questions to snail’s pace, inching
along at an interrogative speed one-tenth that of certain other quizzes.
Consequently its focus devolved less to the correctness of the response and
more to the interior processes, the reasoning and wondering and hesitating
and deciding, the anguish and joy and fear and torment, which preceded it.
It was a psychological, not an epistemological, quiz. With the multiple-
choice questions and their possible solutions emblazoned across the
bottom of the screen, the viewer was prompted into the role of the pseudo-
contestant, regarding and reflecting on the same tricky posers. This power-
ful imaginary identification was consolidated by a radical breaking of the
invisible bubble traditionally enwrapping the lonely contestant: the “ask
the audience” and “phone a friend” options placed exterior figures urging
him/her to choose such-and-such an answer within the game itself either
as a keypad-pressing anonyme or a disembodied voice. Each innovation
responded to a wish to involve the “viewer” with the spectacle hitherto
mounted on his/her screen; though illusory engagements, they fulfilled
digimodernist TV’s ambition to have its action intertwine with those of
the people at home.
ITV’s Baddiel and Skinner Unplanned (2000–03, 2005) brought digi-
modernist haphazardness to prime-time comedy. Frank Skinner and David
Baddiel, alternative comedians who had once shared an apartment, con-
ceived of a show in which they would sit on a couch before a studio audi-
ence, and talk. No script, no rehearsals; this was Whose Line is It Anyway?
emptied of its structures and parodies, its games and genres. They chatted
to each other and to the audience, and ended (or rather stopped) with a
random song. The summit of televisual haphazardness, the show’s feel and
content drew on the stars' long experience of observational stand-up as well
as their familiarity with each other. Its title sequence featured a happy audi-
ence dancing into a studio singing “it’ll never work” followed by the caper-
ing stars adding “and neither will we again.” But it did work: it was funny
and original, though in practice it felt, interestingly, less radical than had
been supposed. Establishing an easygoing collective mood, the stars made
their audience crucial to the content and direction of the evolving text. The
tone was discursive, allowing the relaxed comedians to ramble and rumi-
nate on various topics; it avoided the on-trial tension of improvisation and
finally yielded no textual initiative to its audience, who chatted back when
addressed with a similar “alternative” cool. It was as if Whose Line . . . ? had
finally arrived at full-blown digimodernism: apparently real and endless,
haphazard and engulfed in its present.
Channel 4’s Green Wing (2004–06, 2007) billed itself as “surreal” comedy.
Its most distinctive element was the acceleration and slowing of scenes
to get to and emphasize their visual point. Characters walking into or
out of shot would be whipped along more quickly, while their comedically
vital reactions, gestures, expressions, and movements would be observed
over several times their actual duration. This redistribution of viewing attention via an editing suite and the resultant disruption of the rhythms of
“normal life” generated an oneiric visual style. Bizarre and plotless, Green
Wing seemed driven instead by the wayward irrationalities of sexual desire,
a group id seething and relentless and peculiar and lusting, and placed on
screen. Correlating the rhythm of watching not to the objective action
and its timing but to the speed at which viewers’ brains would process the
visual information, the show achieved a perfect mesh of content-based
subconsciousness with the interior thought patterns of its audience. The
characters’ complex psyches propelled the story; the viewer’s mental habits
edited it. “Surreal” wasn’t appropriate; Monty Python had been that, but
they kept it to themselves. Digimodernist TV, however, can leave the
viewer neither alone nor idle.

With the commercialization of the DVD box-set this trend shifts from
individual shows to a larger issue. For a reasonable price now you can
acquire a season of House or Curb Your Enthusiasm, DVDs occupying so
much less informational and domestic space than videos did; at home you
can take in several episodes in one night, and a five-month run in a week.
This is potentially far-reaching, the tip of a digital iceberg. Traditional com-
mercial TV relies on a sleight of hand: watching their favorite shows, view-
ers may imagine they are being sold programs by the channels that make
them, since this is what happens when they go to the movies or the
theater—they purchase blocks of entertainment. Commercial TV channels
in fact sell quantities of viewers to advertisers; the programs are “bait”
held out to attract the attention of large numbers of potential customers so
that companies can show them their products. Commercial channels
deliver these people to advertisers, for which service they are paid money
with which to concoct future bait.27 Consequently such TV is virtually free
to viewers; they aren’t “buying texts”; they’re being sold. This point isn’t
really arguable since it’s unquestionably how commercial TV programs are
funded. There’s a moral issue about the exploitation or deception of the
viewer, but this arrangement necessarily impoverishes the TV text too:
functioning as bait (rather than “dinner”), it glows with immediate luster
and fascination, and soon after seems thin and unsatisfying: no great art
ages so evisceratingly as “great” commercial TV.
DVD box-sets, however, redefine TV programs as the textual equivalent
of novels. When you buy them you eliminate the advertisers; you acquire
and peruse Seinfeld as you would the latest Martin Amis, at your own
speed, when and where you feel like it, in a direct and personal textual
experience. This would seem to have impacted on the shows themselves:
many box-set favorites have a density and richness not found in traditional
programs; they’re dinner. This development dates back, I think, to the
advent of cable reruns and videotape sales, which offered shows the possi-
bility of endlessly repeated screening of episodes. Today, a program like
The West Wing is ideally suited to sustained viewing across two or more
hours a night: it has the subtlety and complexity, and demands the consec-
utive close attention, of a literary novel. Box-sets may have helped stimu-
late then a general improvement in the quality of TV drama and comedy.
However, economically they aren’t TV at all; they’re enjoyed beyond the
reach of the advertisers who bankrolled them. As for the channels that
commissioned them, they’re reduced to advertisers too: a way of finding
out which box-sets to buy.
In TV jargon this is “self-scheduling”: viewing structured by the mood
of the consumer, not by the decision of a controller, and it’s reinforced
by the availability of shows online, also eluding their paymasters. Endless
industrial headaches lie ahead. This is perhaps the culmination of the
arrival of digimodernist TV: after the viewer as textual writer, producer,
and presence, the viewer as scheduler, as controller; the viewer as
channel.

Radio

The first song played, and therefore the first video screened, by MTV in
1981 was the Buggles’ “Video Killed the Radio Star.” It bade farewell to the
era of radio, engulfed by new technology: “Pictures came and broke your
heart/Put the blame on VCR.” Today, the VCR has gone the way of the
penny-farthing and the abacus to the museums of design and technology,
and, at least as a culturally significant mode, the music video with it. MTV
screened almost exclusively videos throughout the 1980s, and the form
became the focus for much postmodernist analysis: Madonna’s clips were
the subject of cultural theory conferences and academic articles. But by the
mid-1990s the majority of MTV’s programming was nonmusical, and
the channel has increasingly been dominated by teen-oriented comedy
and reality TV, an abandonment of its initial ethos that damningly indicts
the lack of vitality of the contemporary music scene. As an art form the
music video now seems exhausted, devoid of creativity, interchangeable,
and dull. Moreover, the rise of the MP3 player, the iPod, and file-sharing
has above all reconstituted music as primarily an audio experience to which
the mind alone supplies images.
Radio, however, is thriving in the digimodernist era. Digital technology
has enhanced the experience of listening, producing a crystal clarity vastly
superior to the distortion and strangulation of yore; it has improved access
to programs by permitting their transmission via the Internet, TV, and cell
phones as well as traditional sets; podcasting, “listen again,” and technolo-
gies like the BBC’s iPlayer have brought shows to more listeners, allowing
both the creation of personalized archives and a greater listener control
over the circumstances of textual reception; and the number of stations has
increased exponentially. Output, listeners, convenience, quality, access,
diversity, control—digital technology has expanded and improved them
all; radio never had such a good friend. In Britain, where the BBC has
played an enlightened role, the effects are clear. A newspaper article titled
“Radio Enters a New Golden Age as Digital Use Takes Off ” noted that:

The digital revolution and the expansion of new ways of accessing
information through the Internet has [sic] given a huge boost to one
of the older and more traditional forms of electronic media—the
radio . . . with the number of listeners in Britain at a record high . . .
The figure . . . is attributed to growing numbers of people tuning in
on the Internet, digital television and mobile phones . . .
Jane Thynne, a broadcasting critic and writer, said BBC radio was
benefiting more from the digital era than television. “ . . . [Podcasting
is] essentially what radio has been doing for a long while anyway.”28

In August 2007 it was estimated that one quarter of British adults accessed
radio digitally, with digital-only stations increasing their audience by
600 percent in four years and listeners to podcasts up by 50 percent in
twelve months.29 Another survey suggested that the availability of podcasts
was increasing overall radio listening as new programs were thereby sam-
pled and discovered.30 But not everyone is a winner: in 2008 it was reported
that “almost 80% of digital listening is to stations already available on ana-
logue,”31 and that many commercial stations were struggling to compete, in
part due to the “record numbers” tuning in to the BBC but also to “under-
investment in new content.”32 Indeed, while the unchanging cheapness of
radio content underlies the beneficial impact of the new technology on
transmission and reception, the nature of digimodernist radio textuality is
less certain.
Textually, comparatively little of radio’s output bears the hallmarks of
digimodernism. Perhaps not 10 percent of the programming of Britain’s
five national BBC stations can be described as even vaguely digimodernist
in function. A reliance on prerecorded music, orienting shows around
material created on a previous occasion by people outside of the station,
will make a text rigidify even if transmitted live; structurally the nature of
the extemporized chat of the DJ linking these musical pieces has hardly
altered in half a century. Prerecorded spoken material is equally traditional,
and frequently restricted to professional voices. However, it is in the area of
speech radio that a digimodernist textuality becomes possible. Among the
BBC’s national stations it’s the youngest, 5 Live, founded in 1994, which has
a virtual monopoly on the form (this doesn’t make it, of course, necessarily
the “best” station, either commercially or aesthetically), and we can survey
a handful of its most typical shows to gather an idea of what a digimodern-
ist radio might look like.
Kicking off her show on the morning of September 3, 2008, Victoria
Derbyshire announced two subjects of debate: the unfolding woes of the
English soccer club Newcastle United, and (as the academic year got under-
way) the government’s educational reforms introducing new diplomas
and extending the school leaving age. The subsequent discussion would
break off for weather and travel updates and news bulletins, the latter
sometimes featuring developments within these stories; so to the “liveness”
of her format was added the “right-nowness” of the questions under
consideration, as the debate shifted according to emerging information.
Inviting views, asking rhetorical questions of her listeners and goading
them with possible but perhaps irritating opinions, Derbyshire sought to
provoke their responses; endlessly giving out the show’s e-mail address,
telephone and SMS numbers, she opened up the means by which they
could participate in the virtual symposium. The callers, whose voices pre-
dominated, were obliged to self-identify by forename, place of residence,
and fundamental position vis-à-vis the issue at hand (regional sports jour-
nalist, Newcastle fan, fan of another [named] team; teacher, employer, par-
ent, teenager, etc.); the program eschewed Web 2.0’s pseudonymity, making
contributors responsible for their words (no trolls here). While nobody
was unassailable, you felt that the background and status, the credentials of
the caller, mattered in assessing their contribution; on Derbyshire’s show in
general, views are not labeled true or false solely by virtue of who states
them, but neither are they expressed in the social and personal void of,
say, a message board. At the same time, and in the spirit of Web 2.0, abso-
lutely anyone can voice their opinion in principle, though callers are obvi-
ously screened: it’s open to the whole nation (if equipped with the necessary
gadgetry) and so approaches a notion of democracy. True, it’s not as intel-
lectually scrupulous or brilliant as a round table of academics might be on
BBC Radio 4; nevertheless, the show gives a positive sense of the rational
and informed mind of Britain as a whole, gaining in equality and accessi-
bility what it lacks in high-level insight. It’s like, indeed, a better Web 2.0.
Debate is vigorous, committed, considered, unpretentious, and articulate;
if no breakthroughs in human understanding are achieved, the show fos-
ters respect for the possibilities of open discussion.
The callers’ voices are mixed with text messages and e-mails received
from around the country on the same subjects, read out by Derbyshire: the
former are truncated and simple, the latter more subtle and extended, and
they too feed in their written modes into the spoken debate as it unfolds.
Derbyshire’s role in all this is fascinating. She’s deliberately reactive: she
asks questions, unpacks the implications of contributors’ remarks, greets,
encourages, and thanks callers. While the majority of the latter pile their
thoughts successively each on top of the last, she can also link simultane-
ous callers to each other so they can interact more directly. She seems to
shift perspectives and views throughout in order to manage the discussion,
to tease out nuances, identify conflicts and problematics, and to keep the
debate concise and focused (cutting off when necessary). Self-effacing and
deceptively withdrawn, she’s skilful, tactful, and sympathetic; but she’s firm
and controlling too, maintaining an implicit insistence on the quality
of debate, its cogency, pertinence, and shrewdness, protecting discursive
standards: stupidity, ignorance, arrogance, and abuse get short shrift indeed.
Whatever the caller’s view, she retains her stance of minimal disagreement:
offering contradictory evidence and pinpointing argumentative flaws, her
role is not to state opinions about the issue but to enforce an ethics of
discussion.
In short, she never allows a finality to obtrude. Consequently the textual
onwardness and haphazardness that she oversees so adroitly are destined
for intellectual inconclusiveness too. (This is digimodernist radio’s version
of endlessness.) Listening to discussions is stimulating but also finally frus-
trating. This stems in part from the BBC’s position as a public service
broadcaster; on commercial radio, by contrast, debates such as these often
give the impression of having intellectually been concluded several decades
before they went on air. The trajectory is therefore horizontal, toward the
clarification of all argumentative points and angles, rather than a vertical
shift toward higher resolution, even "truth." The success of the
format derives, I think, from the unambiguous establishment of stringent
rules of debate, ones that valorize reason, objectivity, coherence, skepti-
cism, respect for one’s interlocutor, and the primacy of evidence. Der-
byshire’s soft voice imposes all this: her power is almost absolute, like her
silence.
5 Live’s “Drive,” which airs Monday through Friday from 4 p.m. to 7 p.m.,
offers a different perspective. It’s intended for workers on their way home
(hence its name), and rounds up and explores the day’s major news stories,
interviewing participants or experts: for example, on September 23, 2008,
and following a keynote speech by the British prime minister, one of the
presenters (Peter Allen) quizzed a government minister; shortly after, and
in the wake of the conviction of a woman for murdering her disabled
daughter, his copresenter Rachel Burden interviewed the policeman who
had led the investigation. This was not, then, so much a phone-in show as
a phone-out one, which, in place of Derbyshire’s “democracy,” traced and
called up high-profile and implicated professionals for their opinions. But
the listener could still contribute material. Allen and Burden invited texts
and e-mails about the show, and read some of this instant feedback out. On
the whole such commentaries, marginal to the show’s purpose, brought
spice to it: they could be witty, original or piquant, or reveal unusual but
valid takes on the day’s events. In all, they were noticeably funnier, cleverer,
more individual and unexpected than anything the presenters said. Allen
and Burden’s style, in keeping with 5 Live as a whole, was warm, engaging,
unpretentious, good-humored, and acutely interested in the world. But
the material sent in from outside the station, though technically gratuitous,
enriched both the news content and the show’s interpretation of it. You
could just as easily have made the program without it, and before the inven-
tion of the SMS and e-mail you would have; you could easily listen to it
now without noticing these contributions, randomly scattered at roughly
twenty-minute intervals; like culinary spice, though almost weightless they
added something extra, making the textual dish more palatable, more dis-
tinctive and interesting.
At around 5:30 that day Burden referred to reports coming in of an explo-
sion in the center of the city of Bath, and immediately invited a second
source of listener-made material: eyewitness accounts via cell phone, e-mail,
or text. On a regular basis the show runs travel updates detailing accidents
and tailbacks affecting homebound commuters, which include informa-
tion sent in by stranded motorists about their own particular impasse;
other drivers are then advised to find an alternative route. In both cases
this show—and others like it on radio and TV—thereby encourages what
Web 2.0 calls “citizen journalism”: the provision from affected private indi-
viduals of hard news that can then be taken up and diffused by mass-media
broadcasters. Once again, this enriched the show: itself supposed to accrue
stories and find travel information, “Drive” used its listeners as unpaid and
ad hoc reporters, as uncontracted stringers, and so extended its editorial
grasp out from a claustrophobic studio across the country as a whole.
The uninvolved listener received an improved journalistic service; the
broadcaster’s product was, for free, significantly upgraded; and the ideal-
ism propelling contributors—the desire to bring truth or to help others—
was laudable. Once again, this textual digimodernism seemed to suggest,
even in banal circumstances, the workings of a healthily democratic spirit;
Web 2.0 without the populism, perhaps.
5 Live’s “6-0-6,” on the other hand, sets the phone-in in the consumerist
jungle of the leisure industry. Broadcast just after the conclusion of the
day’s professional soccer games (at a time indicated by its name), it permits
homeward-bound fans to vent their postmatch emotions, and contribu-
tions, whether euphoric, vindictive, or despairing, tend to be voluble, flu-
ent, and impassioned. The callers are all defined as fans of a particular club
and valorized as eyewitnesses of its match; they are heard by Alan Green, a
commentator who had of course been present at only one particular fix-
ture. In the course of the show Green speaks little, merely reacting to and reflecting on the callers' points. Although the fans of twenty clubs may ring him
on one evening, for each conversation he positions himself as a co-fan
wanting only the very best for the caller’s team. “6-0-6,” though popular,
has none of the qualities of Derbyshire’s show: it lacks continuity of subject
and the ethic of objectivity, accepting instead a narrowness of focus and a
tone of frenzied partisanship. Green will challenge what he sees as espe-
cially untenable views, but mostly he sympathizes with all misery and
empathizes with all joy. The show can in turn degenerate into paroxysms of
incoherent loathing, overdone anguish, or rebarbative gloating; insight is in
short supply, along with proportion. It resembles a kind of talking cure for
dangerously emotional soccer fans who can share their near-hysteria with
a friendly ear; rather than inviting callers to describe their childhood, Green
asks whether the second goal was offside.33 Many of the callers are extremely
articulate and analytical, but the show is vitiated by its embrace of the myth
of the myopic bias of the “true fan.” In this, it’s a product of Britain’s soccer
culture, which has never accepted the idea of the “football intellectual” with
his weird objectivity and cool-headedness. Gabriele Marcotti, also employed
by 5 Live, might have chaired a very different debate.
The conclusion is that much depends here on the style of the presenter
and the ethos s/he establishes. All three shows instantiate the digimodernist
traits of onwardness and haphazardness. They also exemplify digimodern-
ism’s transfers of creative terminology: the role of the “writer,” the origina-
tor of textual content, is partly taken, in varying ways, by the “listener”; that
of the “presenter” occasionally resembles the show’s producer, managing
others’ inventiveness; on “6-0-6” Green’s primary function is to listen.
They’re also evanescent texts: it’s the prerecorded music and comedy that
tend to get podcasted, ironically. Of course this is far from exhausting the
range of possibilities of a digimodernist radio, and each has its strengths and
limitations. Such forms have also become increasingly common elsewhere
in British radio and TV, leading to transmission of audience feedback
that can be shallow or crass; and although radio’s “liveness” ideally adapts
it in principle to digimodernism, this has no necessary implications for the
nature or quality of such a program.
A day spent listening to such shows might end with Richard Bacon’s
round-midnight phone-in for 5 Live, which, like Derbyshire’s twelve
hours earlier (or later), invites “listener” comment on the latest major
news stories. Bacon plays his outside intervenants off against studio guests
(generally minor political or media figures) to create discussions that
mingle his own deliberately emphatic but scattershot views, his milder
voice reading out texts and e-mails, the divergent positions of his flesh-
and-blood panelists, the contributions of his transitory, disembodied and
“ordinary” callers, and the interactions of the latter linked up to one
another. The tone is voluble, irreverent, and entertaining, like an argument
in the pub; there are no democratic pretensions here, and no conclusions
either. There are many ways, it seems, to skin a cat successfully; this surface
has only been scratched.

Music

In the digimodernist era rock and rock-related pop music are exhausted
musical forms. The time of their creativity and cultural achievement is
decisively over. In this they now resemble jazz, which continues to be
recorded and performed, bought and appreciated, but with no expectation
that any significant new development in its artistic history will ever again
occur. Yet rock and pop, though moribund, still impose the general aes-
thetic criteria by which we understand and value the contemporary arts:
film and TV in particular (also the novel) are in thrall to the ideologies
previously laid down by an art form that is today played out. This leaves us
historically stranded: our cultural king reigns over us dead and unburied.
To argue this, though, is to enter all sorts of murky waters. The era of
rock as an interesting and vibrant form, 1956 to (say) 1997, was more or less
that of postmodernism; and yet its vigorous espousal of authenticity,
passion, spontaneity, and self-expression was, on the face of it, embarrass-
ing and inimical to a cultural-dominant favoring the waning of affect,
depthlessness, the decentered self, irony, and pastiche. Lyotard does not
(to the best of my knowledge) mention rock, and Baudrillard traverses
America without noticing it (he prefers movies).34 Jameson reduces it in an
early article to an item in a list of postmodern examples: “and also punk
and new-wave rock with such groups as the Clash, the [sic] Talking Heads
and the Gang of Four.”35 In Postmodernism a similar sentence includes
his sole remark on the subject (the film Something Wild gets nine pages):
“and also punk and new wave rock (the Beatles and the Stones now
standing as the high-modernist moment of that more recent and rapidly
evolving tradition).”36 This is badly misinformed, as well as uselessly
brief. Theoretically orphaned, postmodernist critics got very excited about
sampling (more characteristic of hip-hop) and video (ads for songs, akin to
movies). Simon Frith noted in 1988 that “[i]n the relentless speculation on
mass culture that defines postmodernism, rock remains the least treated
cultural form."37 Rock has little or no academic status: you can quote
Godard in an article in Modern Fiction Studies but not the Stones; critiquing
television (theorized by Bourdieu, Derrida) is more credible than critiquing
rock (French TV is socially, if not culturally, important; French rock is neither);
and Christopher Ricks is generally perceived to be on vacation when
studying Dylan. Consumer-oriented rock writing tends, in this academic
void, to be historically unreliable, culturally philistine, temporally narrow,
aesthetically tendentious, and only tangentially interested in the actual
music (privileging legends and hype instead). It’s permeated by what Frith
has criticized as the “common sense of rock,” the belief “that its meaning
is known thoughtlessly: to understand rock is to feel it.”38 This ambient
intellectual nullity reproduces rock’s own irrationalist ideology, its emphatic
valorization of the intuitive over the studied: the Scott Fitzgerald reader
who doesn’t get it (Dylan), the uncool teachers who taught me (the Beatles),
throw your schoolbook on the fire (Bowie), school’s out forever (Cooper),
we need no education (Pink Floyd) so leave this academic factory (Franz
Ferdinand). Rock is suffused by a rejection of the values of education—
veracity, context, judiciousness, theory, perspective, knowledge—they’re
uncool, unrock.
There are other reasons why rock was intellectually unfashionable (too
male, too white, too unFrench . . .) but the principal legacy is a gulf between
artistic achievement and critical/academic evaluation. The best of rock
(Blonde On Blonde, Revolver, The Velvet Underground and Nico, Forever
Changes, Astral Weeks, Exile on Main Street, Horses, Marquee Moon, the Sex
Pistols’ four classic singles, etc.) is a towering and lasting cultural triumph;
at least as great as anything of its time in any other medium; hugely influ-
ential on every other art form from film (Scorsese, Coppola, Tarantino) to
classical music (Glass), the novel (Rushdie, Amis) to television (too many
examples to cite); deeply and richly meaningful to tens of millions of people;
and probably the greatest songs ever written in English, and conceivably in
any language (I’m in no position to adjudicate this, but would welcome very
warmly the song that’s better than “A Day in the Life,” “Marquee Moon,” or
“Madame George”). Rock also became, in a way that videogames can only
envy, an art form in its own right. But now it’s over.
Four versions of rock. One, as an ethos, an aesthetic, rock is: dynamic,
abrasive, dramatic, immediate; communicative, emotional, exciting; vary-
ing in mood from exultant to terrified, reassuring to threatening, but always
strong, intense, committed; apocalyptic, anxious, disaffected, alienated;
thoughtful, open, curious, accessible; urban, contemporary, hip, cool; eman-
cipatory, libertarian, skeptical; sensual, sexual, hedonistic; druggy, vision-
ary; perhaps not performatively complex but lyrically rich; white, male,
young, English-speaking. Not all of these are essential or sufficient quali-
ties; there are canonical rock texts lacking most (though not all) of them.
But they define rock as a cultural hegemonic: they are the aesthetic traits
desperately sought by every film producer, TV controller, and publisher
of fiction.
Two, as an afterlife of historical Romanticism, rock: emphasizes the
individual and personal experience against the demands of an oppressive
society; valorizes freedom, self-expression, spontaneity, a return to
nature, political revolution, and social nonconformism; plays with anti-
Enlightenment, mysticism, drugs, and sexual unorthodoxy; implodes into
the occult, violence, madness; and fetishizes the figure of the unloved,
intense, suffering artist-hero burning bright and dying young. The Doors
cited Blake, Suede quoted Byron. After Rubber Soul the Beatles juxtaposed
a cult of the child with a journey into hallucinogen-fueled “visions”; after
“Satisfaction” the Stones explored the noble savagery of the sexually and
violently primal. This is rock as social meaning: as authenticity and coun-
terculture idealism, and the danger it represented to society.
Rock’s post-Romanticism severs it from pop and rock ’n’ roll’s “romance,”
but it is rarely sweetly Romantic in tone. Version three: as a lyrical/musical
form of late modernism, rock: conveys the sound of the city (discordant,
mechanical, cacophonic, overpopulated), the imagery of the urban (the
street), and the feel of modernity (dislocation, loneliness, terror, despair);
fetishizes speed and the machine; is hypnotized by images of war and dic-
tatorship, and haunted by Eliot’s apocalyptic nightmares; valorizes experi-
ment, can be obscure or considered obscene and censored; and draws
water at the wells of symbolist and high-modernist poetry (Rimbaud, Eliot
again) and avant-garde music (Stockhausen, Cage). This is rock as artistic
achievement: the burden of its claim to cultural significance resides here.
Modernism was brought to rock by Bob Dylan alone,39 between the writing
of “Mr Tambourine Man” in February 1964 and the release on August 30,
1965, of Highway 61 Revisited.
Four: postmodernism, as brought to rock40 in 1972 by David Bowie on
Ziggy Stardust and the Spiders from Mars, emphasizes and valorizes: perfor-
mance, theatricality, role-playing, persona, camp, sexual and gender inde-
terminacy, quotation and allusion, the marriage of the marketplace and
subversion, nostalgia for a lost history, depthlessness, and the waning of
affect. Though this seemed contrary to the sacred values of “authenticity”
and “self-expression,” it finally achieved acceptance, especially in Britain,
through blending almost indistinguishably with the other versions of rock.
Indeed, in practice these versions existed almost solely in alloyed form:
British punk, for example, mixed all four (role-playing as Nazis, terrorists).
This makes analyzing rock by such categories almost futile (and I shall stop
here). However, it can be seen from this that all the aesthetic models of
rock were shaped during a relatively brief period now long over (1964–74),
and that nothing conceptually new has been added since. If rock is
exhausted then the alloys of these versions, fashioned by a generation of
artists almost half a century ago, are all worked out; the composite template
such bands and singers established is worn out (not “music” itself). You
can also (version 4½) see rock as a development from blues and a response
to its sociopolitical times, but narrowness here will kill our appreciation of
its cultural-historical achievement.

Like a four-line poem or a story of a thousand words, the individual song
can gleam like a jewel or knock you out, but it is too slight to become art of
the highest order. There’s not enough substance there; and acclaimed song-
writers from non-English speaking countries like Serge Gainsbourg have
accordingly insisted that song is a minor art. Rock, however, invented for
itself its own cultural unit, a new signifying form: the album, a coherent
suite of songs, forty or fifty minutes long, with the textual range, complex-
ity, richness, and variation that mark enduring artworks. Great albums had
appeared before 1965, such as Robert Johnson’s King of the Delta Blues
Singers (1961) or James Brown’s Live at the Apollo (1963), but they were
adventitious: by serendipity they just happened to contain an awful lot of
terrific songs. There were also immortal jazz albums, like Miles Davis’s Kind
of Blue (1959), but jazz never embraced the album as an expressive form
the way rock did. The songs on a rock art-album belong only there: they are
distinct but integral parts of a greater whole, they contribute to something
beyond themselves, they are linked thematically as well as sonically, flow
into one another and so extend and enrich each other. This coherence or
unity arises organically, though the “concept album” attempted, usually ham-
fistedly, to impose one artificially. The first such art-album was Highway 61
Revisited, rock's urtext (it's sometimes mistakenly thought to be Rubber
Soul, released three months later).
Until 1965 the dominant form of pop and rock (or rock ’n’ roll) was
the single. Albums usually comprised the artist’s recent singles plus some
inferior tracks (the good stuff went out as singles; there was no other kind
of “good stuff ”). Not many such albums survive, crippled as they were by
so much third-rate material. The Rolling Stones’ Aftermath (1966), how-
ever, is typical: the singles first, followed by enough rushed and uninspired
songs to make a saleable commodity, their choice so random the track
listing can vary widely from one country to another. The title of such an
album is often an advertising slogan (Meet the Beatles [1964]) or that of a
hit single (Dylan’s The Times They are A-Changin’ [1963]): given no textual
identity, they’re basically untitled.
The art-album began then with its own aestheticized title, often opaque
and suggestive but autonomous, while its track listing hardened into a set
as inflexible as the chapters in a novel. In its independence it occasionally
separated completely from the world of the single: Astral Weeks had noth-
ing fit for release in that format, while Led Zeppelin and (to a lesser extent)
Pink Floyd abandoned the singles market entirely. No longer modeled
according to the template of the three-minute single, album tracks began
to get longer and more diffuse, extending to six or eleven or even seventeen
minutes. Their sequencing was determined by their own internal shape
and logic, but would habitually start with a decisive and dynamic declara-
tion of intent or summary of themes, and conclude perhaps apocalyptically
with a vision of death or infinite closure or perhaps on a finely judged
note of irresolution. Divided into two sides, the songs’ running order would
also permit a sense of weighted interruption at the end of side one and a
muted sense of relaunch at the beginning of side two. Twenty minutes or so
in length, each side would form its own pattern out of variations in style
and mood; side one of the US version of The Clash comprised four the-
matic oxymorons. All in all the rock art-album stood by itself: named with
charismatic inscrutability (this soon degenerated); making no distinction
between singles and nonsingles but instead between major and minor
tracks; strongly sequenced, shaped, canonically ordered; integral in content
and flow, with a continuous sound; and designed for endless replaying, so
rich, suggestive, and cross-pollinating that it might never be exhausted.
Within rock the album became primordial; in concert, artists sought to
reproduce “live” their studio sound, the textual benchmark, and so rock
epitomized Walter Benjamin’s insight that “the work of art reproduced
becomes the work of art designed for reproducibility.”41 The most extreme
example of the form may be the Stones’ Exile on Main Street, which contains
no one extraordinary song, but where the flows of meaning and emotion
across the ensemble generated by repeated replaying produce a sense of
wholeness, intensity, and beauty virtually second to none. This totality can-
not easily be described, but to connoisseurs it is unmistakable and unique.
This interwoven, slow-burning, and integral form, though distinctive to
rock, bears some family resemblance to the poetry collection, the recueil, such as
Lyrical Ballads, Les Fleurs du Mal, or Swinburne's Poems and Ballads, First Series.
Though too numerous to yield the album’s overall shape, such poems gain
from being read as parts of a whole. If rock is to survive as an art form it
must be through the album, since decontextualized songs weigh too little
on the cultural memory. Creatively, however, the art-album is dead.
Rock ran itself into the ground under its own hypercombustible
steam, but it was helped on its way from the mid-1980s by the spread of
the compact disc. This was the first formal impact of digital technology on
rock. Presenting an album as one long, undifferentiated raft of songs, the CD
made it impossible to shape its sequence. Permitting up to seventy-five
minutes of continuous music where the LP had been restricted to about
twenty-five per side, albums conceived as CDs became amorphous, unwieldy, and
interminable quantities of often mediocre material. The CD didn’t “kill
rock,” which was showing signs of reaching the end of its natural life sev-
eral years before the format’s commercialization. Moreover, its influence
took a while to percolate through to artists reared on the art-album. Oasis’s
Definitely Maybe (1994) is shaped, but their Be Here Now (1997) is an inter-
minable raft. Radiohead’s OK Computer (1997), the last great art-album
ever made, is as beautifully formed as anything in rock.
The reason why OK Computer can be awarded such an accolade with
such confidence is not that it’s musically unsurpassable but that its form
is now obsolete. Debilitated by the CD, the art-album has been killed off
by the iPod and the MP3 player, the computerization of access to music
and therefore of the conception of the music text. Just as Led Zeppelin
abandoned singles, more and more artists have spoken of dropping the
album and releasing only tracks to be downloaded from the Internet. The
shift away from the commercial, social, and instant single in favor of the
album enabled experimentation, risk-taking, music as cultural achievement;
both single and album have now disappeared into the past. The track, so private, individualized,
fragmentary, and momentary, is made possible by and in turn embodies
the death of popular culture, in both its terms.
The era of high rock, of rock’s dynamism, creativity, and originality, its
social relevance and cultural potency, seems to me to lie between spring
1965 and the early 1980s. The start of this period is easy to date. In late
1964, the Beatles were still monosyllabic children’s entertainers and the
Stones a rhythm ’n’ blues tribute act; the release of Dylan’s Bringing It All
Back Home in March 1965 was shattering in its impact, a gauntlet thrown
down in terms of lyrical and musical quality that led directly to such watershed
singles that year as the Byrds' "Mr Tambourine Man" (April), the Stones'
"Satisfaction" (June), and the Beatles' "Help!" (July). The end is
less easy to pinpoint, though a line was drawn in Britain by the suicide of
Ian Curtis in 1980. It’s habitual for people to think that music was most
exciting during their youth, but for me in 1985–88 it seemed, on the con-
trary, that rock had never been less vibrant. This was the climate that
prompted Simon Frith to state in 1988, prematurely in my view, “I am now
quite sure that the rock era is over.”42 The mood was encapsulated by Live
Aid: retrospective, nostalgic, creatively lifeless, the spirit of the greatest-hits
package rather than anything new. British music was dominated by
the Smiths, who drew lyrically and visually on late 1950s/early 1960s’
northern English “kitchen sink” drama in defiant rejection of contempo-
rary yuppie triumphalism, and musically on 1960s’ West Coast jangling
guitars in explicit repudiation of modern music’s synthesizers, samplers,
and beats. Their singer assumed a pose of adolescent torturedness, though
by now in his mid-twenties. Their American equivalent as a successful sig-
nifier of “integrity,” “authenticity,” and “real music” was Bruce Springsteen,
whose songs, as Frith put it in 1987, emanated a “whiff of nostalgia” and
whose stage persona was that of a “37-year-old teenager.”43 Such 1980s’
rock was multiply lost to its memories.
In the decade from 1989 rock actually revived considerably, and under-
went what in retrospect can be seen as its aftershock or afterlife, an interval
between its achievement and its exhaustion in which a string of interesting
artists appeared without, finally, that much great music being produced.
After a handful of ground-breaking songs the Stone Roses and Happy
Mondays imploded; Kurt Cobain, one of the most gifted figures in all rock,
signaled, like Curtis, the cul-de-sac of Nirvana’s aesthetic (after two
wonderful albums) in the most terrible of ways. Britpop, after Suede’s
opening starburst, grew ever more reliant on nostalgia and retrospection,
though it substituted for the solemnity and sentimentality of the Smiths
and Springsteen an ethos of postmodern irony, pastiche, allusion, and wit.
Blur’s mid-1990s’ songs evoked the “naughty” sex and monoracial dreari-
ness of the England of the Carry On films and Benny Hill; their tales of
cross-dressing bank managers (which owed much to Pink Floyd's "Arnold
Layne" [1967]) were, as one reviewer commented, like being wrapped in
copies of a 1950s’ English tabloid. The thirtysomethings of Pulp revisited
their first sexual fumblings and the vile wallpaper of the 1970s; their
much-admired “Common People” was an anachronistic reformulation of
British “slumming” narratives of the 1960s and reworked in particular the
Rolling Stones’ “Play with Fire” (1965). The plodding dirges of the inaptly
named the Verve exuded, like Oasis’s musical sloth, rock’s weariness; their
much-admired “Bitter Sweet Symphony” stole its best feature from the
Stones c. 1966. The Divine Comedy retreated ironically to 1950s–60s’
English cinematic figures, while Oasis, after a brief moment of promise,
sought to combine, on their end-of-an-era disaster Be Here Now, the anthe-
mic sing-along inclusiveness of "Hey Jude" with the bad-boy menace of
the Stones c. 1967, but, excruciatingly, delivered neither. Above all,
Britpop manifested the superannuation of rock by locating its meaning
and creativity a generation or more earlier. This was rock’s swan song.

Technological determinism here would be a mistake: digitization didn't
"kill" either rock or the art-album; it would be truer to say that the creative
weakness of the latter, already clearly visible, did nothing to discourage the
commercial success of the former. Dylan Jones has described using iTunes
to create playlists resembling imaginary albums, such as his “1970 Beatles’
LP” Everest consisting of his favorite songs from Let It Be and their first
solo projects.44 A digimodernist art-album might look something like this.
However, though such a listener plays the roles of textual sequencer and, in
a sense, record label, s/he doesn't create ex nihilo; and the remixing of
received cultural materials is of course highly postmodernist (always
alloys). Digimodernism and the art-album are so far contingently, not nec-
essarily, incompatible.
Demonstrating the exhaustion of contemporary rock is doubtless
going to be as finally impossible as proving the death of postmodernism.
All offered evidence can be contested. At the end of 2003 Rolling Stone
magazine announced their 500 Greatest Albums “of all time”: the entire top
ten came from 1965–79 and the 1970s accounted for 36.6 percent of the
entries, the four years of the 2000s yielding only 2.6 percent. “But these
writers and critics are stuck in the past.” Yet in recent years the historically
informed connoisseurship of rock has notably become the preserve of the
middle-aged: see, for instance, the ages of the aficionado DJs on BBC Radio
2 versus those of their ahistorical brethren on Radio 1. “But what about
great songs—rock was never just albums.” Yet the greatest rock songs
depended crucially (but not completely) on the art-album: had rock
remained commercially limited to the single, "Desolation Row," "Land," or
“Venus in Furs” would never have been recorded. Today the single has
degenerated in any case into the worthless by-product of interactive digi-
modernist TV: shows like The X Factor, American Idol, and so on, which
separate the unsaleably from the saleably talentless, manufacture a line of
chart-toppers of zero musical and cultural interest. It’s more instructive
though to look at a few examples of what exhaustion means in practice.

The Strokes, Is This It (2001); Coldplay, Parachutes (2000);
Snow Patrol, Final Straw (2003)
In 1997 Elton John played at the funeral of a British royal, in 2002 Ozzy
Osbourne sang for Queen Elizabeth II’s jubilee, and in 2008 the leader of
the British Conservative Party gave CDs including the Smiths to Barack
Obama. John Harris felt in 2003 that “the mainstream is all there is” and
that music had become “homogenized and conformist.”45 “Coldplay,” whose
name juxtaposes the musical and the cadaverous, have been castigated by
aficionados, yet while Parachutes is feebly sung and vacuously solemn, its
reformulation of rock as contemporary easy listening isn’t unusual: they’re
convenient scapegoats for a wider trend. All the rebellion has gone out of
rock; it no longer has a countercultural bone in its body. Large quantities
are, in consequence, irredeemably pleasant, like Is This It and Final Straw:
agreeable, charming, as comfortable as an old pullover, as congenial and
familiar as a faded T-shirt. They make some vague gestures toward “atti-
tude” (expressions of fatigue, disillusion, apathy) but ritualistically; these
are songs that disturb or threaten or resist nothing. It’s true that this tooth-
lessness derives from and reflects the contemporary depoliticization and
conformism of rock’s first constituency, the 13–30 age group. If it’s a cliché
to describe the generation born after about 1980 this way it’s only because
it’s so obviously true (nor is it absolutely deplorable; politicized generations
tend to have suffered politically). Sometimes they seem to mobilize in
response to specific issues, but more often from consumerist than political
considerations: hearing that an actor on the other side of the earth said
something racist will outrage them; learning that a neighboring country
passed a racist law will leave them blank. If a sixty-year-old is today more
likely to enthuse about a revolutionary than a twenty-year-old, this is
because the former retains a sociopolitical sense of his or her reality, which
the latter has largely junked for an ego-consumerist worldview. Inevitably
rock, the sole art made principally by the young, has lost all trace of its
former rebelliousness. At the same time rock songs no longer break new
ground musically, creative originality has given way to conservatism; there’s
no artistic rebellion either. And rock’s social disruptiveness is gone too. The
Rolling Stones in 1967, the Sex Pistols in 1977, or Boy George in 1983
sparked a storm of fear and excitement that is inconceivable today, I think
because the terms of that tempest—drug-taking, sexual libertarianism,
media manipulation, and gender ambiguity—no longer stir. The personal
travails of Britney Spears, Pete Doherty, and Amy Winehouse evoke only a
mixture of prurient curiosity and parental concern; the forms of revolt
once embodied by rock are today experienced as exploitative entertain-
ment or human interest.

The White Stripes, Elephant (2003); Amy Winehouse, Back to Black (2006)

Post-2000 rock is exemplified by the art-album tour, for instance, Brian
Wilson recreating Pet Sounds track by track live. This backward-looking
simulation of past rock achievement is also found in the tribute act, bands
who reproduce the Beatles or Pink Floyd, say, almost perfectly in a tacit
acknowledgment that rock is now a repertoire, a canon of artists and
texts that will no longer be expanded but rather only revisited (rock as
the new classical music). The White Stripes and Amy Winehouse display
a distinctive combination: the interplay of an exceptionally high level of
musical and vocal talent, a technical mastery, with the absolute unoriginal-
ity of their material. Not an inch of their musical territory is uniquely
theirs. Unlike tribute acts, the sound they reproduce is a composite of many
earlier models, but as writers, textually, they add nothing to the creative
possibilities of song. And yet this is about as good as contemporary rock
gets. “Back to Black” and “Ball and Biscuit” are terrific songs, but not in the
shattering, perception-altering way that “Sympathy for the Devil” and “Hey
Jude” were, recordings that (to lend perspective) were first heard publicly
in the same room on the same night in 1968. Expressively, the White Stripes
and Winehouse leave music exactly as it was when they first appeared;
they’re entertainers rather than artists, people who breathe life into their
art rather than reconfigure or extend it. But this isn’t their fault; it’s down to
their belated dates of birth. Franz Ferdinand’s eponymous first album
(2004) is the same: clever and interesting, ironic and knowledgeable, in
another age they would have made something like Entertainment! or Fear
of Music and been listed by Jameson; now, they just hark back amusingly to
that time.

The Libertines, The Libertines (2004); The Streets, A Grand Don’t Come for Free (2004);
Arctic Monkeys, Whatever People Say I Am, That’s What I’m Not (2006)
It isn’t that rock has gotten “bad”: if Lou Reed or the Clash had been born
in 1980 they wouldn’t have made anything better than the Libertines’ “Can’t
Stand Me Now” or Arctic Monkeys’ “I Bet You Look Good on the Dance-
floor.” It’s necessarily different, not contingently less good. As an example
of this, when alive and dynamic rock was healthy and strong enough to
stretch, absorb, merge with, and reshape other musical genres and aesthet-
ics: jazz, musique concrète, reggae, Spanish guitar, the Western art music
(classical) tradition, country. Arctic Monkeys, the Libertines, and others
play rock, instead, as if resuscitating an ancient form, as you’d sing madri-
gals today, locked inside its inflexible and dead limits, codes, and conven-
tions. Such music cannot evolve or innovate, only repeat itself; it’s narrow
(and sometimes enjoyable). It can be atmospheric, sexy, fun, groovy (so
can Kylie)—but all drama, rebellion, and meaning are gone. A pose of rock
cool is assumed, but the sound is constricted, inchoate, shallow, drained of
content; the songs echo vast quantities of earlier great songs but without
depth or significance, the way French groups used to ape the gestures and
tone of the best Anglo-American music. Such artists then are mostly
valued ideologically: for their personal-aesthetic-historical self-positioning,
their adherence to rock’s traditional versions, their reproduction of the
type of personality and the type of music rock comprised in its heyday
(especially the punk template). The reductio ad absurdum of rating artists
and songs according to their fidelity to personal and sociocultural criteria
laid down thirty years earlier—the ad hominem fallacy of criticism, instead
of paying attention to what’s actually there—is the overestimation of the
excruciating and risible Streets.

Radiohead, In Rainbows (2007); Gorillaz, Demon Days (2005); Tori Amos; Manu Chao

Such music is instantly familiar and comfortable: it’s quotation and pas-
tiche without irony or double coding, without postmodernism, the hewing
from a coalface whose rich seams have long since been extracted, leaving
only faint traces among common earth. Some artists, recognizing this, have
sought to move on, for if rock is exhausted then song doesn’t have to be and
one of my aims here is precisely to separate the two by historicizing and
theorizing the former a little. It could be time for artists outside rock's
personal templates and musical heritage, like Manu Chao and Tori Amos,
though both are perhaps past their best. The work of Damon Albarn since
2003, especially with the fictitious (not “virtual”) group Gorillaz, rejects
the tradition of the rock artist and his sociohistorical context. Although
Gorillaz’ late electronica is not musically original, its meaning, such as it is,
is freed from the straitjacket of certain inherited aesthetic assumptions.
Radiohead have probed most rigorously the territory beyond rock, begin-
ning with OK Computer’s titular embrace of digitization. Its hushed,
sinuous, intimate wash of sound and sense of insubstantiality, unease, and
alienation perfectly evoke the contemporary computerized workplace; it
largely leaves behind the harsh guitar drive with which rock traditionally
suggested the mechanical world of the factory and the street. Kid A (2000)
and Amnesiac (2001) took refuge in 1950s–70s’ avant-gardism: academic,
inward, and muted, they’re dominated by a mood of self-eclipse, of non-
communication and nonbeing. Opaque, churning, indecipherable, and
rebarbative, Hail to the Thief (2003) found the band even further advanced
into a crisis of expression which was that of the musical form that spawned
them. Like its predecessors, it's defiantly minority in its rewards, deliberately
marginal culturally; rather than the glowing, warm satisfactions of OK
Computer, the album is aloof, hard, and indeterminate. Four years of silence
tellingly followed, ended by the marvelous In Rainbows, initially available
solely as a digital download in a controversial experiment in the ongoing
restructuration of music’s economics. The songs again placed themselves
in a cultural ghetto, so personal and private it seemed extraordinary they
had ever found an audience. Radiohead’s response to the disappearance of
the rock context has been unwavering though varyingly expressed: an ever-
greater artistic withdrawal from the world around into the delicacy, beauty,
and weirdness of their own creations.46
Some may wonder why I have given such space to a form I consider
played out. The reason is partly, as I’ve said, that the rock ethos remains
culturally hegemonic in other fields: on the Internet, on TV, and in cinema,
the notion that what’s valuable and interesting, creative and contemporary
is what appeals to “young people,” what’s “edgy,” explosive, antieducational,
and so on—all this derives from the authority of a cultural model. If this
model were itself creative and interesting, this would be unexceptional;
instead, the aesthetic future is to be rewritten. The same applies to the recent
past. Now that postmodernism is (as good as) over, one of the tasks of digi-
modernism will be to revisit and reassess the artistic period it appropriated
free from the bias of its assumptions. For postmodernism, as I’ve indicated,
rock was aesthetically unacceptable, even inimical. Yet this led to a dis-
torted view of actual cultural achievement. Digimodernism is the chance
to reevaluate and understand anew the art of the past fifty years; it’s the
start of a period of cultural historiography, one that begins with a requiem
for the dead.

Literature

It is almost possible to argue that digimodernist literature does not exist.
Where are the digimodernist novels, poems, and plays? Who are the
digimodernist writers? One way of answering this would be to say that lit-
erature does not have the relationship to digimodernism which it had to
postmodernism or modernism, that of the immediate synecdochal exem-
plum on the level of the textual product. To understand these earlier move-
ments you could read Mrs Dalloway or Beloved and, as privileged fragments
of their contextual whole, they would refract a total cultural-historical
moment down to you; as integral units, they would convey an artistic
period that they dominated across to you in its evoked totality. If literature
so far does not do this for digimodernism it is in part because the latter
gives no privileged status to a finished textual mode; indeed, it shatters and
restructures textual modality. In consequence, it would be truer to argue
that digimodernist literature is yet to come. A technological determinism
operates here: the high cost of film- or television-making, and the con-
comitant need for economies and profits, generates a constant impulse to
embrace innovation in matters of production. Digital technology therefore
swiftly engulfed working practices in these media in a way impossible for
those turning out novels and poems. True, the latter had their typewriters
replaced by the word processor in the mid-1980s, a historic shift initially
experienced as almost overwhelming, and resisted by some (like Paul
Auster).47 Some claimed that this had prompted tangible literary changes:
the unprecedented lightness and ease of composition and the ability to cut
and paste were sometimes held responsible for the inordinate length and
sprawl of novels like Eco’s Foucault’s Pendulum (1988) and Amis’s London
Fields (1989). Since then plenty of condensed and focused novels have
appeared, of course, and anyway it’s needlessly reactionary to suppose that
technology’s main impact on art is to disfigure it.
As a cultural moment, digimodernism resembles the Renaissance more
than anything from the twentieth century. If in 1550 you had asked an
imaginary figure, blessed with a precociously full understanding of his or
her cultural times, what s/he thought of Renaissance literature, a strained
silence might initially have ensued. The aesthetic shifts of the Renaissance
were felt first in the visual arts of architecture, painting, and sculpture, and
for a long time scarcely at all in literature (Shakespeare and Cervantes
were not yet born). S/he would almost certainly have felt, instead, that
the effect of the Renaissance on the written word inhered mostly in the
invention of the printing press, in Gutenberg’s Bible, rather than any new
literary texts s/he could as yet proffer. In the same way, digimodernism in
literature has first and foremost been apparent in revolutions in publishing:
in the physical production of and access to literature. On one side there is
Amazon and the processes of online book selling, which have vastly
increased the number of texts kept in the commercial domain: whereas
stores are constrained in the books they can stock by their floor space,
Amazon’s huge warehouses are limited really only by the reach of customer
demand.48 Similarly, digital technology has made it possible for books to be
printed on demand, again immensely increasing the numbers potentially
in circulation; books live longer in the commercial domain than before (or have
longer deaths). Beside these developments comes the digitization of the
book itself, making existing works available on the Web for free through,
for instance, Google Books, though this is a problematic and contentious
area. And then there’s the putative e-book (you can read about Auster’s
typewriter on Kindle). It's enough, though, to indicate that computerization
has not left literature alone.
More broadly, digimodernism inflects contemporary literature through
the increased socialization of reading. TV shows like Oprah Winfrey’s and
Richard & Judy regularly recommend novels to huge audiences, who buy
and devour them concurrently in an outbreak of mass identical literary
reception not seen since Dickens. Book clubs construct reading as a social
activity; writers’ tours (signings, readings, festivals) bring texts to vast
numbers of people simultaneously. To oversimplify, traditional reading was
solitary, driven by the “canon,” and seen as an ineffable contact with a tran-
scendent author; postmodernist reading was constructed as politicized,
skeptical, and a near-impossible engagement with a slippery text, the author
nowhere. Reading structured by such broadly digimodernist practices is
distinct again: social and commercialized, it favors the “fan” and makes a
cult of the author while assuming that a text’s meaning emerges from its
social use. An example of what happens to literature under such a con-
sumption appears in Robin Swicord’s film adaptation of Karen Joy Fowler’s
2004 novel The Jane Austen Book Club (2007), where a group of well-off
American women meet regularly to discuss the six novels and the vicissi-
tudes of their own relationships. Their personalities and predicaments
intertwine with those they’re reading about in a manner which, fifteen
years earlier, would have been postmodernist. Instead, the women’s attitude
to Austen is both humanist and pragmatic: having ironically posited her
work as “escapism” from the stresses of life, in practice they settle into read-
ing it as an escape-route, a key leading them out of the prison house of their
present anguish into the sun-kissed world of emotional fulfillment. They
read her, then, as a treasure-trove of wisdom about the eternal verities of
the human heart; she becomes a guide to contemporary satisfaction, a form
of amusing holy writ subsumed into the hard detail of her readers’ lives.
The digimodernism of such tendencies lies in the robust sense that
textual meaning is bound up with its use (Wittgenstein distantly presiding
here, displacing Saussure), and that texts exist largely as the focus of collec-
tive practices. On this view Mansfield Park can be treated as though it were a blog
or a Wikipedia entry even though it can't be "written." Other aspects of the
literary landscape have also been forcefully transformed, notably the place
of the critic, though this applies equally to cinema and music. On social
networking sites and blogs, on Amazon and the IMDb (Internet
Movie Database), cultural criticism is turned out en masse by untrained
amateurs. I’ve talked about this already in Chapter 4, but it’s interesting
here that the discourse they employ is modeled on professional published
criticism: an “objective” summation of content, background information
about the author’s previous works, evaluations of the text’s success or
failure, and recommendation or not for its target audience. Subtle, stupid,
well informed, or ignorant, such online reviews pay tribute to published
professionals’ jargon and methods even as they make them redundant.
A disinterested comparison of what the amateurs and professionals offer
suggests that these days remarkably little of the latter is as shrewd, fair,
knowledgeable, and stimulating as much (but definitely not all) of the
former. The quality of newspaper and magazine arts reviewing has plum-
meted in places since the 1970s–80s, while academic criticism, which for a
while abandoned any sense of positive literary value except on an ad homi-
nem basis (down with DWEMs, up with their opposites), diverged entirely
from the realm of reading for pleasure. So, on one side there’s the socializa-
tion of reading (we all read the same book simultaneously); on the other,
the socialization of criticism (every woman and man an online critic).
What of the author? But this is too much like modernism: pockets of
“radical” and restless young men collaborating on coffee-stained manifes-
tos in an attempt, essentially, to model literary production on the behavior
of revolutionary groupuscules. It's certainly the case, instead, that the shifts
outlined in Chapter 5 have been instantiated in the world of the novel: His
Dark Materials and the Harry Potter sequence reflect the infantilism, mytho-
logy, earnestness, and endlessness that characterize early digimodernism.
Mark Haddon’s The Curious Incident of the Dog in the Night-Time and Dan
Brown’s The Da Vinci Code (both 2003) will be discussed in Chapter 7. But
even the best of these are only the signs of changes that are underway, and
I wouldn’t want to call any of them “digimodernist literature” because I
can’t see yet what the category might mean, although such a label exists
in film and TV. There is, then, a disparate bag of transformations here—
Amazon, Google Books, Kindle, Oprah, tours, book clubs, critic-blogs,
mythology, endlessness—which all break with postmodernism and relate
to its successor. But not only does digimodernism await its Shakespeare
or Woolf, it also awaits its Barth, Barthelme, and Queneau, both its giants
and its recognizable type.
Whatever digimodernist literature may turn out to be, my sense is that
it won’t be hypertext or electronic interactive literature. This can be defined
as a form of fiction accessible only via a computer and consisting of
discrete quantities of text that the reader moves around by clicking on
links; the reader chooses his or her pathway among these textual units,
which have been previously created by the writer. A division of labor
transpires: the writer invents the material, the reader sequences it the
way s/he wants. The coming of hypertext as the future of literature was
announced as long ago as 1992 when Robert Coover published an inflam-
matory article called “The End of Books,”49 and was vigorously espoused by
people such as George Landow, the man responsible for the portmanteau
word “wreader” which would have made the antilexicon had it ever caught
on. But in reality the future dominated by hypertext is already behind us.
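It may help first to make the mechanism concrete. What follows is a minimal sketch in Python; every name and all the sample material are invented for illustration, drawn from no actual hypertext system. It renders the division of labor described above: the writer pre-creates every textual unit and every link, and the reader's sole contribution is to choose a path through them.

    # A hypothetical sketch of hypertext fiction: pre-written textual units
    # joined by authored links, traversed in an order the reader chooses.
    class Node:
        def __init__(self, text, links):
            self.text = text    # a discrete quantity of writer-created text
            self.links = links  # keys of the nodes this unit points to

    def read(nodes, start):
        """Traverse the writer's fixed material along a reader-chosen path."""
        current = nodes[start]
        while current.links:
            print(current.text)
            for i, key in enumerate(current.links):
                print(f"  [{i}] go to: {key}")
            choice = int(input("Choose a link: "))
            current = nodes[current.links[choice]]
        print(current.text)  # a node without links ends this particular reading

    # Invented sample material: three units, two possible pathways.
    fiction = {
        "hall": Node("A door stands open at either end of the hall.", ["garden", "cellar"]),
        "garden": Node("Outside, the light is failing.", []),
        "cellar": Node("The stairs go down into the dark.", []),
    }
    # read(fiction, "hall")  # uncomment to "wread" interactively

Everything the reader can ever see exists before the first click; only the order of encounter is theirs.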
There are perhaps three reasons, largely unavoidable, why hypertext
is now the literary master who will never rule. The first is that, outside of
niches such as pockets of academia or the electronic publisher Eastgate, no
one is interested, to be brutally honest. The hypertext world is even smaller
and more incestuous than the much-maligned contemporary poetry
world (who is hypertext’s Heaney, its Walcott?); most voracious readers of
fiction would struggle to name one hypertext title or author; there’s no
nonprofessional demand for it. There are professors who write it, like
Stuart Moulthrop, Michael Joyce, Shelley Jackson, and Mark Amerika, and
professors who write about them, and the rest of the world lets them do so
in peace. Nor is hypertext likely now to gain in popularity. The past fifteen
years have seen a wild enthusiasm for the Internet and electronic text
swamp the developed world; it’s been an age of exponential growth in the
number of Web sites and digital applications. And yet the level of general
interest in hypertext is, if anything, even lower than it was in the
mid-1990s; how will it survive when cultural fashions change, as they inev-
itably must, and people lose their fascination with computerized text?
The second reason is that, functionally, hypertext is already old hat.
Web 2.0 enables everybody to write and publish their own material before
a worldwide audience; to be allowed to sequence someone else’s stuff no
longer looks quite as astounding as it once did. Compared to Web 2.0, just
clicking your way around what someone else has provided is a minor thrill;
it’s like asking somebody accustomed to writing and directing movies to
take a job as an editor. A third possible reason, though more subjective, is
that, in my experience, hypertext fictions are somewhat joyless affairs. The
citation is overused, but they really do resemble Samuel Johnson's dog
walking on its hind legs, about which we marvel that the act is done at all and overlook how
badly it is done . . . except that, in the age of Web 2.0, we no longer do the
marveling bit. Engulfed by such a fiction, the reader is likely to respond
with disorientation, then frustration, and finally a sense of futility;
as The Unfortunates demonstrated, the right to sequence literary materials
is worthless unless doing so permits a distinctive aesthetic or human
experience. For its practitioners, hypertext offers the reader liberation,
empowerment, and so on, but this is both bogus and old-fashioned. It is
spurious to promise emancipation where there was never oppression
(however much you may hate a writer, you don’t feel stifled by her/his
monopoly on textual sequencing). Moreover, this rhetoric is heavily reliant
on post-structuralist assumptions about the writer and the reader, which
have lost their once unassailable position in a broadly “post-theory” intel-
lectual landscape. Hypertext advocates fetishize postmodernist textual
qualities (discontinuity, the aleatory, etc.) that have gone out of cultural
fashion. Consequently, the citizens of the hypertext community are, on the
whole, an ageing, nostalgic group already.
If you want to pursue this further you could google Geoff Ryman’s 253,
uploaded to the Web in 1996 and published in book form in 1998.50 You
click around the personal details of 253 people traveling across London by
tube one January morning: calling up any individual you get their name,
age, appearance, and a description of their thoughts, all condensed into
exactly 253 words. They share some characteristics and themes, there are
links among certain of them, and you can jump around both data and peo-
ple; there is no preset textual sequence beyond what you make up yourself.
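Purely by way of illustration (the actual site is a set of linked pages; every name and value below is invented), 253's design reduces to a keyed collection of fixed-length capsules joined by cross-links, with no canonical order of traversal:

    # Hypothetical sketch of 253's structure: each passenger is a capsule
    # of exactly 253 words plus links to related passengers; the reader
    # may enter and leave anywhere, since no sequence is prescribed.
    passengers = {
        "car1_seat12": {
            "name": "(a passenger)",        # name, age, appearance . . .
            "capsule": "word " * 253,       # stands in for the 253-word text
            "links": ["car1_seat13", "car4_seat2"],
        },
        # . . . 252 further entries of the same shape
    }

    def capsule_ok(entry):
        """Ryman's formal constraint: every description is exactly 253 words."""
        return len(entry["capsule"].split()) == 253

The formal constraint is mechanically checkable; what no such function can supply is the cumulative shape whose absence the next paragraph describes.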
But two problems with the content emerge. First, each descriptive capsule
must be striking and, to some degree, cryptic, to cope with its physical
isolation from the rest of the text: it has to induce in the reader a desire to
click on and can do so only by providing a strong textual experience—it’s
too easy for the reader to stop when s/he knows that there is no chance of
a sense emerging of textual completeness or its positive corollaries (coher-
ence, harmony, order, purpose). The need for something remarkable on
every screen produces in 253 a ludicrous melodramatic overwriting: every
single passenger seems to be embroiled in race rioting, underworld crimi-
nality, sudden bereavement, drug-trafficking, risky adultery, and so on—it’s
unreal and overblown. After all, if one page didn’t deliver a kick you
mightn’t go on to another: there’s no cumulative pleasure possible here, no
slow-burn or control of shifts in pace, focus, or mood. Second, Ryman
“ends” his narrative with a cataclysmic crash instantly slaughtering half his
passengers, an event with all the verisimilitude and proportion of a cartoon
character dropping a thousand-kilo weight on top of the train. But then
again, how do you “conclude” a text with no forward dynamic of its own?
Ryman himself seems to have regarded 253 as an interesting experiment
not to be repeated. But all these failings don’t prevent it from figuring in a
recent “canon” of hypertext.51
Prescient and pioneering twenty years ago, hypertext was hexed by con-
tent and pushed, so it would seem, into the footnotes of literature by the
onward speed of change. As with ergodic literature, the theoretical ances-
tor of the digimodernist text, the context is much broader now: the questions
at stake are textual, cultural, social, and historical.

***
Such a catalog of struggles and problematics within established media
provokes a last, unanswerable question. Is digimodernism finally another
name for the death of the text? Most of the crisis-ridden forms discussed
here provide a closed, finished text: you buy, own, and engage a film or TV
program or song as a total artistic entity, as a text-object. This objectivity
endures over time, is authored, reproduced; it has become, in its material
already-createdness, the definition of a text. Videogames and radio shows
are markedly weaker in this regard; they are less culturally prestigious too;
but socially they are thriving. The onward, haphazard, evanescent digi-
modernist “text” may seem finally indistinguishable from the textless flux
of life. Is digimodernism the condition of after-the-text?
The sections of this chapter, sequenced by the intensity of the digimod-
ernism manifested in each medium, divide into weak and strong forms
of delimited text. The latter are afflicted by declining audiences (film, TV),
fossilized canons (film, music), academic uncertainty (music, literature),
and the disappearance or undermining of their commodity form (film
downloads and pirating, the crisis of TV advertising, the death of the CD).
Kevin Kelly has dreamed of all books being digitized into “a single liquid
fabric of interconnected words and ideas” to be unraveled, re-formed,
and recomposed freely by anyone for any reason.52 There are signs across
the media landscape of such a development. Yet, unquestionably, this
would resemble a mass of unauthored and unlimited textualized matter.
A text, though, must have boundaries and a history, in the same way that
the distinction between “life” and “a life” ascribes to the latter physical
circumscription and biography. With the reception and commodification
of the individual text already imploding, will there be room under digitiza-
tion for a text?
There are two possible answers to this. The first sounds a futuristic note
of doomy jeremiad: early digimodernism will perhaps be remembered as
the last time one could speak of a new, emergent form of textuality, before
the singular object-text was drowned forever by the rising tide of undiffer-
entiated text; the 2000s naively saluted a textual revolution before it revealed
itself, in its totalitarianism, as the genocide of the text.
The second entails turning away from texts and considering instead
history, or contemporaneity placed in the long term. The survival
of the object-text depends on the continued valorization of competence,
skillfulness, and know-how, because these are, ipso facto, excluding forces:
they delimit, isolate, close. These are social and moral issues, and so we
come to the final chapter.
7
Toward a Digimodernist Society?

See-saw Margery Daw
Johnny shall have a new master
Nursery rhyme (traditional)

It has been said that:

modernity is more than a period. It designates the social, political,
cultural, institutional, and psychological conditions that arise from
certain historical processes.
Modernity in this sense is related to, but distinct from, the various
aesthetic works and styles that fall under the label “modernism.” As
an artist, one has a choice whether or not to embrace “modernism.”
Modernity is not like that. You may come to modernism (or not), but
modernity comes to you.1

What is true here of modernism is clearly true also of digimodernism; its
first decade has not seen it gain universal sociocultural acceptance, for a
range of reasons, despite its rise to a certain preeminence. How does this
description of modernity relate to digimodernity, and how does digimo-
dernity relate to digimodernism? These questions may seem, at this stage,
peculiar, since I have resisted from the outset the notion of a digimodernist
epoch distinct from that or all those which preceded it. There is no line
in the historical sand absolutely dividing “our time” from the one before.
The notion of “postmodernity” I find unhelpful too, although I recognize
the force and significance of the individual movements, tendencies, and
changes that have often been grouped together under that (needless and
cumbersome) umbrella term. Student introductions to artistic movements
like Romanticism infallibly include some historical background about
industrialization, the Enlightenment, and the French Revolution, but in its
infancy digimodernism doesn’t yet require such contextualization. Lyotard
argued that Auschwitz was “the crime opening postmodernity,” but to the
degree that digimodernity exists it was opened by shifts in technology, a
way of saying that it is primarily not an artistic or historical revolution
but a textual one.2 Our era is heavily marked, of course, by the atrocities
of September 11, 2001, but digimodernism was not born that day, even if
postmodernism was buried in the rubble of the Twin Towers.
In this chapter I am going to sketch some of the traits of an entity whose
status is not fully clear to me. Though periodized it has no neat point of
departure; it does not question the controlling assumption that the period
we inhabit is best defined as “modernity,” though in a particular stage of its
history; it is not identical with a set of factors prompting digimodernism as
a cultural phenomenon, but nor is it distinguishable from them; its roots
lie deep in our society and time, and while in places it dates back to the
1980s, much is traceable to the origins of modernity itself, and all can be
understood as the tangled and difficult legacy of a played-out postmodern-
ism; it’s a recognizable sociohistorical climate but one that “comes to you,”
into which we find ourselves hurled but over which, by recognition and
humility, we can (and must) take some control.
Part of the impetus behind this chapter comes from a sense of the
strangeness of our era’s very relationship with historical time. Zaki Laïdi
has described our new temporal condition as the sacralizing of the present,
stripped of social utopianism and experienced as eternal.3 This echoes the
digimodernist textual characteristic of engulfment by the present, linked
to its evanescence and onwardness. Modernity and modernism’s conscious
rejection of tradition, and postmodernism’s backward-looking and double-
coded recycling of what is known to be definitively lost, become digimod-
ernism’s blank unawareness of previous time. Its absence is no longer
rational or intentional; it’s evinced by the post-1960s’ abandonment by the
Right of “conservatism,” which fetishized the continuation of the past into
the present, for a messianic creed of perpetual upheaval, the world’s domi-
nant political ideology. The past is not felt to feed into or inform or frame
us; it’s regarded, if at all, with contempt (less clever or knowledgeable, cer-
tainly less moral than us) or self-pity (life was simpler then)—any notion
that it might in any sense be superior to the sacred present is dismissed as
mental sickness, as “nostalgia” (algos, pain; the film Pleasantville). To this
perceptual absence is joined the experiential loss of the future: educational
policy favors teaching children what is “relevant” now and “preparing them
for life” as it is today, unable to conceive of the differentness that the future
implacably brings.
However, what marks us out most distinctively in time is our aggression
toward the future; a society forever congratulating itself on its newfound
tolerance for all its current members treats those to come with implicit
loathing. Imperial Western prosperity was built on stealing from the spa-
tially other, the kidnapping of Africans, the annexation of foreigners’
resources; prevented from doing this, contemporary wealth is founded on
stealing from the temporally other, the future. Personal, household, and
national debt amassed simply to live day to day spends the future’s money
in advance and effectively bankrupts it; the wild propagandizing for loans,
mortgages, store and credit cards, and the marginalization of savings, relate
the present to the future as a burglar to his/her victim. Similarly, far more
of the physical earth is consumed every day than is restored to it, leading
inexorably to an empty and filthy future planet that will trace its brutal
inhospitality to our temporal rapacity. This consumerist assault, this mug-
ging of the future’s cash and resources, produces, like a native uprising
against cruel invaders, both banking catastrophes and natural disasters.
It is mirrored in our attitude to children, the future made flesh: the wide-
spread depiction of infant torture, abuse, and murder in TV and film, often
only to trigger a plot or convey vague gravitas; the popularity of “true
accounts” of past childhood suffering (A Child Called “It,” etc.); the high
level of casually incurred divorce and separation of couples with children
under eighteen, inflicting deep and self-evident trauma; and the sense
of an explosion in pedophilia caused by inadequate criminal sentencing
(a shameful residue of patriarchy). Digimodernist societies steal the future,
and torment its citizens. Three sections follow: they respectively address
the destiny of self, thought, and action in such a possible time.

The Invention of Autism

We live in the age of autism. In 1978 the rate of autism was estimated at
4 in every 10,000 people; by 2008 this figure had risen to 1 in 100, a 25-fold
increase.4 For Simon Baron-Cohen, research in the early 1990s was ham-
strung by “the now incorrect notion that autism is quite rare (today we rec-
ognize it to be very common).”5 Does this mean that contemporary society
is starting to be flooded by something that hardly existed before? This
seems unlikely, to say the least (though there are those who assert it); the
more probable explanation is that specialists now recognize a syndrome
and make informed diagnoses on the basis of previously unavailable
research. But at the same time, it is naïve to imagine that changes in clinical
diagnosis are wholly unlinked to changes in objective sociohistorical
conditions. The consulting room does not exist in a vacuum; scientific
research occurs in, without being engulfed by, history. Sociohistorical
factors don’t determine scientific results, of course, but occasionally, and
perhaps especially in the study of people or of the unobservable, they
imperceptibly strengthen the plausibility of an interpretation. It’s not then
absurd to hypothesize that we inhabit a society uniquely adapted to the
frequent ascription of autism and the identification of autistic traits. This
is not to reduce autism to a “social construct,” or to claim, offensively,
that nobody “really” suffers from it. I don’t doubt that many people, undi-
agnosed, endure a form of quotidian misery that could be alleviated by
suitable treatment, and that medical attention and research funding need
to be directed urgently toward both; my comments here should be seen
in this light. Yet it’s also reasonable to assume that our present under-
standing of the condition is imperfect; Asperger syndrome was recognized
by the World Health Organization only as recently as 1994; and part of this
incompleteness may well lie in the condition’s sociohistorical and cultural
identity.
In such a society a doctor like Andrew Wakefield, seeking to gain pub-
licity for his (now discredited) view that the MMR jab was harmful to
children, would describe it as the trigger for autism; a generation earlier, it
would have been schizophrenia. In such a society too a novel like Mark
Haddon’s The Curious Incident of the Dog in the Night-Time (2003), which
explores the interior mental landscape of Christopher Boone, a fifteen-
year-old boy with Asperger syndrome, could win the Whitbread Book
of the Year prize and go on to sell over ten million copies worldwide. Not
entirely I think due to its literary merits: its plot is thin and its depiction of
peripheral characters schematic, but it is at times very moving, and from
a clinical point of view utterly fascinating as it uncovers the thought-
processes, the psychic blocks and piercing astuteness of the autistic mind—
this was, quite simply, a novel whose psychopathological insights the world
had suddenly grown desperate for.
Cultural representations of autism have become so commonplace that,
as Ian Hacking wrote in May 2006, “everyone has got to know about
autism.”6 Despite this, two of the most telling of such portrayals date back
as far as the 1980s, one of which, Ridley Scott’s film Blade Runner (1982),
makes no mention of what was then an obscure syndrome. The story of a
policeman’s quest to eliminate highly sophisticated renegade replicants,
Blade Runner is predicated on the “Voigt-Kampff Empathy Test” which
determines that a given individual is an android from its inability to make
empathic responses to hypothetical situations. The novel by Philip K. Dick
on which the film is based notes that human schizophrenics also show
an empathic lack indistinguishable from that of androids.7 The movie’s
art design stretches this observation to its limit: constructing a future Los
Angeles of infinite loneliness and alienation, emptied of reciprocity or inti-
macy, a mixture of vast silent spaces between affectless people and unbear-
able overcrowding, it suggests a social condition of almost universal autism.
(It’s unsurprising to find signs that the cop may also be a replicant.) The
Voigt-Kampff test bears some similarity to the “Awkward Moments Test”
used since 2000 to measure social understanding in autism, and its under-
lying premise recalls the interpretation of autism as “mindblindness,” the
inability to see the world through another’s eyes.8 Near the end of The Curi-
ous Incident . . . Christopher, alluding in passing to Blade Runner, describes
one of his favorite dreams, in which everyone who can read other people’s
faces dies suddenly and the occupants of the autistic spectrum inherit a
postapocalyptic, evacuated planet; the landscape he evokes resembles that
portrayed by Scott, a social system identical with (pseudo)autistic traits
and individuals assimilable to autists.9
This posited universal autism is, to a degree, found in the present in
Barry Levinson’s film Rain Man (1988), whose depiction of a middle-aged
man (Raymond) with high-functioning classic autism won Dustin Hoffman
an Oscar. The film implies, and with some clinical validity, that he shares
many of his autistic traits with his brother Charlie, played by Tom Cruise.
Charlie is equally prone to tantrums, fixated and narrow, inflexible, closed-
off, incapable of empathy or reciprocity. Charlie is an entrepreneur, a ruth-
less and driven small businessman in sharp suit and shades, the epitome
of 1980s’ America’s cult of the yuppie; he is also a vile, egomaniac bully.
Indirectly, then, the film deploys its study of autism to attack the ethos
of the yuppie by assimilating it to mental illness; Raymond seems only to
suffer from a less socially acceptable version of the “disorder” afflicting
Charlie and his type. This assimilation has been reiterated by the academic
psychiatrist Michael Fitzgerald, who has alleged that Keith Joseph, a free-
market ideologist and minister in Margaret Thatcher’s 1980s’ governments,
showed signs of an undiagnosed Asperger syndrome from which he derived
his political philosophy: “Monetarism has some of the characteristics of
Asperger’s in its insensitivity and its harshness.”10 Reducing your political
enemies’ views to a form of mental sickness is an integral feature of totali-
tarianism, of course (and Blade Runner too flirts uneasily with elements of
Nazi doctrine). Conversely, identifying the main lines of the dominant
economic and social ideology of the day with autism will assuredly increase
dramatically the incidence of diagnosis of the condition, though at the
price of emptying the term of all meaning.
While the new clinical conceptualization emphasizes that “we all have
some autistic traits—just like we all have some height,”11 two social develop-
ments or trends over the past fifteen years or so do seem to have encouraged
the greater production of autistic or pseudoautistic traits in the population.
(By pseudoautistic, I mean characteristics deriving locally from particular
experiences and lacking the neurological basis of clinical autism; it will be
understood that I use “autism” in this section as a linguistically convenient
umbrella term for the autistic spectrum; much of what I say is most appli-
cable to Asperger syndrome.)
The ascription of autism is encouraged by the emergence of new tech-
nologies, especially computers, the Internet, and videogames, which enable
individuals to engage with “worlds” or reality-systems without socially
interacting; this systemic desocialization is subsequently extended to the
“real world” in the form of a diminished capacity to relate to or to “read”
other people, a preference for solitude and a loss of empathy; such technol-
ogies also do little to stimulate language acquisition. Derivative gadgets
like the iPod hold their users in similarly isolated private worlds (cell
phones too, though with less obvious causes). This is largely the basis for
the claim that autism is integral to digimodernism, and plays much the
same role within it as the neurosis did for modernism and schizophrenia
for postmodernism: a focus for contemporary clinical study, a modish
social buzz word, and a dimension, in varying forms, of much of the most
emblematic culture of the time.
This is reinforced by the growing and widespread tendency to portray
the sociopathic as normative in popular TV drama, cinema, and music
(cop shows, action movies, rap, grunge, etc.). Drawing on the existential or
rock ‘n’ roll outlaw hero of the 1960s, such texts valorize acting according
to personal impulses with no reference to other people, the collectivity,
social rules, or conventions (e.g., Oasis’s injunctions, “don’t let anybody
get in your way . . . don’t ever stand aside/don’t ever be denied”—cf. the
video for the Verve’s “Bitter Sweet Symphony”); they may fetishize an
absence of empathy and a solitary, cruel fantasy potency; or they may
romanticize a state of helplessly total and self-harming isolation from
society. Such texts both normalize and glamorize a condition of nonsocial-
ization and noncommunication, which can be seen, up to a point, as
pseudoautistic.
However, I find these two explanations for the increase in autistic or
pseudoautistic traits unsatisfactory. While suggesting that contemporary
society has somehow become more “autistic,” they refer only to factors
encouraging a spurt in pseudoautism, and so do not address the changed
incidence of clinical diagnosis. Moreover, gadgetry or teenage or fantasy
texts cannot account for a 25-fold diagnostic rise. Indeed, the very
fact of diagnosis points in the opposite direction, toward seeing autism as
the perfectly shaped excluded other of contemporary society: the model of
“sickness” generated by the ways in which orthodox social values define
health and the “normal.” Autism, I would argue, is the ready-made antithe-
sis of the peculiarities of today’s world; it is the mark of that which our
society despises, marginalizes, and makes impossible, and is produced as
the exact contrary of hegemonic social forces in a variety of contexts, which
I shall now briefly summarize.
This pressure to identify the autistic stems from:

1. the demographic shift toward global overpopulation, ever-growing
urbanization, the spread of constant formal and informal surveillance,
the disappearance of wilderness and the near-impossibility of solitude;
this not as a fact but as a perception or experience, as noise pollution
and light pollution; autism’s need for solitude, silence, freedom from
interference, physical integrity is a refusal of such a shift;
2. the economic tendency toward ever-greater flexibility, multitasking, ad
hoc arrangements, job insecurity, rapid staff turnover, the felt commer-
cial need constantly to update, restructure, retrain; autism’s insistence
on sameness, on repetition of past actions, on rigidity is a failure to meet
this; similarly autism’s deep and detailed memory is out of kilter with an
amnesiac consumerist economics favoring short-lived and throwaway
product lives;
3. the social shift toward an ever-greater valorization of social skills, of the
ability to chat and come across, to accrue popularity and self-present,
toward a fetishization of gregariousness and bonding with others
through various manipulations and self-betrayals (the gossip culture of
reality TV and celebrity fascination); against this autism favors the
authentic, the concrete, depth-knowledge versus superficiality, true
facts versus hazy impressions, solutions to problems over futile and
vapid chitchat;
4. the gender shift toward an increased suspicion of the characteristic
traits of masculinity: overthrowing previous assumptions by which they
were “natural” and to be valorized, a postfeminist culture sees them
rather as dubious, as absurd or inadequate or implicitly misogynistic; it
can be felt that such a culture sets more store by the “feminine” quality
of “empathy” than by the “masculine” value of “systemization”; since
“classic autism occurs in four males for every one female, and Asperger
syndrome occurs in nine males for every one female,” it can be
supposed that autism is “an extreme of the typical male profile,” the
expression of an “extreme male brain,” such that “maleness,” at root, is
identical with mental impairment (since no “extreme female brain”
is recognized, “femaleness” becomes, at root, gender normalcy); autism
then constructed as the ever-incipient sickness of masculinity;12
5. the cultural modishness of a “Latin” emotional tone (forever hugging,
kissing, frequent touching, emoting, loud voices); autism consequently
identifies with an unfashionable “English” (or Victorian) remoteness, a
plain, literalist, almost stern high-mindedness;
6. the emerging generational crisis by which young people are felt by adults
to be unreachable (Bettelheim’s view of autists), necessarily distant,
uncooperative, alienated; the (mis)identification of autism’s “causes”
in childish things (the MMR jab, other infant vaccines, videogames);
Stuart Murray discusses the (illogical) media emphasis on autism as a
child’s illness;13
7. the difficulty, since the implosion in the early 1990s of the Marxian
model, of conceptualizing alienation from a pervasively consumerist
society; attempted collective rebellions (grunge, rap, emo) are immedi-
ately swept up into the system; nothing remains outside, nothing
“uncorrupted,” yet there is still so much to be reviled and repudiated;
consequently, autism can be broadly identified as behavioral alienation
in a capitalist-consumerist hegemony (e.g., Raymond’s and Christopher’s
incomprehension of money); autism as a name for an incurable state of
exclusion;
8. the emergent moral consensus, deriving from a degraded postmodern-
ism or multiculturalism, by which everyone is right from their side and
all views must be respected; autism, seeing this as self-contradictory
and an inadequate notion of “right,” valorizes truth, objectivity, and
reason, postmodernism’s devils; autism as the impossibility of being
postmodern;
9. a pseudophilosophical or antiscientific drift toward the denigration
of knowledge and cleverness, increasingly objects of contempt and deri-
sion, a consumerist hatred of nonutilitarian information, and the new
social orthodoxy of instrumentalist learning; a society which sneers
at high levels of literacy or numeracy as though symptoms of personal
inadequacy, which fetishizes the superstitions of New Ageism, con-
spiracy theories, and pseudomedical quackery; autism’s contrasting
embrace of exhaustive knowledge, its love and recall of facts, its rich and
grammatically correct use of language, its insistence on rationality,
truth, and rigor; all of this constructs autism as an incapacity to accept
the subintellectual barbarism of its age.

Of all these tendencies that produce autism as the excluded or failed other
of the contemporary hegemonic, this last seems to me decisive. In the case
of Asperger syndrome all the others may be only its tributaries. A strand
of popular discourse about autism, found especially on the Internet,
ascribes the condition to just about every dead intellectual high achiever
you can think of: Newton, Kant, Darwin, Nietzsche, Wittgenstein, Einstein,
and so on. It’s inevitable that a society which hates autonomous intellectual
sophistication as ours does will wind up labeling its heroes and heroines
mentally ill; however, it should cause clinicians some unease that almost
all of the teenagers admitted in any year to the world’s top four or five
universities would score highly on the Autism Spectrum Quotient test.
Furthermore, the same strand of popular discourse also tends, in a move
that is integral to our collective sense of mental health, to assign the condi-
tion to any dead single-minded and self-denying person who achieved
anything of meaningful and lasting importance: Michelangelo, Mozart,
Beethoven, Jefferson, Van Gogh, Joyce. From this a clear (if disavowed)
definition of the normal appears: normalcy is frivolity, superficiality, igno-
rance, gregariousness, a short attention span, self-gratification, disengage-
ment, empty tolerance, social competence, and so on. Indeed, normalcy
is the condition of consumerism; everything inimical to consumerism is
reduced to mental sickness.
Autists cannot be seen as “rebels” against or “martyrs” of contemporary
society because they have not chosen their profoundly difficult relation-
ship to it. As for Haddon’s Christopher, whose life is unenviably and intrac-
tably hard in many ways, it is also, despite or because of this, richer, fuller,
and better than that of his mother, a half-illiterate and bad-tempered egoist
who abandons her child for another man. (Christopher responds to stress
by “groaning,” his mother by yelling at people.) It wouldn’t be so strange to
imagine a system of social values that treats Christopher’s problems with
more sympathy and acceptance than his mother’s. A society whose values
produce autism so perfectly as its excluded other does not deserve to sur-
vive; nor will it.
The Return of the Poisonous Grand Narrative

All talk of “grand narratives” or their synonyms “metanarratives” or “master
narratives” implicitly looks back to Jean-François Lyotard, whose The Post-
modern Condition, first published in French in 1979, has been described as
“one of the founding texts of postmodern theory” and “a standard refer-
ence point for the discussion of postmodernity.”14 In its introduction
Lyotard wrote the most famous single sentence in all postmodern theory,
usually translated as: “Simplifying to the extreme, I define postmodern as
incredulity toward metanarratives.”15 The last three words might be the
“cogito ergo sum” of postmodernism, the phrase reputed to coalesce an
entire philosophical project. It’s generally taken as meaning “the rejection
of all overarching or totalizing modes of thought that tell universalist sto-
ries . . . Christianity, Marxism and Liberalism, for example.”16

These narratives are contained in or implied by major philosophies,
such as Kantianism, Hegelianism, and Marxism, which argue that
history is progressive, that knowledge can liberate us, and that all
knowledge has a secret unity. The two main narratives Lyotard is
attacking are those of the progressive emancipation of humanity—
from Christian redemption to Marxist Utopia—and that of the tri-
umph of science. Lyotard considers that such doctrines have “lost
their credibility” since the Second World War.17

However, a close reading of The Postmodern Condition reveals something
strikingly different. In it Lyotard attacks and rejects nothing. He describes
the two principal notions employed since the Enlightenment to legitimate
or justify the practices of le savoir or knowledge (research, learning, teach-
ing). They are the humanist “life of the mind” (education ennobles you,
makes you a worthier person) and the political project of emancipation
(education frees you from oppression and obscurantism). Dynamic and
universalist, these are labeled “grand narratives.” He contends that socially
they no longer compel belief; and, in the Western democracies, is surely
right to do so. How then to legitimate the practices of le savoir: why do
them, and how should they be done? Contemporary society favors, he says,
la performativité, an educational system entirely geared to and governed by
the imperatives of economic efficiency and social and political effective-
ness: profits, power, and control. Condemning this as incompatible with
le savoir, Lyotard instead promotes “paralogy,” legitimation based on the
paradigm of cutting-edge science and foregrounding discontinuity and
incommensurable difference, that is, “little narratives.” There is no assault
on any doctrinal brand, no alleged demise of religion, Marxism, or prog-
ress; the book is devoted above all to the sociology of education, it exam-
ines the sociohistorical implications of the incontestable deaths of two
very specific grand narratives in that field.
The misreading of The Postmodern Condition has various origins. It
doesn’t help that Lyotard’s “l’incrédulité à l’égard des métarécits” is badly
translated into English by Bennington and Massumi.18 Their omission
of any article before “metanarratives” can only suggest “all” to English-
speaking minds (cf. “I love dogs”). Grammatically, the problem is that, as
Jean-Claude Sergeant has pointed out, written English insists for the
sake of clarity on the use of anaphoric references (this, that, these, those)
disdained as redundant by written French. Had Lyotard been American he
would have put “toward these metanarratives” to specify that he meant the
ones he had spent the previous paragraph describing and discussing. In
context there is no question which metanarratives he is referring to; it’s
only out of context and via an overliteral translation that he comes to
invoke, for benighted English speakers, all of them. (Postmodern theory in
the United States and Britain is founded largely on a misconception gener-
ated by linguistic ignorance; in France, tellingly, where the book’s thesis
seems necessarily less thrilling, it has been far less influential.) This error
has been compounded by a mishmashing together of those frequent occa-
sions elsewhere in his writings on which Lyotard inveighs against univer-
salist grand narratives (especially Marxism) as totalizing and oppressive,
and extols their “little” counterparts, for example, “We have paid dearly for
our nostalgia for the all and the one . . . The answer is: war on totality.”19 Yet
the personal preference of some Parisian is of little importance to a defini-
tion of postmodernity; it’s a taste, a polemic, not a critique, a thesis; it has
ethical and biographical interest, but no historical content.
True, Lyotard later claimed that by “metanarratives” he had meant
“the progressive emancipation of reason and freedom, the progressive or
catastrophic emancipation of labor . . . the enrichment of all humanity
through the progress of capitalist technoscience, and even . . . the salvation
of creatures through the conversion of souls to the Christian narrative
of martyred love.”20 Yet, no one believing the intentional fallacy by which
this remark might guarantee the book’s “meaning,” the sliding from The
Postmodern Condition’s narrow precision into more general corollaries just
draws its author on to thinner ice: he invites a skeptic like Christopher
Butler to reply that, in the wider world, religion has actually been thriving
since the 1970s. This extrapolation is self-undermining. While Lyotard’s
later airy assertions on the subject lack intellectual substance, for a histori-
cal critique of the status of metanarratives that broadly holds water21 we
return to a strict reading of The Postmodern Condition. This is in turn
entirely compatible with the thesis that one of the most striking social
characteristics of our time is the prevalence and power of grand narratives
in their most poisonous form.
Repeatedly in the digimodernist era an image of religion emerges,
especially in its public role, its cultural and social and political functions.
It comes across as pure toxicity. It would stand: for violence, murder,
destruction; for ignorance, superstition, irrationalism; for oppression,
hatred, cruelty; against education; against freedom; against democracy.
This is not antireligious (I’m not an atheist, for many reasons) but a social,
educational, and political comment. Religions also show themselves in
private places, often with compassion and humanity. The nature of the
universe is another matter. But this is the contemporary drift of their
public face. They enter the contemporary social arena with apparently
only one thought in mind: to stamp out intellectual freedom; to obliterate
equality; to overthrow democracy; to extirpate the arts; to slaughter the
innocent; to brand and scourge the differently minded; to annihilate rea-
son; to short-circuit knowledge; to destroy thought. This is their one idea:
the death of the idea.
I am referring to events such as the Jyllands-Posten Muhammad car-
toons controversy (2005– ), the Mecca girls’ school fire (2002), the murder
of Theo van Gogh (2004), the Sudanese teddy bear blasphemy case (2007),
or Benedict XVI’s lecture on Islam (2006). I mean the bombings of Bali
(2002), Madrid (2004), London (2005), Mumbai (2006), and more. I mean
the suppression of Gurpreet Kaur Bhatti’s play Behzti (2004) and Sherry
Jones’s novel The Jewel of Medina (2008), the disruption of Jerry Springer:
The Opera (2005– ) and the censorship of film versions of His Dark Materi-
als (2006– ). I mean the spread of intelligent design and faith schools and
the “veil” and “honor” killings and the desecration of graves. I mean the
cruel punishments of Iran, Texas, and Saudi Arabia. I mean the messianic
Christian fundamentalism lying behind Bush and Blair’s invasion of Iraq
and the wanton slaughter that followed it. And more, and more. I mean
religion as killing, silencing, ignorance, and fear. Religion doesn’t have to
be like this; the territory has been poisoned.
Emerging from the iniquitous tale told in Genesis 22 where a psychotic
god sends a loathsome daddy to murder his own little boy for their
pleasures, 9/11 embodies this poison in our machine. Terrorism as pure
spectacle, as media event and televisual hyperreality, 9/11 was postmod-
ernism as distilled evil yet was driven by and productive of emotions and
thoughts turning toward a barbarous early digimodernism. Similarly, the
fatwa against Salman Rushdie, essentially a bad Iranian review of a post-
modernist novel, founds digimodernist primitivism in the repudiation and
supersession of postmodernism. In this pseudomodern dispensation
advanced technology is employed to medieval ends: the uploading to the
Internet of films showing the innocent being beheaded; the recording and
dissemination by cell phone of images of torture at Abu Ghraib.
In furtherance of his Catholic faith Mel Gibson’s film The Passion of the
Christ (2004) empties Christianity of love, redemption, hope, eternity,
peace, theology, and doctrine. He reinvents it as a vista of agony. Bach’s
St. Matthew Passion, the fruit of Renaissance and Reformation, can inspire
conversion with its evocation of the divine; Gibson offers sado-masochism
and torture and the blind loathing of the Bronze Age illiterate. Edited,
acted, and directed as pure fantasy, the film also reenacts the psychogoddy-
psychodaddy horror of Genesis 22; it was greeted by the Vatican and by
American evangelists as the “truth.” Gibson’s Apocalypto (2006) pursues
this agenda: discarding medieval anti-Semitism for antipaganism, he
portrays (Spanish) Catholic missionaries as bringing “civilization” and
“salvation” to cruel and sadistic Maya. The final scene on the beach, echo-
ing Lord of the Flies’ climactic naval officer, implicitly denies or justifies the
extermination of the pre-Columbian peoples. In addition to these atavistic
doses of obscurantist Catholic propaganda, early digimodernism is marked
by a torrent of antireligious texts (which this isn’t). Yet they reveal exactly
the same image of religion. There’s Dan Brown’s virulently anti-Catholic
and historically mendacious propaganda farrago The Da Vinci Code (2003);
there’s Richard Dawkins’s The God Delusion (2006) and Christopher
Hitchens’s God is Not Great (2007), which draw their atheistic energy and
reams of support for it from simply quoting recent newspaper stories; there
are Michel Houellebecq’s novels, especially Plateforme (2001), and Will
Self ’s The Book of Dave (2006). The point is this: religion appears publicly
poisonous wherever you stand in relation to it: for, against, it doesn’t
matter.
Simplifying to the extreme, as it were, the 1960s in Britain and America
witnessed (among other things) a questioning or a repudiation of or an
attempt to surpass humanism and the Enlightenment values of reason,
truth, objectivity, and learning. If irrationalist rock music is today exhausted
and antihumanist postmodernism or post-structuralism played out, religious
fundamentalism, a third branch growing out of the common trunk, remains
and thrives. Anti-Enlightenment in essence, it begins with Foucault’s 1966
vision of “man . . . erased, like a face drawn in sand at the edge of the sea”
(Foucault initially welcomed the Iranian Revolution).22 The influence of
the Enlightenment on religion has been lifted: it no longer has any reason
not to head triumphantly back into the dark ages. Postmodernism was
the lost sibling of a toxic fundamentalism that slew its kin on reaching
maturity. Yet in a posthumanist age Dawkins and Hitchens’s case is hope-
less, like calling for a return of Henry VII during his son’s most tyrannical
days. How will this work through? Americans are keen to study the decline
of imperial Britain seeking clues to their own destiny, but the United States
is unlikely to be ruined by endless victories as Britain, with black comedy,
was; it will more likely resemble imperial Spain and be throttled by its own
religious mania. Almost all of America’s most dedicated enemies are its
citizens and residents: this must be clear.
In any event, this is not the only poisonous grand narrative dominating
the digimodernist horizon. God may save; most prefer to spend. The
most popular and destructive Western grand narrative is not religion but
consumerism. By this I don’t mean just consumption or even mass con-
sumption, but a conception of life, a system of values, a worldview, a frame-
work for the understanding, meaning, and purpose of existence stretching
far beyond mere buying. Again, and simplifying grossly, 1960s’ America
and Britain saw a change in the criteria by which people interpreted and
assessed their lives and others’: from definition by position in the work-
place and the family to definition as consumers, through choices and
acquisitions made in the marketplace. All of life could be constructed thus
as morally equivalent, individualized selections, and valorized as such. The
Left abhorred and long resisted the threatened loss of its workplace-derived
power (unions, etc.); the Right detested the destruction of the family; and
so conventional politics was scattered by 1960s’ social and ethical changes.
Yet the Right responded to the rise of the consumer; it therefore became
the post-1960s ideological default setting. In power it was swamped in a
spiraling contradiction: praising the family, advancing the consumer, and
so wrecking the family. It resembled a doctor treating lung cancer with
a dose of leukemia. Voters embraced it, becoming richer and unhappier.
As a grand narrative, an overarching and total structure of human
existence, consumerism escapes moral binaries, one of the secrets of its
success. Since scale of action is more important here than nature it can
neither be loved nor hated in itself. Instead it stretches across a spectrum
ranging from the pleasantly innocent (buying a dress for a party) to the
monstrously evil (the devastation of the earth) and without obvious
gradations. We all consume, we’re all consumers; many of us are consum-
erists too. Consumerism here is the transformation of the practices and
mind-set of consumption into the sole or overriding model for all human
life. In this way it becomes a fanaticism. Postmodernism’s commitment to
many valid viewpoints is obsolete, overpowered by an all-swamping single
creed. Consumerism is megalomaniacal: it wants everything to be run its
way. Once upon a time universities offered teaching in whatever scholars
had learned and wished to pass on. Today, humanism “incredible” and
paralogy stillborn, they provide courses in whatever teenagers feel like
learning, since their income depends on the number of students they attract.
If British teenagers decided en masse to study anime and not physics, every
physics department in the country would close (this has largely transpired).
The intellectual content of university programs no longer derives from
“knowledge” or “truth” (postmodernism is helpless) but from the whims
of adolescents. Chasing “sales” for their “products,” universities can no
longer flunk students as to do so would only shrink their “customer base.”
The structures of consumerism have been forced on to higher education
and in ways that are typical: in denial (nobody justifies it like this); in
contempt of logic or evidence; and in a crazed drive toward self-destructive
outcomes.23 Consumerism is a fanaticism that, suffusing everything, poi-
sons everything.
So we are engulfed. Everything is remodeled. Freedom is reconfigured
as “choice,” happiness as “retail therapy”; there are no universals except the
oneness of infinite valid individual preferences in exchange for money.
Families splinter into pockets of isolated consumption, each with his TV or
her computer, nothing shared, nothing communicated; relationships, like
commodity ownership, become transitory, consecutive, and pragmatic;
social groups form solely around shared consumption patterns, fragment-
ing societies and alienating generations. Consumerism reshapes sexual
practices as equal selections in the genital marketplace (as “lifestyle
choices”; denial), the only “unacceptable” act being failure to consume
(virginity, abstinence). It reinvents religion (as New Age bricolage) and
sport (as club fanaticism); it reevaluates countries not by their politics
or human rights records but their suitability for international tourism
(Italy good, Norway bad). Its heroes are “celebrities” who seem only to
consume, not to work or learn; employees self-define as “commodities”
needing every day to sell themselves to a company. There are consumerist
TV programs, in denial as “lifestyle” shows, about how to buy, furnish,
renovate, declutter, clean, extend, and sell your “property” (formerly your
“home”), or how to eat, drink, and cook, or what clothes to buy and places
to visit. Newspapers sell advertising posing as journalism (travel supple-
ments, etc.); magazines merge adverts with “consumer guides”; DVDs
make us buy a film’s advertising and marketing (passed off as “extras”).
And so on.
Consumerism destroys political action (it becomes senseless to “choose”
or “engage” without spending) and revamps social idealism in its own
image: messianic, it believes the planet will be saved by a kinder and smarter
consumption (recycling, cutting out wastage, etc.). This is a lie: political
decisions will be necessary. Consumerism, as its name suggests, eats up the
planet and excretes back into it, at a rate well in excess of its capacity for
absorption. Consequently the only thing consumerism can contribute to
the environmental cause is to be less. But every fanaticism proffers its own
processes of “salvation.”
Megalomaniacal and messianic, consumerism is also deranged. It thinks
it is valid to eat 8,000 calories on Christmas Day or drink from bucket-
sized coffee cups in bars. Our social problems are those of consumerism,
of too much or the wrong kind of consumption, as Salem’s were those of
religion: obesity, anorexia, malnutrition, food panics, drug addiction, debt,
gambling, binge drinking. Banks frenziedly offer credit to spend in shops
and discourage savings, short-circuiting their own revenues and crippling
the world economy. Google, our preeminent structurant of information,
ranks pages on their popularity among other pages, on their cybersales in
the cybermarket, rather than on their content (let alone their quality); the
Internet implodes toward its lowest common denominator of sex and
trashy entertainment and becomes the habitat, like consumerism itself, of
the unsocialized.
Consumerism robs you of your home (turned into a get-rich-quick
investment opportunity), your community (become a no man’s land of fear
separating the “property” of strangers), your city (controlled by near-empty
but owned cars), your country (run by and for consumerism), and your self
(curdled into a lazy, passive, endlessly unsatisfied and demanding mouth).
And yet no single act of consumption causes any of this. The road to hell
begins with the smallest step. Equally there’s no inkling of an economic
system not based on personal consumption that would be better. I want
here to isolate consumerism primarily as a mode of thought, a moral code,
an ethos, a buried framework of understanding; to challenge it in its grand-
narrative imperialism, its demented ambitions to direct all; to roll it back,
to push it back. We need a new mental master. The social will follow in due
course.
The Death of Competence

The phrase “the death of competence” isn’t of course meant literally: we
haven’t all suddenly become inept. It’s shorthand instead for a shift in social
values, for the eclipse of one value in particular, evinced by a variety of
overlapping social phenomena:

the evacuation of the value of competence in public fields it previously dominated (culture, education, politics, journalism);
the withering away of the expectation of competence in personal fields where it was previously and rightly taken for granted (hygiene, nourishment, finance);
the diminished social valorization and economic reward of technical competence, affecting trained career professionals from engineers to nurses.

Such developments can be seen in the following areas:

1. educationally as the demise of the goals of equipping a child with knowl-
edge s/he didn’t previously have and leading him/her to cleverness once
not possessed. The heavy-handed inculcation of what to think and not
of how to was dying already in the 1960s. Britain’s 2004 secondary
school Teacher of the Year declared that these days: “It’s more about
sharpening the vessel than filling it.”24 Yet children are already sharp
(out of the mouths of babes and sucklings; the Emperor’s new clothes);
they’re also empty. Antitransformative and postcompetence schooling
is characterized by: (a) individual therapy. In the 1970s–80s I spent
perhaps a thousand hours in art classes, emerging both ignorant of
art history and unable to draw and paint. No one tried to teach me.
I think the idea was that, confronted with paper and tools, my “creativ-
ity” would well up out of me and take form, a fantasy permitting a child’s
preexisting soul ineffable expression, altering nothing; (b) a fetishiza-
tion of “relevance” and “engaging” students, in practice limiting them to
what they already think they know; (c) constantly assessing and harass-
ing and disempowering the one who brings competence to the classroom
(the teacher), constantly dictating and overhauling and eviscerating
that by which competence comes (the syllabus), and constantly glorify-
ing, flattering, and seducing the one who by definition is stupid and ignorant
(the student); and (d) the grudging teaching of low-level and nontrans-
ferable competence, that is, “skills” (a child who understands quadratic
equations can manipulate any logical system, a child who can send an
e-mail can only send an e-mail). The fear of authoritarianism generates
a terror of filling children with autonomous cleverness or knowledge
that will stay knowledge. The effects of such schooling are: (a) low
intellectual self-esteem, since that comes from demonstrable compe-
tence no longer permitted, (b) alienation, since children know nothing
of the world around them, (c) low skills, since competence stripped
of autonomy allows only mindless repetition, and (d) a pervasive and
understandable contempt for the pointlessness of school.
2. politically as deprofessionalization: (a) the self-presentation of politicians
and/or parties as fit for office not because of shown or putative compe-
tence (being good at governing), but because of their “morality.” Driven
by recent failure or by a platform serving only the economic self-interest
of a tiny minority, this claim to morality is at best political (promises of
probity, justice), at worst private (love of spouse, country, God; hatred
of others’ uses of their genitalia). It’s reflected by apolitical voters indif-
ferent to the tedious detail of administration and obsessed instead with
strangers’ sexual or spiritual preferences. Realizing Iraq had no weap-
ons of mass destruction and that he had sent Britons to slaughter and
die for nothing, Blair proclaimed his sincerity (he had “acted in good
faith”), and was in turn attacked by critics labeling him “Bliar”; nobody pointed
to his evident ineptitude. George W. Bush, corrupt and deranged in for-
eign policy, catastrophic and venal in economic policy, was elected twice
on his claims to private morality, finally of importance only to himself,
his loved ones, and his God; (b) the quotidian management, control,
and direction, not of the country (a political concern) but of its media
(a PR focus), transforming politicians into their own image consultants
(Cameron); and (c) the framing of policy not from a postideological
and technocratic tinkering with the machine, described since the late
1950s and recognizably postmodern (Fukuyama’s “end of history,” etc.),
but from throwback economic and religious prejudice, contemptuous
of “evidence” or even logic, and the triumph of myth.
3. in every area of human life, every time and society, the plausibility of
judgment and evaluation has varied according to knowledge and/or
training. The opinion of coq au vin held by a child who has eaten only ham-
burgers means little; an architect and a doctor are not equally persuasive
on the structural solidity of your house. In cultural matters digimod-
ernist society has jettisoned this rule; its orthodoxy states: nobody’s
cultural judgment or evaluation is given greater or lesser weight by his/
her cultural knowledge and/or training. The assessment of Transformers
by a middle-aged Film Studies Professor is no more valid (probably less)
than a twelve-year-old boy’s. How has this amazing situation come
about? Today the dominant ideology of cultural critique is consumer-
ism (not humanism, Marxism, Christianity, etc.); consumerism dictates
that one woman’s money is as good as another’s, leveling the interpretive
ground; consumerism privileges too the response of the targeted market;
and so an infantilized “popular” culture at best elides, at worst scorns
all cultural knowledge and training (which presuppose maturity).
Postmodernism, which taught professional critics to indict themselves
as oppressors and elitists, executed one ideology (exclusive but rational-
discursive) only to let surge aggressively in another (exclusive, totalizing,
and irrational). Cultural antielitism is proto-elitism in denial. Consum-
erism permits all to choose, reject, and assess texts equally, evened out
by identical purchasing; it’s hostile to cultural knowledge and training
since discrimination may well induce a refusal to acquire the latest tex-
tual turd. In this climate media criticism of the arts evaluates the market
viability of a product, advises on whether to disburse for it or not, and
replaces the humanism overthrown by postmodernist intellectuals with
the arrogant anticompetence of consumerism.
4. Dolan Cummings claims that the Reithian ideal of providing TV which
will “improve the audience . . . has fallen foul of the anti-elitism that, for
better or worse, pervades contemporary culture. The idea that anyone
in authority (or indeed anyone at all) should decide what is ‘better’ has
become something of a taboo.”25 Yet TV has never been so overrun by
“experts,” by apparently unassailable “authorities” telling us what is
“better.” Their domain is not cultural but consumerist: what food to
cook and how to prepare it, what house to buy and how to decorate it,
what clothes to dress in and where to vacation. Proliferating newspaper
supplements and magazines as long as novels direct and instruct and
guide their readers, establishing an “elite” with infallible knowledge of
the “good”; yet this discourse inheres solely within consumerism: the
“good” is good-to-buy, to consume. They never tell you to keep your
money; there is to be no expertise outside consumption. Yet such com-
petence is low level, reduced to the textless (cuisine, where the artwork
is in your stomach, nonobjective) and the temporary (furnishing, dress,
where the “good” shifts and changes shape like a mirage in the desert).
In parallel come the flood of “experts” on subjects that our grandparents
figured out for themselves: how to eat a balanced and nutritious diet;
how to lead a reasonably successful and generally happy life;26 how to
raise children. Produced by the self-appointed and unqualified, packed
with the unproven or the half-true or the vacuous, books on such themes
sell in truckloads to a population dazedly inept, it would seem, in the
fundamental practices of life. To go about his/her humdrum existence
the digimodernist individual needs the constant support of hundreds
of TV shows and thousands of periodical or book pages; and yet this
profound existential feebleness and wish to be told how to “improve,” to
live “better,” meets only a mass of inadequate, shallow, and unscientific
pseudoauthority. This is the spiral of the death of competence.
5. the spurious cultural glamorization of illiteracy, innumeracy, and inar-
ticulacy. In Oxford, home of a world-class university and world-famous
dictionary, almost every public sign is scarred with spelling, punctua-
tion, and other linguistic errors (English itself gets blamed, like architects
accusing bricks when their buildings collapse). Schools and exams dis-
regard correctness; consecutive thought is defeated by the bullet point.
Debt is casually amassed at interest rates of 30 percent with no sense
of how this impoverishes. Public conversations are so voided by vague-
ness and vocabulary famine they convey only surges of will and taste.
There have always been illiterate, innumerate, and inarticulate people,
and perhaps not more now than ever; nor are such human failings to be
condemned out of context. The digimodernist era, inheriting postmod-
ernism’s critique of power/knowledge, its desire to “dismantle thought”
and “expose reason,” is distinguished instead by a bogus valorization of
these failings by electronic-digital culture. (They’re “cool,” “democratic,”
“antielitist,” “young.”) It’s a lie: socioeconomically, now and here as
always, power, wealth, and independence accrue to the highly literate,
numerate, and articulate. Only the naïve are fooled: you might almost
suspect a conspiracy (the competent few own and run society, while the
inept masses are told they’re “cool” by the “culture” the competent con-
trol). Democratic government, if self-serving and short-termist, reduces
all education to what will boost economic growth, since the latter
reelects politicians; consumerism finds innumeracy especially useful,
of course. In such a society knowledge and cleverness are inherently
radical, subversive. Foucault’s antihumanism, which bound knowledge
to power, was only the reverse of the humanist coin: ignorance, illiter-
acy, and innumeracy guarantee poverty, oppression, and exclusion
(or slavery, as Frederick Douglass saw). Anticompetence is death.
6. infantilized adults produce children and teenagers mired forever in
preschool behavior patterns: unable to listen or concentrate, seeking con-
stant entertainment, unwilling to do chores, verbally incontinent and
incoherent, acting and dressing in public as at home. The undermining of
intrafamilial knowledge and training by intellectuals who critiqued
(rightly) patriarchal oppressiveness and cruelty was engulfed by the
destructiveness of consumerist individualism: the latter defined all by
purchasing patterns only (mistakenly called “peer groups”). Conse-
quently what was once taught to children in the home is now frequently
never learned: how to cook, how to manage your finances, how to con-
verse, how to drink alcohol without being antisocial, how to conduct
sexual relationships without generating and destroying unwanted off-
spring, how to integrate generations. Exhausted parents leave their
children to be instructed by electronic-digital entertainment outlets,
by infantilizing and pseudoautistic media texts. A practical and social
ineptitude becomes widespread that a nomadic Bronze Age tribe would
have thought unacceptable, the sign of some inner deficiency. This per-
sonal incompetence is expressed, in pseudomodern fashion, through
astonishingly high levels of technological sophistication. Such societies
will decline; high competence societies will thrive; the former can
import the children of the latter to shore up their own failings but this
will end when the economies of the latter reach, as they will, broad par-
ity. The sociality of the crisis of competence, already so apparent, will
become in the twenty-first century the geopolitics of the competent.
***
The death of competence is digimodernist because the latter stumbles on
to a blasted landscape violently rearranged by a postmodernism that in
retrospect played into the hands of a triumphalist and totalizing consum-
erism. Digimodernism changes the places where competence is found; this
is inevitable. This transition requires an Enlightenment rightly revamped,
rewritten, and renewed by postmodernism, and a restored family structure
rightly critiqued and renewed by feminism. No politics today wants either:
it wants consumerism, which would destroy both.

To be well spoken, highly trained,
well educated, skilled in handicraft,
and highly disciplined,
this is a blessing supreme.27
Conclusion: Endless

And so a book that has foregrounded onwardness and endlessness draws,
with all due ironic self-awareness, to its close. If to conclude is a particu-
larly tricky business here this is prefigured in the very expression “early
digimodernism,” with its suggestion of high and late forms still to come.
Will these as yet unknown phases, evolutions, and histories change digi-
modernism out of all recognition from the infant I’ve tried to describe
here? Perhaps; probably. The chances of every cultural development I’ve
traced remaining influential and important are quite low; digimodernism
has scarcely even begun. So, I’m placed in an unusual position: not only do
I find myself wondering, as I prepare to switch off my computer, what my
future readers will make of these words, as all writers do; I wonder what
I myself will think of them in five, ten, or twenty years’ time. All I can say to
these ghostly potential selves is: this is how things seemed to me, in 2009.
Very soon after the first Europeans set foot in the New World they commis-
sioned and drew maps of the territories they thought they’d discovered.
Some of these now look feeble compared to what we know of the shape of
the Americas, some are presciently accurate. All I know for sure (I think)
is that computerization has changed and will change the text violently and
forever, altering its production, consumption, form, content, economics,
and value. This is my attempt to draw the map of that textual world. After-
wards, we’ll (or you’ll) see.
One question in particular remains to be answered, or properly asked.
What are we to make of digimodernism? Is it to be celebrated, excoriated,
accepted, resisted? Moreover, who is this “we”—whose digimodernism is it?
First impressions suggest a certain populism; its texts are not made, as a
disproportionate number of postmodernist texts were, by professors; and it
surely cannot be used, as theoretical postmodernism and post-structuralism
arguably were, as a way of prolonging the Sorbonne sit-ins by other means.
Who does digimodernism belong to? What is it for? What are we to do
with it? The conclusion is such questions, and their future answers.
Notes

Introduction
1. Gore Verbinski (dir.), Pirates of the Caribbean: The Curse of the Black Pearl (Walt
Disney Pictures, 2003).
2. Fredric Jameson, Postmodernism, or, The Cultural Logic of Late Capitalism. London:
Verso, 1991, p. 6.
3. Ibid.
4. Alan Kirby, “The Death of Postmodernism and Beyond” in Philosophy Now,
November/December 2006 http://www.philosophynow.org/issue58/58kirby.htm Retrieved
January 23, 2009.

1. The Arguable Death of Postmodernism


1. Gilles Lipovetsky, Hypermodern Times, trans. Andrew Brown. Cambridge: Polity,
2005, p. 30. Though bearing Lipovetsky’s name and exclusively devoted to his work, this is
in part written by Sébastien Charles. I don’t differentiate them here since my sense is that
the whole book is finally “by” Lipovetsky.
2. Francis Schaeffer, The God Who is There. London: Hodder and Stoughton, 1968,
pp. 15–16.
3. Ernst Breisach, On the Future of History: The Postmodernist Challenge and Its After-
math. Chicago: University of Chicago Press, 2003, p. 193.
4. Deborah Cogan Thacker and Jean Webb, Introducing Children’s Literature: From
Romanticism to Postmodernism. London: Routledge, 2002, p. 163.
5. Andrew M. Butler and Bob Ford, Postmodernism. Harpenden: Pocket Essentials,
2003, p. 72.
6. The manifesto’s wording is taken from the project’s official Web site: http://www.
dogme95.dk/the_vow/index.htm Retrieved December 21, 2007. However, as its layout var-
ies from one reproduction of the original text to another, I have tried to strike a balance here
between the Web site’s version and clarity on the printed page.

7. http://www.dogme95.dk/the_vow/vow.html Retrieved December 21, 2007. The
above also applies.
8. http://www.dogme95.dk/dogme-films/filmlist.asp Retrieved April 30, 2008. On
December 21, 2007, there had been 231.
9. http://www.dogme95.dk/news/interview/pressemeddelelse.htm Retrieved Decem-
ber 21, 2007.
10. Nicholas Blincoe and Matt Thorne (eds.), All Hail the New Puritans. London: Fourth
Estate, 2000, pp. viii–xvii.
11. http://en.wikipedia.org/wiki/New_Puritans Retrieved April 2, 2008.
12. Blincoe and Thorne, All Hail the New Puritans, pp. vii, viii.
13. Ibid., p. xiv.
14. http://www.3ammagazine.com/litarchives/2003/nov/interview_matt_thorne.html
Retrieved April 2, 2008.
15. Ibid.
16. Katherine Evans (ed.), The Stuckists: The First Remodernist Art Group. London: Vic-
toria Press, 2000, p. 8. The group’s documents are also available on the Web at: http://www.
stuckism.com
17. Charles Thomson, “A Stuckist on Stuckism” in The Stuckists: Punk Victorian, ed.
Frank Milner. Liverpool: National Museums, 2004, p. 30.
18. Evans (ed.), The Stuckists, p. 9.
19. Ibid., pp. 12–13.
20. Ibid., p. 10.
21. Ibid.
22. Ibid.
23. Ibid.
24. http://www.stuckism.com/stuckistwriting.html Retrieved June 14, 2008.
25. Thomson, “A Stuckist on Stuckism,” p. 17.
26. Ibid., p. 7.
27. Ibid., p. 28.
28. Mark Bauerlein, “It’s Curtains for the Gadfly of the Piece . . .” in London Times
Higher Educational Supplement, October 7, 2005, p. 21.
29. Anon., “. . . But What a Memorable Performance” in London Times Higher Educa-
tional Supplement, October 7, 2005, p. 21.
30. Ibid.
31. Ibid., pp. 21, 20.
32. Bauerlein, “It’s Curtains for the Gadfly of the Piece,” p. 20.
33. Jonathan Culler, The Literary in Theory. Stanford: Stanford University Press,
2007, p. 1.
34. James Seaton, “Truth Has Nothing to Do With It” in Wall Street Journal, August 4,
2005, http://www.opinionjournal.com/la/?id=110007056 Retrieved March 29, 2008.
35. Anon., “. . . But What a Memorable Performance,” p. 21.
36. John J. Joughin and Simon Malpas (eds.), The New Aestheticism. Manchester:
Manchester University Press, 2003, p. 3.
37. Terry Eagleton, After Theory. London: Allen Lane, 2003, pp. 1–2. Further references
will appear in the text.
38. Terry Eagleton, “Capitalism, Modernism and Postmodernism” in Against the Grain:
Essays 1975–1985. London: Verso, 1986, p. 131.
39. David Alderson, Terry Eagleton. Basingstoke: Palgrave Macmillan, 2004, pp. 3–4.
40. Terry Eagleton, Literary Theory: An Introduction, 2nd edition. Oxford: Blackwell,
1996, p. 75; Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe.
Oxford: Blackwell, 1968, p. 20. Emphasis in original.
41. Terry Eagleton, Saints and Scholars. London: Verso, 1987, p. 23; Terry Eagleton, The
Gatekeeper. London: Allen Lane, 2001, pp. 62–68; Terry Eagleton, The Meaning of Life.
Oxford: Oxford University Press, 2007, p. 8. The former, with an unbearably delicious irony,
is a postmodern tale, which borrows, shall we say, its premise from Tom Stoppard’s play
Travesties (1974).
42. Gilbert Adair, The Postmodernist Always Rings Twice: Reflections on Culture in the
90s. London: Fourth Estate, 1992, pp. 19, 15. Emphasis in original.
43. G. P. Baker and P. M. S. Hacker, Wittgenstein: Meaning and Understanding. Oxford:
Blackwell, 1983, p. 279. Emphasis in original.
44. Alderson, Terry Eagleton, p. 61.
45. Jacques Derrida, Limited Inc, trans. Samuel Weber. Evanston, IL: Northwestern
University Press, 1988, pp. 144–46. Emphasis in original.
46. Raoul Eshelman, “Performatism in the Movies (1997–2003)” in Anthropoetics, vol. 8,
no. 2 (Fall 2002/Winter 2003), http://www.anthropoetics.ucla.edu/ap0802/movies.htm
Retrieved October 12, 2008.
47. Raoul Eshelman, “Performatism, or the End of Postmodernism” in Anthropoetics,
vol. 6, no. 2 (Fall 2000/Winter 2001), http://www.anthropoetics.ucla.edu/ap0602/perform.
htm Retrieved April 26, 2008.
48. Eshelman, “Performatism in the Movies.”
49. Quotes from Eshelman, “Performatism in the Movies,” and Raoul Eshelman,
“After Postmodernism: Performatism in Literature” in Anthropoetics, vol. 11, no. 2 (Fall
2005/Winter 2006), http://www.anthropoetics.ucla.edu/ap1102/perform05.htm Retrieved
October 11, 2008.
50. Eshelman, “After Postmodernism.”
51. Lipovetsky, Hypermodern Times, p. 30. Further references will appear in the text.
52. Gilles Lipovetsky and Jean Serroy, L’Ecran Global: Culture-médias et cinéma à l’âge
hypermoderne. Paris: Seuil, 2007.
53. Paul Crowther, Philosophy after Postmodernism: Civilized Values and the Scope of
Knowledge. London: Routledge, 2003, p. 2. Emphasis in original.
54. Ibid.
55. José López and Garry Potter (eds.), After Postmodernism: An Introduction to Critical
Realism. London: Athlone Press, 2001, p. 4.
56. Ibid.
57. Charles Jencks, Critical Modernism: Where is Post-Modernism Going? Chichester:
Wiley-Academy, 2007, p. 9.
58. I think on the whole their critical and/or commercial failure can be asserted, but
not with absolute assurance. Metacritic, a Web site that aggregates published reviews, gives
average rankings (out of 100) of 73, 63, and 48 for the three films respectively; it also gives
average “user” (customer) scores (out of 10) of 8.1, 6.4, and 5.3, all of which suggests a dra-
matic falling off (retrieved October 31, 2008). The first section of William Irwin (ed.), More
Matrix and Philosophy: Revolutions and Reloaded Decoded (Peru, IL: Open Court, 2005) is
called “The Sequels: Suck-Fest or Success?” with a first chapter by Lou Marinoff subtitled
“Why the Sequels Failed.” As Wikipedia notes, “the quality of the sequels is still a matter of
debate” (“The Matrix [series],” retrieved October 31, 2008). The tendency is unmistakable,
but not conclusive.
59. Both float whimsically romantic hypotheses about the inspiration for the national
playwright’s breakthrough (transtextuality, pastiche).
60. Hollywood Ending, unreleased in any form in Britain, was called “old, tired and
given-up-on” by the Washington Post, while Melinda and Melinda was described as “worn
and familiar” by Village Voice.
61. Famously described by Tibor Fischer in the London Daily Telegraph as “like your
favorite uncle being caught in a school playground, masturbating.”
62. Brian McHale, Postmodernist Fiction. London: Methuen, 1987; Ian Gregson, Post-
modern Literature. London: Arnold, 2004.
63. Linda Hutcheon, The Politics of Postmodernism, 2nd edition. London: Routledge,
2002, p. 165.
64. Ibid.
65. Ibid., p. 166.
66. Andrew Hoberek, “Introduction: After Postmodernism” in Twentieth-Century
Literature, vol. 53, no. 3 (Fall 2007), p. 233.
67. Wittgenstein, Philosophical Investigations, p. 48. Emphasis removed.
68. Steven Connor (ed.), The Cambridge Companion to Postmodernism. Cambridge:
Cambridge University Press, 2004, p. 1.
69. Ibid.

2. The Digimodernist Text


1. Adapted from The Concise Oxford Dictionary. Oxford: Oxford University Press, 1982,
pp. 946, 1215.
2. Espen J. Aarseth, Cybertext: Perspectives on Ergodic Literature. Baltimore, MD: Johns
Hopkins University Press, 1997, p. 1. Emphasis added.
3. Al Gore, The Assault on Reason. London: Bloomsbury, 2007.
4. Colin MacCabe, Performance. London: BFI, 1998, p. 78.
5. Ibid., p. 76.
6. Ibid., p. 55.
7. Eagleton, Literary Theory, pp. 64–65.
8. Ibid., p. 66. Emphasis added.
9. Raman Selden, Practising Theory and Reading Literature. Harlow: Pearson, 1989, pp. 113, 120.
10. Ibid., p. 125. Emphasis added.
11. Roland Barthes, “From Work to Text” in Image Music Text, trans. Stephen Heath.
London: Flamingo, 1984, pp. 157, 159, 159, 160, 161, 164 (translation modified). Emphases
in original.
12. Ibid., p. 157. Emphasis in original.
13. Ibid., pp. 162–63.
14. J. Hillis Miller, “Performativity as Performance/Performativity as Speech Act:
Derrida’s Special Theory of Performativity” in South Atlantic Quarterly, vol. 106, no. 2
(Spring 2007), p. 220.
15. Barthes, “From Work to Text,” p. 164.
16. Roland Barthes, “The Death of the Author” in Image Music Text, trans. Stephen
Heath. London: Flamingo, 1984, pp. 148, 145.
17. Michel Foucault, “What is an Author?” trans. Josué V. Harari, in Textual Strategies:
Perspectives in Post-Structuralist Criticism, ed. Josué V. Harari. Ithaca, NY: Cornell Univer-
sity Press, 1979, pp. 141–60. For a more recent view, see Seán Burke, The Death and Return
of the Author, 2nd edition. Edinburgh: Edinburgh University Press, 1998.
18. John Fowles, The French Lieutenant’s Woman. London: Vintage, 1996, pp. 97, 388, 389.
19. Martin Amis, Money. London: Penguin, 1985, p. 247; Martin Amis, The Information.
London: Flamingo, 1995, p. 300.
20. Christopher Lasch, The Culture of Narcissism: American Life in an Age of Diminish-
ing Expectations. London: Abacus, 1980, pp. 125, 127, 129.
21. Ibid., p. 150.
22. Allan Bloom, The Closing of the American Mind: How Higher Education has Failed
Democracy and Impoverished the Souls of Today’s Students. Harmondsworth: Penguin, 1988,
p. 62.
23. For certain languages, like Arabic and Japanese, other directions are clearly involved.

3. A Prehistory of Digimodernism
1. Michael Kirby’s Happenings (London: Sidgwick and Jackson, 1965) is an anthology
of statements, scripts, and production notes for happenings orchestrated by Allan Kaprow
(including his seminal 1959 piece “18 Happenings in 6 Parts”), Jim Dine, Claes Oldenburg,
and others. Based on first-hand textual experiences inaccessible to me, it’s recommended as
a replacement of sorts for the section “missing” from this chapter.
2. Laurence O’Toole, Pornocopia: Porn, Sex, Technology and Desire, 2nd edition. Lon-
don: Serpent’s Tail, 1999, p. vii. Both well researched and naïve, O’Toole’s book reflects the
immense difficulties intelligent discussion of pornography faces, caused, to a great extent,
by the form’s digimodernist shattering of conventional meta-textual categories.
3. Michael Allen, Contemporary US Cinema. Harlow: Pearson, 2003, p. 162.
4. Ceefax first went live in the mid-1970s, but take up was initially slow.
5. It was ever thus: mid-1980s research already found that heavy users tended to
be male. See Bradley S. Greenberg and Carolyn A. Lin, Patterns of Teletext Use in the
UK. London: John Libbey, 1988, pp. 12, 47.
6. See, for instance, the Wikipedia entry on international teletext: http://en.wikipedia.
org/wiki/Teletext Retrieved January 26, 2008.
7. The title conflates those of the TV game-show What’s My Line (originally CBS
1950–67, BBC 1951–63) and Brian Clark’s TV play Whose Life is It Anyway? (ITV 1972,
remade by MGM as a feature film in 1981).
8. Slattery was almost destroyed by the show (among other pressures), while Sessions,
McShane, Proops, Stiles, and Lawrence never broke out of the cultural margins. One or two
performers did, like Stephen Fry and Paul Merton, but through other shows.
9. Sean Bidder, Pump up the Volume: A History of House. London: Channel 4 Books,
2001.
10. B. S. Johnson, The Unfortunates. London: Panther Books in association with Secker
& Warburg, 1969, inside left of box.
11. Ibid., “First,” p. 4.
12. Ibid., pp. 1, 3.
13. Jonathan Coe, Like a Fiery Elephant: The Story of B. S. Johnson. London: Picador,
2004, pp. 230, 269.
14. For a good essay on related issues see Kaye Mitchell, “The Unfortunates: Hypertext,
Linearity and the Act of Reading” in Re-Reading B. S. Johnson, ed. Philip Tew and Glyn
White. Basingstoke: Palgrave Macmillan, 2007, pp. 51–64.
15. Coe, Like a Fiery Elephant, pp. 269–70.
16. John Fowles, The Collector. London: Vintage, 2004, p. 162; Martin Amis, Dead
Babies. London: Vintage, 2004, p. 21. (Amis echoes, deliberately or not, a phrase in Oscar
Wilde’s The Picture of Dorian Gray, chap. 4.)
17. Coe, Like a Fiery Elephant, p. 352.
18. Ibid., pp. 230–31.
19. Ibid., p. 343.
20. Julio Cortázar, Hopscotch, trans. Gregory Rabassa. London: Harvill Press, 1967,
unnumbered page.
21. Edward Packard, The Cave of Time. London: W. H. Allen, 1980, p. 1.
22. For an “adult” version of this narrative form see Kim Newman, Life’s Lottery.
London: Simon & Schuster, 1999.
23. Gill Davies, Staging a Pantomime. London: A&C Black, 1995, p. 90. Ellipses in
original.
24. Ibid., p. 92.
25. Tina Bicât, Pantomime. Marlborough: Crowood Press, 2004, p. 25.
26. Ibid.
27. Ibid.
28. Laurence Sterne, The Life and Opinions of Tristram Shandy, Gentleman. Oxford: Clarendon Press, 1983, p. 376.
29. Other derivations have also been proposed.
4. Digimodernism and Web 2.0


1. Connor (ed.), The Cambridge Companion to Postmodernism, pp. 14–15.
2. Quoted in http://www.ibm.com/developerworks/podcast/dwi/cm-int082206txt.
html Retrieved September 18, 2008.
3. http://en.wikipedia.org/wiki/Web_2 Retrieved June 16, 2008.
4. Ibid.
5. David Jennings, Net, Blogs and Rock ’n’ Roll. London: Nicholas Brealey, 2007,
pp. 136–41.
6. This becomes clear when chat room text is reproduced (or mimicked) within the
covers of a book, as in Sam North’s novel The Velvet Rooms. London: Simon & Schuster,
2006.
7. http://www.telegraph.co.uk/arts/main.jhtml?xml=/arts/2008/04/06/nosplit/sv_classics06.xml Retrieved July 16, 2008. Many of the quotations here would have required such extensive use of the parenthetical [sic] that they would have become unreadable; it has consequently been omitted from this section only.
8. Jonathan Yang, The Rough Guide to Blogging. London: Rough Guides, 2006, p. 3.
9. Brad Hill, Blogging for Dummies. Hoboken, NJ: Wiley Publishing, 2006, p. 39.
10. Nat McBride and Jamie Cason, Teach Yourself Blogging. London: Hodder Education,
2006, pp. 15, 15–16.
11. Yang, The Rough Guide to Blogging, p. 3.
12. McBride and Cason, Teach Yourself Blogging, p. 153.
13. http://en.wikipedia.org/wiki/Wikipedia Retrieved July 7, 2008.
14. http://en.wikipedia.org/wiki/Crying_of_Lot_49 Retrieved July 16, 2008.
15. In fact I’ve no idea how many people worked on these sections. The singular is used
here as a grammatical convention.
16. http://en.wikipedia.org/wiki/Henry_James Retrieved July 16, 2008.
17. http://en.wikipedia.org/wiki/Martin_Amis Retrieved July 16, 2008. Hard to imagine
a Jane Austen character called Fucker. The article also suffers from Wikipedia’s vulnerability
to breaking news, giving undue importance to trivial but recent media squabbles.
18. http://en.wikipedia.org/wiki/Postmodernism Retrieved July 9, 2008.
19. See, for instance, any of Wikipedia’s articles about itself.
20. Tom Wolfe, “A Universe of Rumors” in Wall Street Journal, July 14, 2007, http://
online.wsj.com/article/SB118436667045766268.html Retrieved August 28, 2008.
21. Janet Street-Porter, “Just Blog Off ” in London Independent on Sunday, January 6,
2008, http://www.independent.co.uk/opinion/commentators/janet-street-porter/editorat
large-just-blog-off-and-take-your-selfpromotion-and-cat-flap-with-you-768491.html
Retrieved August 28, 2008.
22. http://www.guardian.co.uk/commentisfree/2008/aug/27/oliver.foodanddrink?com
mentpage=1 Retrieved August 28, 2008.
23. Michael Miller, YouTube 4 You. Indianapolis, IN: Que Publishing, 2007, pp. 76–86.
24. Ibid., p. 12.
25. Andrew Keen, The Cult of the Amateur: How Today’s Internet is Killing Our Culture
and Assaulting Our Economy. London: Nicholas Brealey, 2007, p. 5.
26. Jonathan Zittrain, The Future of the Internet and How to Stop It. London: Allen Lane,
2008, p. 70. Emphasis removed.
27. See David Randall and Victoria Richards, “Facebook Can Ruin Your Life. And so
Can MySpace, Bebo . . .” in London Independent on Sunday, February 10, 2008, www.
independent.co.uk/life-style/gadgets-and-tech/.../facebook-can-ruin-your-life-and-so-can-
myspace-bebo-780521.html Retrieved September 21, 2008; Anon., “Web Revellers Wreck
Family Home,” BBC News Web site, April 12, 2007, http://news.bbc.co.uk/1/hi/england/
wear/6549267.stm Retrieved September 1, 2008.

5. Digimodernist Aesthetics
1. Algernon Charles Swinburne, “Hymn to Proserpine.”
2. A good overview of these positions is to be found in Robert W. Witkin, Adorno on
Popular Culture. London: Routledge, 2003.
3. As a contrast to the above, sample Robert Miklitsch, Roll Over Adorno: Critical
Theory, Popular Culture, Audiovisual Media. New York: SUNY Press, 2006.
4. My source is Wikipedia; the tendency is so overwhelming that absolute precision
in the data becomes irrelevant.
5. The gay market for this kind of music is a secondary, derived one.
6. Countries that impose tighter controls on the possession of credit cards, like France,
Spain, and Italy, show a correspondingly weaker form of this shift in scheduling.
7. Friends (NBC), “The One with Rachel’s Assistant,” season 7 episode 4, first transmit-
ted October 26, 2000.
8. http://en.wikipedia.org/wiki/Magi Retrieved August 30, 2008.
9. Catherine Constable, “Postmodernism and Film” in Connor (ed.), The Cambridge
Companion to Postmodernism, pp. 53–59.
10. Suman Gupta, Re-Reading Harry Potter. Basingstoke: Palgrave Macmillan, 2003, p. 9.
11. Jean Baudrillard, “History: A Retro Scenario” in Simulacra and Simulation, trans.
Sheila Faria Glaser. Ann Arbor, MI: University of Michigan Press, 1994, p. 43. Emphasis
added.
12. Cindy Sherman, The Complete Untitled Film Stills. New York: Museum of Modern
Art, 2003, p. 9.
13. Thomas Pynchon, The Crying of Lot 49. London: Vintage, 2000, pp. 117–18.
14. Baudrillard posed this question in 1981 about the subjects of the proto-reality TV
show An American Family, first aired in 1973 (“The Precession of Simulacra” in Simulacra
and Simulation, p. 28). Such shows used to appear once a decade; now they launch every
week. When in 2008 Channel 4 screened a structural remake of the program that so exer-
cised Baudrillard, a British TV critic noted presciently: “it won’t have the same impact . . .
Reality shows, for want of a better expression, are now the norm” (Alison Graham, “Déjà
View” in London Radio Times, September 13–19, 2008, p. 47).
15. Hill, Blogging for Dummies, p. 268.
16. Ben Walters, The Office. London: BFI, 2005, p. 3.
17. Faking It (Channel 4, 2000–05); The Edwardian Country House (Channel 4, 2002);
The Supersizers Go Restoration (BBC2, 2008).
18. Baudrillard, “History: A Retro Scenario,” p. 44.
19. Ihab Hassan, The Dismemberment of Orpheus: Toward a Postmodern Literature, 2nd
edition. Madison, WI: University of Wisconsin Press, 1982, p. 268.
20. Stuart Sim, Irony and Crisis: A Critical History of Postmodern Culture. Cambridge:
Icon, 2002.
21. Stanley Aronowitz, Dead Artists, Live Theories and Other Cultural Problems.
London: Routledge, 1994, p. 40. Emphasis added.
22. Adair, The Postmodernist Always Rings Twice, p. 14.
23. http://en.wikipedia.org/wiki/New_Sincerity Retrieved March 28, 2008.
24. Lawrence Kasdan (dir.), French Kiss (20th Century Fox, 1995).
25. George Lucas (dir.), Star Wars Episode I: The Phantom Menace (20th Century Fox,
1999).
26. Sam Raimi (dir.), Spider-Man (Columbia, 2002).
27. Erich Auerbach, Mimesis: The Representation of Reality in Western Literature, trans.
Willard R. Trask. Princeton: Princeton University Press, 2003, p. 23.
28. Ibid., p. 13.
29. Ibid., p. 23.
30. Malcolm Bradbury, The Modern British Novel 1878–2001, rev. edition. London:
Penguin, 2001, p. 505.
31. J. R. R. Tolkien, “Foreword to the Second Edition” in The Fellowship of the Ring.
London: HarperCollins, 2007, p. xxv.
32. Alison McMahan, The Films of Tim Burton: Animating Live Action in Contemporary
Hollywood. London: Continuum, 2005, p. 238.
33. Gavin Keulks, “W(h)ither Postmodernism: Late Amis” in Martin Amis: Postmod-
ernism and Beyond, ed. Gavin Keulks. Basingstoke: Palgrave Macmillan, 2006, p. 159.
34. Jameson, Postmodernism, p. 300.
35. Friends (NBC), “The One with the Breast Milk,” season 2 episode 2, first transmitted
September 28, 1995.

6. Digimodernist Culture
1. Amis, The Information, pp. 435–36. Emphases in original.
2. Steven Connor, Postmodernist Culture: An Introduction to Theories of the
Contemporary. Oxford: Blackwell, 1989.
3. Connor (ed.), The Cambridge Companion to Postmodernism.
4. Jameson, Postmodernism, p. 299.
5. “Videogames” here encompass all software-based electronic games whatever plat-
form they may be played on, and are synonymous with “computer games.” Academically the
definition is moot, but mine is closer to the popular sense of the word.
6. Nic Kelman, Video Game Art. New York: Assouline, 2005. Andy Clarke and Grethe
Mitchell (eds.), Videogames and Art (Bristol: Intellect, 2007) sets games among the broader
practices of pictorial art.
7. Some games, like SimCity, are more accurately “played with” than “played.”
8. Vladimir Nabokov, Lectures on Literature. London: Weidenfeld & Nicolson, 1980,
p. 251. Emphasis in original.
9. Ibid.
10. See Christiane Paul, Digital Art, rev. edition. London: Thames & Hudson, 2008.
11. Quoted in “Hoffman Hits Out over Modern Film,” BBC News Web site, January 25,
2005, http://news.bbc.co.uk/1/hi/entertainment/film/4206601.stm Retrieved September 1,
2008.
12. Quoted in Clifford Coonan, “Greenaway Announces the Death of Cinema—and
Blames the Remote-Control Zapper” in London Independent, October 10, 2007, http://
www.independent.co.uk/news/world/asia/greenaway-announces-the-death-of-
cinema--and-blames-the-remotecontrol-zapper-394546.html Retrieved September 21, 2008.
Punctuation amended.
13. Mark Cousins, The Story of Film. London: Pavilion Books, 2004, p. 5.
14. Even more striking is the French poll conducted in November 2008 by Les Cahiers
du Cinéma, which could not find one film made since 1963 to put in its twenty all-time
greatest movies. This excision of the more recent half of cinema history suggests a paralysis
of critical appreciation.
15. Also the reliance on circus performance by Fellini and others.
16. In W. (2008) Stone depicts Bush junior as a bemused nonentity.
17. Cousins, The Story of Film, p. 9.
18. Ibid., pp. 447, 458.
19. Ibid., p. 493.
20. Federico Fellini (dir.), 8½ (Cineriz, 1963). My translation.
21. An example is Jean-Pierre Jeunet’s overpraised and complacent Amélie (2001). Its
real title, Le Fabuleux Destin d’Amélie Poulain, with its internal rhyme and use of “foal”
(poulain) as a proper name, echoes Chickin Lickin or Mr Magorium’s Wonder Emporium.
Heavily indebted to Tim Burton and Ally McBeal, its blend of digitization and children’s
story motifs could have been made in New York, though no American studio would have
dared bankroll its implied politics.
22. My comments on A Picture of Britain quote verbatim from Niki Strange’s paper
“ ‘The Days of Commissioning Programmes are over . . .’: The BBC’s ‘Bundled Project’ ” at the
Television Studies Goes Digital conference, London Metropolitan University, September 14,
2007. See also David Dimbleby, A Picture of Britain. London: Tate Publishing, 2005.
23. Jean Ritchie, Big Brother: The Official Unseen Story. London: Channel 4 Books, 2000,
p. 154.
24. Ibid., p. 73.
25. Ibid., p. 92.
26. Peep Show (Channel 4), season 2 episode 4 [10], “University,” first transmitted
December 3, 2004.
27. This isn’t original, but I forget who argued it first (candor is a virtue).
28. Terry Kirby, “Radio Enters a New Golden Age as Digital Use Takes Off ” in London
Independent, February 2, 2007, www.independent.co.uk/news/media/radio-enters-a-new-
golden-age-as-digital-use-takes-off-434732.html Retrieved September 22, 2008.
29. Anon., “Quarter of Radio Listeners Make Switch to Digital” in London Independent,
August 16, 2007, www.independent.co.uk/news/media/quarter-of-radio-listeners-make-
switch-to-digital-461813.html Retrieved September 22, 2008.
30. Ben Dowell, “Podcasts Help Lift Live Radio Audiences” in London Guardian, July 2,
2008, www.guardian.co.uk/media/2008/jul/02/radio.rajars Retrieved September 22, 2008.
31. John Plunkett, “Digital Radio Attracts More Listeners” in London Guardian,
May 1, 2008, www.guardian.co.uk/media/2008/may/01/digitaltvradio.rajars Retrieved
September 22, 2008.
32. Owen Gibson, “Record Numbers Tune in to BBC” in London Guardian, May 2,
2008, www.guardian.co.uk/media/2008/may/02/bbc.radio Retrieved September 22, 2008.
33. Green’s style is interestingly discussed in Andrew Tolson, Media Talk: Spoken
Discourse on TV and Radio. Edinburgh: Edinburgh University Press, 2006, pp. 94–112.
34. Jean Baudrillard, America, trans. Chris Turner. London: Verso, 1988.
35. Fredric Jameson, “Postmodernism and Consumer Society” in The Anti-Aesthetic:
Essays on Postmodern Culture, ed. Hal Foster. Port Townsend, WA: Bay Press, 1983, p. 111.
36. Jameson, Postmodernism, p. 1.
37. Simon Frith (ed.), Facing the Music: Essays on Pop, Rock and Culture. London:
Mandarin, 1990, p. 5.
38. Ibid. Emphasis in original.
39. I mean musically. The personal influence of Joan Baez on Dylan’s transformation
into a late modernist was, I suspect, immense.
40. Ed Whitley has argued that The Beatles (1968) is a postmodern album because of its
heterogeneity, pastiche, plurality, bricolage, and fragmentation. But for me its diversity
stems from an attempt at collective and multifaceted self-portraiture, suggested by the
title. This unifies and totalizes the text. Postmodern elements appear on Highway 61 Revis-
ited and elsewhere before 1972, but subjugated to other aesthetics (alloys, alloys). See
Ed Whitley, “The Postmodern White Album” in The Beatles, Popular Music and Society:
A Thousand Voices, ed. Ian Inglis. Basingstoke: Macmillan, 2000, pp. 105–25.
41. Walter Benjamin, “The Work of Art in the Age of Mechanical Reproduction” in
Illuminations, trans. Harry Zohn. New York: Schocken, 1969, p. 224.
42. Simon Frith, Music for Pleasure: Essays in the Sociology of Pop. Cambridge: Polity,
1988, p. 1.
43. Ibid., pp. 99, 96.
44. Dylan Jones, iPod, Therefore I Am. London: Weidenfeld & Nicolson, 2005,
pp. 152–56, 259–64.
45. John Harris, The Last Party: Britpop, Blair and the Demise of English Rock. London:
Fourth Estate, 2003, pp. 370, 371.
46. See Joseph Tate (ed.), The Music and Art of Radiohead. Aldershot: Ashgate,
2005.
47. Paul Auster and Sam Messer, The Story of My Typewriter. New York: Distributed Art
Publishers, 2002.
48. The business implications of this are explored in Chris Anderson, The Long Tail:
How Endless Choice is Creating Unlimited Demand. London: Random House, 2006.
49. Robert Coover, “The End of Books” in New York Times Book Review, June 21, 1992.
50. Geoff Ryman, 253: The Print Remix. London: Flamingo, 1998; http://www.ryman-
novel.com Retrieved November 11, 2008.
51. Astrid Ensslin, Canonizing Hypertext: Explorations and Constructions. London:
Continuum, 2007, pp. 84–86.
52. Kevin Kelly, “Scan This Book!” in New York Times, May 14, 2006, www.nytimes.
com/2006/05/14/magazine/14publishing.html?ex=1305259200&en=c07443d368771bb8&
ei=5090 Retrieved October 10, 2008.

7. Toward a Digimodernist Society?


1. James Gordon Finlayson, Habermas. Oxford: Oxford University Press, 2005, p. 63.
2. Jean-François Lyotard, “Apostil on Narratives” in The Postmodern Explained to
Children: Correspondence 1982–85, trans. ed. Julian Pefanis and Morgan Thomas. Sydney:
Power Publications, 1992, p. 31.
3. Zaki Laïdi, Le sacre du Présent. Paris: Flammarion, 2000.
4. Simon Baron-Cohen, Autism and Asperger Syndrome. Oxford: Oxford University
Press, 2008, p. 15.
5. Ibid., p. viii.
6. Quoted in Stuart Murray, Representing Autism: Culture, Narrative, Fascination.
Liverpool: Liverpool University Press, 2008, p. 10.
7. Philip K. Dick, Do Androids Dream of Electric Sheep? London: Grafton, 1972, p. 33.
8. Daniel Lauffer, “Asperger’s, Empathy and Blade Runner” in Journal of Autism and
Developmental Disorders, vol. 34, no. 5 (October 2004), pp. 587–88.
9. Mark Haddon, The Curious Incident of the Dog in the Night-Time. Oxford: David
Fickling, 2003, pp. 242–44.
10. Quoted in Jeremy Laurance, “Keith Joseph, the Father of Thatcherism, ‘was autistic’
claims Professor” in London Independent, July 12, 2006, http://www.independent.co.uk/
life-style/health-and-wellbeing/health-news/keith-joseph-the-father-of-thatcherism-was-
autistic-claims-professor-407600.html Retrieved August 7, 2008.
11. Baron-Cohen, Autism and Asperger Syndrome, p. 32.
12. Ibid., pp. 33, 71. For an extended discussion of the “extreme male brain” theory of
autism see Simon Baron-Cohen, The Essential Difference (London: Penguin, 2004). He
describes the “extreme female brain” as “unknown terrain ahead” (p. 173), as (like some
particle) an entity theoretically posited but not yet empirically found, rather than as nonex-
istent. His “best guess,” that someone with such a brain “would be a wonderfully caring
person . . . who was virtually technically disabled” (pp. 173–74) is, however, an improbable
figure: the latter deficiency would surely leave anyone so helpless and dependent it would
preclude development of the former quality’s strength and resourcefulness. Conceptualized
like this, the “extreme female brain” is unlikely ever to be glimpsed; as with much discourse
around autism, a richer sense of the sociocultural context is needed.
13. Murray, Representing Autism, pp. 139–65.
14. Simon Malpas, Jean-François Lyotard. Abingdon: Routledge, 2003, pp. 1, 123.
15. Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge, trans.
Geoff Bennington and Brian Massumi. Manchester: Manchester University Press, 1984,
p. xxiv. Emphasis in original.
16. D. C. R. A. Goonetilleke, Salman Rushdie. Basingstoke: Macmillan, 1998, p. 105.
17. Christopher Butler, Postmodernism: A Very Short Introduction. Oxford: Oxford
University Press, 2002, p. 13.
18. Jean-François Lyotard, La Condition Postmoderne: Rapport Sur le Savoir. Paris: Les
Editions de Minuit, 1979, p. 7.
19. Jean-François Lyotard, “Answer to the Question: What is the Postmodern?” in The
Postmodern Explained to Children: Correspondence 1982–85, trans. ed. Julian Pefanis and
Morgan Thomas. Sydney: Power Publications, 1992, pp. 24–25.
20. Lyotard, “Apostil on Narratives,” p. 29.
21. This is not to overlook detailed problems such as the book’s misuse of Wittgenstein
or the term “narrative.” I return to the question of paralogy later.
22. Michel Foucault, The Order of Things: An Archaeology of the Human Sciences, trans.
Alan Sheridan. London: Routledge, 2002, p. 422.
23. Consumerist mechanisms have also restructured English primary school education,
imposing stressful and damaging entry and exit tests on small children solely in order to
construct prospective parents as “consumers” selecting the best “product” in the school
marketplace.
24. Quoted in Francesca Steele, “Go Back to School . . . Starting Right Here” in London
Times, January 2, 2008, http://www.timesonline.co.uk/tol/life_and_style/education/
article3117906.ece Retrieved November 20, 2008.
25. Dolan Cummings, “Introduction” in Dolan Cummings, et al., Reality TV: How Real
is Real? London: Hodder & Stoughton, 2002, p. xiii.
26. These compendia of dictates are called self-help books, an ingratiating deception
that foreshadows their contents.
27. In Jack Kornfield (ed.), Teachings of the Buddha. Boston, MA: Shambhala, 1993,
p. 13.
WORKS CITED

Aarseth, Espen J. Cybertext: Perspectives on Ergodic Literature. Baltimore, MD: Johns Hopkins University Press, 1997.
Adair, Gilbert. The Postmodernist Always Rings Twice: Reflections on
Culture in the 90s. London: Fourth Estate, 1992.
Alderson, David. Terry Eagleton. Basingstoke: Palgrave Macmillan, 2004.
Allen, Michael. Contemporary US Cinema. Harlow: Pearson, 2003.
Amis, Martin. Dead Babies. London: Vintage, 2004 [1975].
—. The Information. London: Flamingo, 1995.
—. Money. London: Penguin, 1985 [1984].
Anon., “. . . But What a Memorable Performance” in London Times Higher
Educational Supplement, October 7, 2005, pp. 20–21.
Aronowitz, Stanley. Dead Artists, Live Theories and Other Cultural Problems.
London: Routledge, 1994.
Auerbach, Erich. Mimesis: The Representation of Reality in Western Litera-
ture, trans. Willard R. Trask. Princeton: Princeton University Press,
2003 [1953].
Baker, G. P. and Hacker, P. M. S. Wittgenstein: Meaning and Understanding.
Oxford: Blackwell, 1983 [1980].
Baron-Cohen, Simon. Autism and Asperger Syndrome. Oxford: Oxford
University Press, 2008.
—. The Essential Difference. London: Penguin, 2004 [2003].
Barthes, Roland. “The Death of the Author” in Image Music Text, trans.
Stephen Heath. London: Flamingo, 1984, pp. 142–48.
—. “From Work to Text” in Image Music Text, trans. Stephen Heath.
London: Flamingo, 1984, pp. 155–64.
Baudrillard, Jean. “History: A Retro Scenario” in Simulacra and Simula-
tion, trans. Sheila Faria Glaser. Ann Arbor, MI: University of Michigan
Press, 1994, pp. 43–48.

—. “The Precession of Simulacra” in Simulacra and Simulation, trans. Sheila Faria Glaser. Ann Arbor, MI: University of Michigan Press, 1994, pp. 1–42.
Bauerlein, Mark. “It’s Curtains for the Gadfly of the Piece . . .” in London
Times Higher Educational Supplement, October 7, 2005, pp. 20–21.
BBC News Web site, “Hoffman Hits Out over Modern Film,” January
25, 2005, http://news.bbc.co.uk/1/hi/entertainment/film/4206601.stm
Retrieved September 1, 2008.
Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduc-
tion” in Illuminations, trans. Harry Zohn. New York: Schocken, 1969
[1955], pp. 217–51.
Bicât, Tina. Pantomime. Marlborough: Crowood Press, 2004.
Blincoe, Nicholas and Thorne, Matt (eds.). All Hail the New Puritans.
London: Fourth Estate, 2000.
Bloom, Allan. The Closing of the American Mind: How Higher Education
has Failed Democracy and Impoverished the Souls of Today’s Students.
Harmondsworth: Penguin, 1988 [1987].
Bradbury, Malcolm. The Modern British Novel 1878–2001, rev. edition.
London: Penguin, 2001.
Breisach, Ernst. On the Future of History: The Postmodernist Challenge and
Its Aftermath. Chicago: University of Chicago Press, 2003.
Butler, Andrew M. and Ford, Bob. Postmodernism. Harpenden: Pocket
Essentials, 2003.
Butler, Christopher. Postmodernism: A Very Short Introduction. Oxford:
Oxford University Press, 2002.
Coe, Jonathan. Like a Fiery Elephant: The Story of B. S. Johnson. London:
Picador, 2004.
Cogan Thacker, Deborah and Webb, Jean. Introducing Children’s Literature:
From Romanticism to Postmodernism. London: Routledge, 2002.
Connor, Steven (ed.). The Cambridge Companion to Postmodernism.
Cambridge: Cambridge University Press, 2004.
Coonan, Clifford. “Greenaway Announces the Death of Cinema—and
Blames the Remote-Control Zapper” in London Independent, October
10, 2007, http://www.independent.co.uk/news/world/asia/greenaway-
announces-the-death-of-cinema--and-blames-the-remotecontrol-zapper-
394546.html Retrieved September 21, 2008.
Cortázar, Julio. Hopscotch, trans. Gregory Rabassa. London: Harvill Press,
1967 [1963].
Cousins, Mark. The Story of Film. London: Pavilion Books, 2004.
Crowther, Paul. Philosophy after Postmodernism: Civilized Values and the
Scope of Knowledge. London: Routledge, 2003.
Culler, Jonathan. The Literary in Theory. Stanford: Stanford University
Press, 2007.
Cummings, Dolan. “Introduction” in Reality TV: How Real is Real? ed. Dolan
Cummings, et al. London: Hodder & Stoughton, 2002, pp. xi–xvii.
Davies, Gill. Staging a Pantomime. London: A&C Black, 1995.
Derrida, Jacques. Limited Inc, trans. Samuel Weber. Evanston, IL: North-
western University Press, 1988.
Dick, Philip K. Do Androids Dream of Electric Sheep? London: Grafton,
1972 [1968].
Eagleton, Terry. After Theory. London: Allen Lane, 2003.
—. “Capitalism, Modernism and Postmodernism” in New Left Review 152
(July–August 1985). Reprinted in Eagleton, Terry. Against the Grain:
Essays 1975–1985. London: Verso, 1986, pp. 131–47.
—. The Gatekeeper. London: Allen Lane, 2001.
—. Literary Theory: An Introduction, 2nd edition. Oxford: Blackwell, 1996.
—. The Meaning of Life. Oxford: Oxford University Press, 2007.
—. Saints and Scholars. London: Verso, 1987.
Ensslin, Astrid. Canonizing Hypertext: Explorations and Constructions.
London: Continuum, 2007.
Eshelman, Raoul. “After Postmodernism: Performatism in Literature”
in Anthropoetics, vol. 11, no. 2 (Fall 2005/Winter 2006), http://www.
anthropoetics.ucla.edu/ap1102/perform05.htm Retrieved October 11,
2008.
—. “Performatism in the Movies (1997–2003)” in Anthropoetics, vol. 8,
no. 2 (Fall 2002/Winter 2003), http://www.anthropoetics.ucla.edu/
ap0802/movies.htm Retrieved October 12, 2008.
—. “Performatism, or the End of Postmodernism” in Anthropoetics, vol. 6,
no. 2 (Fall 2000/Winter 2001), http://www.anthropoetics.ucla.edu/
ap0602/perform.htm Retrieved April 26, 2008.
Evans, Katherine (ed.). The Stuckists: The First Remodernist Art Group.
London: Victoria Press, 2000.
Finlayson, James Gordon. Habermas. Oxford: Oxford University Press,
2005.
Foucault, Michel. The Order of Things: An Archaeology of the Human
Sciences, trans. Alan Sheridan. London: Routledge, 2002 [1966].
Fowles, John. The Collector. London: Vintage, 2004 [1963].
—. The French Lieutenant’s Woman. London: Vintage, 1996 [1969].
Frith, Simon. Music for Pleasure: Essays in the Sociology of Pop. Cambridge:
Polity, 1988.
— (ed.). Facing the Music: Essays on Pop, Rock and Culture. London:
Mandarin, 1990 [1988].
Gibson, Owen. “Record Numbers Tune in to BBC” in London Guardian,
May 2, 2008, www.guardian.co.uk/media/2008/may/02/bbc.radio
Retrieved September 22, 2008.
Goonetilleke, D. C. R. A. Salman Rushdie. Basingstoke: Macmillan, 1998.
Graham, Alison. “Déjà View” in London Radio Times, September 13–19,
2008, p. 47.
Greenberg, Bradley S. and Lin, Carolyn A. Patterns of Teletext Use in the
UK. London: John Libbey, 1988.
Gupta, Suman. Re-Reading Harry Potter. Basingstoke: Palgrave Macmillan,
2003.
Haddon, Mark. The Curious Incident of the Dog in the Night-Time. Oxford:
David Fickling, 2003.
Harris, John. The Last Party: Britpop, Blair and the Demise of English Rock.
London: Fourth Estate, 2003.
Hassan, Ihab. The Dismemberment of Orpheus: Toward a Postmodern Liter-
ature, 2nd edition. Madison, WI: University of Wisconsin Press, 1982.
Hill, Brad. Blogging for Dummies. Hoboken, NJ: Wiley Publishing, 2006.
Hillis Miller, J. “Performativity as Performance/Performativity as Speech
Act: Derrida’s Special Theory of Performativity” in South Atlantic Quar-
terly, vol. 106, no. 2 (Spring 2007), pp. 219–35.
Hoberek, Andrew. “Introduction: After Postmodernism” in Twentieth-
Century Literature, vol. 53, no. 3 (Fall 2007), pp. 233–47.
Hutcheon, Linda. The Politics of Postmodernism, 2nd edition. London:
Routledge, 2002.
Jameson, Fredric. “Postmodernism and Consumer Society” in The Anti-
Aesthetic: Essays on Postmodern Culture, ed. Hal Foster. Port Townsend,
WA: Bay Press, 1983, pp. 111–25.
—. Postmodernism, or, The Cultural Logic of Late Capitalism. London:
Verso, 1991.
Jencks, Charles. Critical Modernism: Where is Post-Modernism Going?
Chichester: Wiley-Academy, 2007.
Jennings, David. Net, Blogs and Rock ’n’ Roll. London: Nicholas Brealey, 2007.
Johnson, B. S. The Unfortunates. London: Panther Books in association
with Secker & Warburg, 1969.
Jones, Dylan. iPod, Therefore I Am. London: Weidenfeld & Nicolson, 2005.
Joughlin, John J. and Malpas, Simon (eds.). The New Aestheticism.
Manchester: Manchester University Press, 2003.
Keen, Andrew. The Cult of the Amateur: How Today’s Internet is Killing Our
Culture and Assaulting Our Economy. London: Nicholas Brealey, 2007.
Kelly, Kevin. “Scan This Book!” in New York Times, May 14, 2006, www.
nytimes.com/2006/05/14/magazine/14publishing.html?ex=130525920
0&en=c07443d368771bb8&ei=5090 Retrieved October 10, 2008.
Keulks, Gavin. “W(h)ither Postmodernism: Late Amis” in Martin Amis:
Postmodernism and Beyond, ed. Gavin Keulks. Basingstoke: Palgrave
Macmillan, 2006, pp. 158–79.
Kirby, Terry. “Radio Enters a New Golden Age as Digital Use Takes Off ” in
London Independent, February 2, 2007, www.independent.co.uk/news/
media/radio-enters-a-new-golden-age-as-digital-use-takes-off-
434732.html Retrieved September 22, 2008.
Kornfield, Jack (ed.). Teachings of the Buddha. Boston, MA: Shambhala,
1993.
Lasch, Christopher. The Culture of Narcissism: American Life in an Age of
Diminishing Expectations. London: Abacus, 1980 [1979].
Lauffer, Daniel. “Asperger’s, Empathy and Blade Runner” in Journal of
Autism and Developmental Disorders, vol. 34, no. 5 (October 2004),
pp. 587–88.
Laurance, Jeremy. “Keith Joseph, the Father of Thatcherism, ‘was Autistic’
Claims Professor” in London Independent, July 12, 2006, http://www.
independent.co.uk/life-style/health-and-wellbeing/health-news/keith-
joseph-the-father-of-thatcherism-was-autistic-claims-professor-407600.
html Retrieved August 7, 2008.
Lipovetsky, Gilles. Hypermodern Times, trans. Andrew Brown. Cambridge:
Polity, 2005 [2004].
López, José and Potter, Garry (eds.). After Postmodernism: An Introduction
to Critical Realism. London: Athlone Press, 2001.
Lyotard, Jean-François. “Answer to the Question: What is the Postmodern?”
in The Postmodern Explained to Children: Correspondence 1982–85,
trans. ed. Julian Pefanis and Morgan Thomas. Sydney: Power Publications,
1992 [1986], pp. 9–25.
—. “Apostil on Narratives” in The Postmodern Explained to Children:
Correspondence 1982–85, trans. ed. Julian Pefanis and Morgan Thomas.
Sydney: Power Publications, 1992 [1986], pp. 27–32.
—. La Condition Postmoderne: Rapport Sur le Savoir. Paris: Les Editions de
Minuit, 1979.
—. The Postmodern Condition: A Report on Knowledge, trans. Geoff
Bennington and Brian Massumi. Manchester: Manchester University
Press, 1984.
MacCabe, Colin. Performance. London: BFI, 1998.
Malpas, Simon. Jean-François Lyotard. Abingdon: Routledge, 2003.
McBride, Nat and Cason, Jamie. Teach Yourself Blogging. London: Hodder
Education, 2006.
McMahan, Alison. The Films of Tim Burton: Animating Live Action in
Contemporary Hollywood. London: Continuum, 2005.
Miller, Michael. YouTube 4 You. Indianapolis, IN: Que Publishing, 2007.
Murray, Stuart. Representing Autism: Culture, Narrative, Fascination.
Liverpool: Liverpool University Press, 2008.
Nabokov, Vladimir. Lectures on Literature. London: Weidenfeld &
Nicolson, 1980.
O’Toole, Laurence. Pornocopia: Porn, Sex, Technology and Desire, 2nd
edition. London: Serpent’s Tail, 1999.
Packard, Edward. The Cave of Time. London: W. H. Allen, 1980 [1979].
Plunkett, John. “Digital Radio Attracts More Listeners” in London Guardian,
May 1, 2008, www.guardian.co.uk/media/2008/may/01/digitaltvradio.
rajars Retrieved September 22, 2008.
Pynchon, Thomas. The Crying of Lot 49. London: Vintage, 2000 [1965].
Ritchie, Jean. Big Brother: The Official Unseen Story. London: Channel 4
Books, 2000.
Schaeffer, Francis. The God Who is There. London: Hodder and Stoughton,
1968.
Seaton, James. “Truth Has Nothing to Do with It” in Wall Street Journal,
August 4, 2005, http://www.opinionjournal.com/la/?id=110007056
Retrieved March 29, 2008.
Selden, Raman. Practising Theory and Reading Literature. Harlow: Pearson,
1989.
Sherman, Cindy. The Complete Untitled Film Stills. New York: Museum of
Modern Art, 2003.
Steele, Francesca. “Go Back to School . . . Starting Right Here” in London
Times, January 2, 2008, http://www.timesonline.co.uk/tol/life_and_
style/education/article3117906.ece Retrieved November 20, 2008.
Sterne, Laurence. The Life and Opinions of Tristram Shandy, Gentleman.
Oxford: Clarendon Press, 1983 [1759–67].
Street-Porter, Janet. “Just Blog Off ” in London Independent on Sunday,
January 6, 2008, http://www.independent.co.uk/opinion/commentators/
janet-street-porter/editoratlarge-just-blog-off-and-take-your-
selfpromotion-and-cat-flap-with-you-768491.html Retrieved August
28, 2008.
Thomson, Charles. “A Stuckist on Stuckism” in The Stuckists: Punk Victo-
rian, ed. Frank Milner. Liverpool: National Museums, 2004, pp. 6–31.
Tolkien, J. R. R. “Foreword to the Second Edition” in The Fellowship of the
Ring. London: HarperCollins, 2007 [1966], pp. xxiii–xxvii.
Walters, Ben. The Office. London: BFI, 2005.
Wittgenstein, Ludwig. Philosophical Investigations, trans. G. E. M. Anscombe.
Oxford: Blackwell, 1968 [1953].
Wolfe, Tom. “A Universe of Rumors” in Wall Street Journal, July 14, 2007,
http://online.wsj.com/article/SB118436667045766268.html Retrieved
August 28, 2008.
Yang, Jonathan. The Rough Guide to Blogging. London: Rough Guides,
2006.
Zittrain, Jonathan. The Future of the Internet and How to Stop It. London:
Allen Lane, 2008.
INDEX

“6–0–6” (BBC 5 Live) 205 Amis, Martin 23–4, 45, 47, 59, 93,
9/11 151, 154, 177, 226, 237 117, 166, 207, 218
The Information 59, 91, 166
Aardman Animations 10–11, 16–17 London Fields 59, 218
Chicken Run 10–13, 137 Money 47, 59, 140
Wallace and Gromit: The Curse of the Time’s Arrow 23–4
Were-Rabbit 17 amnesia, digimodernist textual 64, 83,
The Wrong Trousers 10–11 149
Aarseth, Espen J. 53 Amos, Tori 217
Ergodic Literature 53–4 anonymity, digimodernist textual 52,
Adair, Gilbert 36–7, 150–1 60, 62, 69, 71, 75, 79, 83, 89,
addiction, digimodernist textual 79, 106–7, 118, 171, 193, 195
83, 148–9, 180 Antonioni, Michelangelo 139, 187
Adorno, Theodor 34, 125, 129, L’avventura 187
134, 136 apparently real, the 22, 48, 87, 120–1,
After Poststructuralism (Davis) 28 139–50, 177, 185, 187–8, 191,
Against Postmodernism 196, 198
(Callinicos) 38 Arctic Monkeys 216
Albarn, Damon 46, 217 [See also Blur, Whatever People Say I Am, That’s
Gorillaz] What I’m Not 216
Alderson, David 33–4, 37 Armageddon (Bay) 177
Allen, Michael 80 Aronowitz, Stanley 150
Allen, Peter 203–4 art-album 209–11, 213–15
Allen, Woody 45, 47 Asperger syndrome 228–30, 232–3
Stardust Memories 47 Asteroids (Atari) 169
All Hail the New Puritans (ed. Blincoe Astral Weeks (Morrison) 210
and Thorne) 22–3 audience participation 83–7, 95–9,
Ally McBeal (Fox) 196 191, 198
Alteration, The (Amis) 138 Auerbach, Erich 156–7, 159
Amazon 219–21 Mimesis 156–7
American Family, An (PBS) 190 Austen, Jane 163, 181, 219–20
American Idol (Fox) 131, 214 Auster, Paul 218–19

271
272 Index

authorship, digimodernist 51–2, Beloved (Morrison) 23, 149, 218


58–61, 69, 71, 83–4, 86–7, 89, 97, Benjamin, Walter 88, 211
106–12, 153, 161, 171, 187, 191–2, Bennington, Geoff 235
194–5, 219 Beowulf (Zemeckis) 182
autism 227–33 Berlusconi, Silvio 109
automavision 186–7 Berners-Lee, Tim 101
Bettelheim, Bruno 232
Bacon, Richard 206 Big Brother (Channel 4) 52, 60, 72,
Baddiel and Skinner Unplanned 147, 190–6
(ITV) 198 Big Read (BBC) 161
Baddiel, David 198 Bill, The (ITV) 162
Barnes, Julian 46, 173 Blackadder (TV series, BBC) 48
England, England 173 Black Box 89
Flaubert’s Parrot 91 “Ride on Time” 89
Baron Cohen, Sacha 145–7 Blaine, David 142, 148
Ali G (film and TV show) 145–6 Blair, Tony 154–5, 236, 242
Borat: Cultural Learnings of America Blincoe, Nicholas 22–3
for Make Benefit Glorious Nation blogs 71, 102–5, 111–12, 117–18, 120,
of Kazakhstan 22, 145–8, 187 142, 171, 195, 220–1
Baron-Cohen, Simon 227 Bloody Chamber, The (Carter) 15, 50
Barthes, Roland 55–9, 88, 124, 193 Bloom, Allan 67
“The Death of the Author” 59 Blur 126, 130, 213 [See also Albarn,
“From Work to Text” 56–7 Damon]
Batman (franchise) 16, 128, 153 Blyton, Enid 11, 137
Baudrillard, Jean 38, 58, 124–5, 139, Book of Dave, The (Self) 237
149, 151, 173, 206 Bordwell, David 28–9
“The Precession of Simulacra” 58 Bourdieu, Pierre 207
Bauerlein, Mark 27–9 Bowie, David 129–31, 207, 209
BBC 80–3, 144, 154, 161, 189–91, Ziggy Stardust and the Spiders from
200–3 Mars 209
BBC 5 Live 201–6 Bow Wow Wow 185
BBC Radio 1 214 Boy George 215
BBC Radio 2 214 Boyzone 131, 136
BBC Radio 4 202 Bradbury, Malcolm 156–7
Beatles, the 129, 131, 135, 207–8, Brecht, Bertolt 34–35
212–13, 215 Breisach, Ernst 7
Everest 213 Brit art 26
“Help!” 212 Britpop 130, 212–13
“Hey Jude” 213, 215 Buffy the Vampire Slayer (The WB,
Let It Be 213 UPN) 136
Rubber Soul 208, 210 Burden, Rachel 203–4
Bebo 121 Bush, George W. 154–5, 236, 242
Beckett, Samuel 49, 63 Butler, Christopher 235
Waiting for Godot 63 Butler, Judith 40
Index 273

Calvino, Italo 15, 23 Cobain, Kurt 212


If on a Winter’s Night a Traveler 15 Coe, Jonathan 93–4
Cambridge Companion to Cogan Thacker, Deborah 15
Postmodernism, The Comment is Free 110, 118
(ed. Connor) 136 competence 113, 116–18, 120, 150,
Cameron, David 214, 242 241–5
Cameron, James 173 Composition No. 1 (Saporta) 93–4
The Abyss 173 conceptualism 26–7
Terminator 2: Judgment Day Connor, Steven 49, 101, 113, 166–7
173–4 Postmodernist Culture 166
Canterbury Tales, The (Chaucer) 14, Conrad, Joseph 91
163 Constable, Catherine 136
Capote, Truman 70 consumerism 42–3, 103, 114–15,
Carroll, Noël, 28–9 131–2, 134, 154, 214, 227, 231–3,
cartoons 8–18, 126–8, 132, 185, 191 238–40, 243–5
Casualty (BBC) 162 Coogan, Steve 48
Cave of Time, The (Packard) 95 Cooper, Alice 207
CD (compact disc) 211, 224 Coppola, Sofia 185
Ceefax 80–3, 99, 188 Lost in Translation 185–6
cell phone 104, 113, 142, 189, 200–1, Marie Antoinette 185
204, 230, 237 Coronation Street (ITV) 162
CGI (computer-generated Cortázar, Julio 94–5
imagery) 139, 149, 153, 173–86, Rayuela [Hopscotch] 94–5
196 Cousins, Mark 172–3, 186
Channel 4 84, 148, 154, 196, 198 critical modernism 45
Chao, Manu 217 critical realism 44
Charlie’s Angels (2000 film, McG) 14 Crouching Tiger, Hidden Dragon
chat rooms 52, 103, 105–6, 113, 117, (Lee) 158, 184
134, 142, 148, 171 Crowther, Paul 43–4
chess 63, 168–9, 171 Cruise, Tom 229
Childish, Billy 24, 26 cultural theory 27–36, 136
children’s entertainment 8–18, 95–9, Cummings, Dolan 243
125–39, 153, 155, 158, 174, 176, Cunningham, Valentine 28–30
181–2, 186, 221, 243, 245 British Writers of the Thirties 29
Choose Your Own Adventure (novel Reading after Theory 28, 30
series, Bantam) 95 Curtis, Ian 212
Christensen, Hayden 181 Curtis, Richard 192
Christie, Agatha 119, 191
Chronicles of Narnia, The (film series, Dallas (CBS) 162
Walden Media) 153, 176 Dating Game, The (ABC) 14, 126
City of God (Meirelles and Lund) 177 Davies, Gill 95–6
Clash, the 206, 216 Da Vinci Code, The (Brown) 221, 237
The Clash 210 Dawkins, Richard 35, 237–8
Cloverfield (Reeves) 121, 177 The God Delusion 237
274 Index

Death of the Critic, The Dylan, Bob 129, 207–10, 212


(McDonald) 30 Bringing It All Back Home 212
Debord, Guy 166 Highway 61 Revisited 209–10
deconstruction 39, 41, 57 “Mr Tambourine Man” 209
Deep Impact (Leder) 177 Dynasty (ABC) 162
Deep Throat (Damiano) 76
Defoe, Daniel 138 Eagleton, Terry 28–36, 38, 55
Deleuze, Gilles 37, 40 After Theory 28–36
Derbyshire, Victoria 202–6 “Capitalism, Modernism and
Derrida, Jacques 28, 34, 37–41, 57, Postmodernism” 32–3
69, 207 The Illusions of Postmodernism
Diaz, Cameron 14 33, 38
Dickens, Charles 161–2, 191, 219 Literary Theory 30–1, 34–5
Dick, Philip K. 59, 229 The Meaning of Life 35
digital backlot 182–3 Saints and Scholars 34
Discworld (novel series, Wittgenstein (screenplay) 34
Pratchett) 159, 161 earnestness 150–5, 158, 176–7, 186,
Disney 13–16 221
The Jungle Book 183 EastEnders (BBC) 162
Snow White and the Seven Easton Ellis, Bret 45
Dwarfs 14 e-book 68, 219
Divine Comedy, the 213 Eco, Umberto 109, 125, 193, 218
Docusoap 22, 48, 141–4, 147, Foucault’s Pendulum 218
153–4, 188 Elder Scrolls, The (Bethesda) 158
Dogme 95 18–24, 26–7, 120, 186 Eliot, T. S. 208
Doherty, Pete 215 e-mail 69, 71, 148, 189, 202, 204, 206
Doors, the 208 Emin, Tracey 26
Douglass, Frederick 244 Emmerich, Roland 174, 177, 182
DreamWorks 12–17, 126, 128 The Day after Tomorrow 177
Flushed Away 16 Godzilla 174
Madagascar 16 Independence Day 174–5, 177
Over the Hedge 17 10,000 BC 182
Shark Tale 16 Encyclopedia Britannica 118, 120
Shrek 12–16, 137 Endemol 191
Shrek 2 16, 47 endlessness, digimodernist
Shrek the Third 17 textual 155–65, 198, 221
“Drive” (BBC 5 Live) 203–4 “End of Books, The” (Coover) 221
Duck Soup (McCarey) 63 engulfment, digimodernist textual 79,
Dungeons and Dragons 75 83, 148–9, 154–5, 168, 180, 226
DVD 63–4, 199–200, 240 Enron: The Smartest Guys in the Room
box-set 199–200 (Gibney) 185
Dyer, Geoff 23 ER (NBC) 163
Index 275

Ergodic literature 53–4, 223 Gainsbourg, Serge 209


Eshelman, Raoul 39–41 Gang of Four 206
“Performatism, or the End of Entertainment! 215
Postmodernism” 39–40 Gans, Eric 40
Performatism, or the End of Garland, Alex 23
Postmodernism 39–40 generative anthropology 40
evanescence 52, 69, 75, 81, 83, 89, 105, Gervais, Ricky and Merchant,
171, 190, 205, 226 Stephen 144–5
Extras 145
Facebook 71, 121–3 The Office 144–5
Face/Off (Woo) 136 Gibson, Mel 237
Fahrenheit 9/11 (Moore) 184 Apocalypto 237
Family Guy (Fox) 132 The Passion of the Christ 237
Fantastic Four (film series, 20th Glass, Philip 125, 207
Century Fox) 179 Godard, Jean-Luc 20, 92, 139,
Farrell, Colin 181 185, 207
Fast Show, The (BBC) 132 Week End 185
Fawlty Towers (BBC) 52, 159 Golden Compass, The (2007 film,
Fellini, Federico 47, 185, 191 Weitz) 153
8½ 47, 187 Google 240
feminism 70, 128, 140, 245 Google Books 219, 221
Final Fantasy (franchise) 128, 175 Gore, Al 54, 184
Fitzgerald, Michael 229 An Inconvenient Truth 184
Fleming, Ian 193 Gorillaz 216–17 [See also Albarn,
Forster, E. M. 191 Damon]
Foucault, Michel 32, 37, 59–60, 69, grand narratives 31, 179, 234–40
114, 238, 244 Green, Alan 205
Fowles, John 59, 93, 149 Greenaway, Peter 47, 172, 185
The French Lieutenant’s Woman The Draughtsman’s Contract 47
(novel) 13, 46, 59, 149 Greene, Graham 92, 115
The Magus 140 Green, Jeremy 48
Franz Ferdinand 207, 215 Green Wing (Channel 4) 198–9
Franz Ferdinand 215 Greer, Germaine 157, 159
French Kiss (Kasdan) 151 Gregson, Ian 46–7
French Lieutenant’s Woman, The (1981 Postmodern Literature 46–7
film, Reisz) 47 Griffiths, D.W. 172, 185
Friends (NBC) 132–33, 160, 163–5 Groening, Matt 18, 125
Frith, Simon 130, 207, 212 Guinness, Alec 160
Art into Pop 130 Gulliver’s Travels (Swift) 137–8
Fukuyama, Francis 242
fundamentalism, religious 31, 35, 236–8 Habermas, Jürgen 2
Funny Games (1997 film, Haneke) 177 Hacking, Ian 228
276 Index

Haddon, Mark 136, 221, 228, 233 hypermodernity 41–3


The Curious Incident of the Dog in hypertext 95, 221–3
the Night-Time 136, 221, 228–9,
232–3 I am Legend (2007 film,
haphazardness 52, 84, 86–7, 89, 92, Lawrence) 181
112, 119, 121, 141–2, 160, 168, Ice Age (Blue Sky) 17
171, 188, 198, 203, 205 Ice Age: The Meltdown (Blue Sky) 8, 17
happenings 75, 93 I Love Lucy (CBS) 135
Happy Feet (Animal Logic) 17–18 IMDb (International Movie
Happy Mondays 130, 212 Database) 220
Happy slapping 142 improvisation 84–7, 98–100, 141, 144,
Hard Day’s Night, A (1964 film, 198
Lester) 129 Intellectual Impostures (Sokal and
Hardy, Oliver 18 Bricmont) 38
Harris, John 214 Internet 46, 64–8, 71–2, 80, 83, 89,
Harry Potter (film series, Warner 101–23, 131, 148, 187, 189–90,
Bros.) 126, 128, 152–3, 176 200–1, 211, 217, 219, 221–2, 230,
Harry Potter (franchise) 160 237, 240 [See also Web 2.0]
Harry Potter (novel series, iPod 104, 200, 211, 230
Rowling) 136–8, 161–2, 221 Iser, Wolfgang 55–6
[See also Rowling, J. K.] Islamism 31, 46
Harvey, David 40 iTunes 213
Hassan, Ihab 150 ITV 82, 197–8
Hillis Miller, J. 57
Hill Street Blues (NBC) 163 Jackass (franchise) 142–43, 148
hip-hop 88, 145, 207 Jackson, Peter 139, 158, 176, 180–1
Hirst, Damien 26 King Kong 128, 149, 161, 174, 176
Hitchcock, Alfred 139, 185 Lord of the Rings (film trilogy) 126,
Vertigo 23 139, 152–3, 158, 160–1, 176,
Hitchens, Christopher 237–8 180–1, 184, 186
God is Not Great 237 Jagger, Mick 146
Hoffman, Dustin 172, 229 James, Henry 117, 151
Hollyoaks (Channel 4) 162 Jameson, Fredric 1–2, 10, 27, 32, 34,
Homer 138, 156–9 40–1, 48, 85, 88, 125, 149, 166,
The Odyssey 41, 157–8 206–7, 215
Hopkins, Anthony 182 “Postmodernism, or the Cultural
Houellebecq, Michel 237 Logic of Late Capitalism” 32
Plateforme 237 Postmodernism, or, The Cultural
house music 87–9, 99 Logic of Late Capitalism 206
House of Flying Daggers (Yimou) 158, Signatures of the Visible 125
184 Jane Austen Book Club, The (2007 film,
Hutcheon, Linda 48, 88 Swicord) 219–20
The Politics of Postmodernism 48 Jazz 100, 206, 209, 216
Index 277

Jencks, Charles 44–5 López, José 44


Jennings (novel series, Lost (ABC) 163
Buckeridge) 137 Lucas, George 127, 160, 179–80
John, Elton 214 [See also Star Wars, etc.]
Johnson, B.S. 23, 89–95, 99, 192 Lumière brothers 183–4, 187
The Unfortunates 89–95, 99, 191, Lyotard, Jean-François 27, 32, 38,
222 40–1, 43, 206, 226, 234–6
Jolie, Angelina 182 The Postmodern Condition 234–6
Jones, Dylan 213
Joseph, Keith 229 MacCabe, Colin 54–5
Joyce, James 91, 158, 167 McEwan, Ian 24, 41
Ulysses 64–5, 91, 158, 167 McGregor, Ewan 160
“Just Like Honey” (Jesus and Mary McKenzie, Scott 129
Chain) 185 McMahon, Alison 158
Madame Bovary (Flaubert) 52, 107
Keen, Andrew 120–1 Madonna 130, 135, 200
Kelley, David E. 196 Maguire, Tobey 153
Kelly, Kevin 224 Malkovich, John 182
Kermode, Frank 28, 33 Marcotti, Gabriele 205
Kerouac, Jack 70 Marx Brothers 63, 175
Knowing Me, Knowing You . . . with Massumi, Brian 235
Alan Partridge (BBC) 48 Méliès, Georges 183
Kraftwerk 88 message boards 52, 68, 72, 103,
Kubrick, Stanley 158, 185 105–11, 118–19, 142, 171,
195, 202
Lacan, Jacques 40, 69 Miller, Michael 119
Laïdi, Zaki 226 Minogue, Kylie 216
Landow, George 221 Mitchell, David 196
Larry Sanders Show, The 144 modernism 36, 38, 48, 91, 125, 129,
Lasch, Christopher 67 150, 167, 176, 187, 196, 208, 218,
Led Zeppelin 210–11 220, 225–6, 230
Libertines, the 216 modernity 2–3, 42–4, 208,
The Libertines 216 225–6
Linklater, Richard 183 Monkees, the 13
A Scanner Darkly 183 Monty Python’s Flying Circus
Waking Life 183 (BBC) 199
Lipovetsky, Gilles 5, 41–3 Moulin Rouge (Luhrmann) 18
L’Ecran Global 43 “Mr Tambourine Man” (Byrds) 212
Hypermodern Times 5, 41–3 MTV 8, 161, 200
Literary in Theory, The (Culler) 28 Mummy, The (1999 film,
Little Britain (BBC) 132 Sommers) 134, 149, 175–6, 184
Litt, Toby 23 Murray, Stuart 232
Living in the Sun (BBC) 154 My Family (BBC) 190
278 Index

Myrick, Daniel and Sánchez, Eduardo 143–4
  The Blair Witch Project 22, 143–5, 148
MySpace 121
myth/mythology, digimodernist 137–9, 152–4, 158, 161, 170, 176–7, 179–80, 182–3, 221

Nabokov, Vladimir 115–16, 171–2
  Lolita 116
Neighbors (Network Ten) 162
Net, Blogs and Rock ’n’ Roll (Jennings) 103
new aestheticism 30
New Aestheticism, The (ed. Joughlin and Malpas) 30, 34
New Puritans, the 22–4, 26–7
new sincerity 151
Nirvana 126, 212
No Going Back (Channel 4) 154
Norris, Christopher 28, 38
  What’s Wrong with Postmodernism 38
Not Saussure (Tallis) 38

Oasis 130, 211, 213, 230
  Be Here Now 211, 213
  Definitely Maybe 211
Obama, Barack 214
One A.M. (Chaplin) 175
onwardness 52, 83, 102–5, 107, 111–12, 117, 159, 162, 168, 171, 203, 205, 226
Open Season (Sony) 17
Orwell, George 11–12, 193
  Animal Farm 11–12
Osbourne, Ozzy 214
O’Toole, Laurence 79–80
  Pornocopia 79–80

Pan’s Labyrinth (del Toro) 176
pantomime 95–9, 191
Parachutes (Coldplay) 214–15
Paramount 121
Pavane (Roberts) 138
Peep Show (Channel 4) 196–7
Pepys, Samuel 111–12
Perec, Georges 23
Performance (Cammell and Roeg) 54–5
performance capture 182
performatism 39–41
Peter Jackson’s King Kong (Ubisoft) 169
Pet Sounds (Beach Boys) 215
Picasso, Pablo 8, 173
Picture of Britain (BBC) 189–90
Pilgrim’s Progress (Bunyan) 156
Pink Floyd 207, 210, 213, 215
  “Arnold Layne” 213
Pirates of the Caribbean (film series, Walt Disney Pictures) 126, 128, 149, 161
Pirates of the Caribbean: The Curse of the Black Pearl (Verbinski) 1
Pixar 8–10, 15–17, 126, 128, 185
  Cars 16–17
  Finding Nemo 16
  The Incredibles 16–17
  Ratatouille 17
  Toy Story 8–10, 128
  Toy Story 2 9–10
Player, The (Altman) 47
Pleasantville (Ross) 226
podcasts 200–1, 205
Pong (Atari) 169
pornography 42, 75–80, 99, 142
Portman, Natalie 160
postcolonialism 74, 128
postfeminism 14, 137, 232
Postmodernist Fiction (McHale) 46
post-structuralism 27, 31, 33–4, 37–8, 40–1, 43, 56, 58–60, 69, 114–15, 124, 163, 222, 238, 246
post-theory 27–36, 222
Propp, Vladimir 13, 193
  Morphology of the Folk Tale 13
pseudoautism 54, 79, 229–31, 245
pseudonymity, digimodernist textual 52, 79–80, 106–7, 112, 202
pseudoscience, digimodernist textual 141, 147–8, 154–6, 158, 185
Pullman, Philip 15, 136–8, 161
  The Amber Spyglass 138
  Clockwork: Or All Wound Up 15
  The Golden Compass (Northern Lights) 138
  His Dark Materials 136–8, 160, 221, 236
Pulp 213
  “Common People” 213
Pump up the Volume (Bidder) 89
“Pump up the Volume” (M/A/R/R/S) 87
Pynchon, Thomas 51, 115–17, 140
  The Crying of Lot 49 115–16, 140
  Gravity’s Rainbow 51

Radiohead 211, 216–17
  Amnesiac 217
  Hail to the Thief 217
  In Rainbows 217
  Kid A 217
  OK Computer 211, 217
Rainbow, The (Lawrence) 92
Rain Man (Levinson) 229, 232
Rand, Ayn 109, 191
  Night of January 16th 191
reader response theory 54–5
reality TV 60, 79, 141–4, 147–8, 153, 160, 188, 192, 200, 231
reception theory 55
Reed, Lou 216
remodernism 25–6
Renoir, Jean 175
Richard & Judy (Channel 4) 219
Ricks, Christopher 207
Rocky Horror Picture Show, The (Sharman) 75
Rodin, Auguste 26
Rolling Stones 129, 207–8, 210–13, 215
  Aftermath 210
  Exile on Main Street 211
  “Play with Fire” 213
  “Satisfaction” 208, 212
  “Sympathy for the Devil” 215
Rome (BBC/HBO/Rai) 149
Rorty, Richard 150
rotoscoping 183
Rowling, J.K. 136–8, 162 [See also Harry Potter etc.]
  Harry Potter and the Sorcerer’s Stone 137
Rushdie, Salman 23–4, 207, 237
  Midnight’s Children 23
Russell, Ken 161
Russian Ark (Sokurov) 186
Ryan, Meg 151
Ryman, Geoff 222–3
  253 222–3

Saatchi, Charles 26
Saussure, Ferdinand de 34, 38, 220
Schaeffer, Francis 7
Scorpion King, The (Russell) 184
Scorsese, Martin 172, 185, 207
Scott, Ridley 178–9, 228–9
  Blade Runner 9, 186, 228–9
  Gladiator 152, 178
  Kingdom of Heaven 179
Scream (Craven) 143
Secker & Warburg 92–3
Second Life 102
Seinfeld (NBC) 133, 135, 199
Selden, Raman 55
self-scheduling 200
Sergeant, Jean-Claude 235
Serota, Nicholas 24, 26
Sessions, John 85
Sex and the City (HBO) 119, 133, 160, 163
Sex Pistols, the 215
Shakespeare, William 219, 221
Shaun of the Dead (Wright) 154
Shelley, Mary 173
Sherman, Cindy 139–40
  Untitled Film Stills 139–40
Simpsons Movie, The (Fox) 18
Simpsons, The (Fox) 10, 18, 46, 126, 132, 155, 162–3
Sim, Stuart 150
Sing-a-long-a Sound of Music 75
Singin’ in the Rain (Kelly and Donen) 181
Skinner, Frank 198
Sky Captain and the World of Tomorrow (Conran) 182–3
Smiths, the 87–8, 212, 214
  “Panic” 87
SMS (text message) 69–71, 113, 148, 189, 191, 202, 204, 206
Snow Patrol 214–15
  Final Straw 214–15
social networking sites 103, 121–3, 142, 220
Songs of Praise (BBC) 135
Sopranos, The (HBO) 163
Sound of Music, The (Wise) 135
South Park (Comedy Central) 132
Space Invaders (Taito) 175
Spears, Britney 131, 215
Spector, Phil 129
Spellbound (Blitz) 185
Spice Girls, the 130–1
  Spice 130
Spider-Man (film series, Sony Pictures) 126, 153
Spider-Man 2 (Activision) 169
Spielberg, Steven 127, 173–4, 177–8
  Close Encounters of the Third Kind 169, 174
  Jurassic Park 9, 173–4
  Raiders of the Lost Ark 175
  The War of the Worlds 177
Springsteen, Bruce 212
Spurlock, Morgan 184–5
  Super Size Me 184–5
Star Wars Episode I: The Phantom Menace (Lucas) 152
Star Wars Episode II: Attack of the Clones (Lucas) 180
Star Wars Episode III: Revenge of the Sith (Lucas) 180
Star Wars Episode IV: A New Hope (Lucas) 127, 160
Star Wars (film series, 20th Century Fox) 115, 128, 160–1, 179–80
Star Wars (original film trilogy, 20th Century Fox) 9, 169, 179–80
Star Wars (prequel film trilogy, Lucas) 126, 152–3, 160–1, 179–80
Stinky Cheese Man and Other Fairly Stupid Tales, The (Scieszka and Smith) 15
St. Matthew Passion (Bach) 237
Stone, Oliver 140, 177, 179
  Alexander 152, 179
  JFK 140
  Natural Born Killers 177
Stone Roses, the 130, 212
Street-Porter, Janet 118
Streets, the 216
Strokes, the 214–15
  Is This It 214–15
stuckism 24–7, 40
Suede 130, 208, 212
supermodernity 44
super-subjectivity 169–71
Swinburne, Algernon Charles 124, 211

Talking Heads 206
  Fear of Music 215
Tamara (Krizanc) 98
Tarantino, Quentin 126, 155, 177, 207
  Pulp Fiction 64, 155
Tate Britain 26, 189
Technotronic 88
  “Get Up (Before the Night is Over)” 88
  “Pump up the Jam” 88
  “This Beat is Technotronic” 88
Teletext 82
Ten (Kiarostami) 187
Thatcher, Margaret 229
That’ll Teach ’Em (Channel 4) 148
Theory’s Empire (ed. Patai and Corral) 28–9, 36
This Film is Not yet Rated (Dick) 184
Thomson, Charles 24, 26
Thorne, Matt 22–4
300 (Snyder) 183–4
Thynne, Jane 201
Timecode (Figgis) 187
Tolkien, J.R.R. 18, 156–9, 161, 180–1
  Lord of the Rings (novel series) 156–8, 160
Tony ’n’ Tina’s Wedding (Allen et al.) 98
Top of the Pops (BBC) 130
Transformers (Bay) 128, 135, 242
Trial, The (Kafka) 93
Tristan and Isolde (Reynolds) 184
Tristram Shandy (Sterne) 15, 47, 98–9
Troy (Peterson) 152, 179
Truffaut, François 20, 47
  La Nuit Américaine (Day for Night) 47
Tudors, The (Showtime/BBC/TV3/CBC) 149
Twister (de Bont) 177

UGC (user-generated content) 66, 71–2, 80, 83, 120
Usborne Books 136

VCR 63, 82, 188, 200
Verve, the 213, 230
  “Bitter Sweet Symphony” (song) 213, (video) 230
“Victoria Derbyshire” (BBC 5 Live) 202–6
“Video Killed the Radio Star” (Buggles) 200
Vinterberg, Thomas 18–21, 24
  Festen (The Celebration) 19, 21
  It’s All about Love 21
Von Trier, Lars 18–21, 24, 186
  The Boss of it All 186–7
  Dancer in the Dark 21
  The Idiots 19, 21–2

Wachowski, Andy and Wachowski, Larry 45, 141
  The Matrix 14, 143
  The Matrix (film series) 45, 140, 152–3, 158, 161
Wakefield, Andrew 228
Wal-Mart: The High Cost of Low Price (Greenwald) 184
Walters, Ben 144
“Wardrobe, The” (Tokarczuk) 41
Warhol, Andy 183, 190
  Empire 183
Warner Bros. 55
War of the Worlds, The (Wells) 174
Web 2.0 6, 54, 60, 65–6, 71, 101–23, 134, 142, 154, 166, 202, 204, 222 [See also Internet]
Webb, Jean 15
Webb, Robert 196–7
Weir, Peter 141
  The Truman Show 140, 191
Welles, Orson 175, 185
  Citizen Kane 23, 53, 65, 159, 184
West Wing, The (NBC) 160, 163, 199
White Stripes, the 215
  Elephant 215
Whose Line is It Anyway? (Channel 4) 83–7, 99, 191, 198
Who Wants to be a Millionaire? (ITV) 197–8
Wife Swap (Channel 4) 147, 154
Wikinomics (Tapscott and Williams) 103
Wikipedia 23, 40, 57, 60, 71–2, 103, 105, 112–20, 123, 134, 142, 151, 155, 171, 220
Wilson, Brian 215
Wilson, Tony 48
Winehouse, Amy 215
  Back to Black 215
Winfrey, Oprah 219
Winterbottom, Michael 47–8, 186
  9 Songs 186
  Tristram Shandy: A Cock and Bull Story 47–8
  24 Hour Party People 48
Wittgenstein, Ludwig 34–5, 37, 65, 220
Wolfe, Tom 118
Wood, Michael 167
Woolf, Virginia 111, 221
  Mrs Dalloway 126, 218
word processor 70, 218
World of Warcraft (Blizzard) 158

X Factor, The (ITV) 131, 214
X-Men (film series, 20th Century Fox) 126, 152–3, 179

Yang, Jonathan 111
“Yellow Wallpaper, The” (Perkins Gilman) 41
YouTube 22, 60, 71–2, 102–3, 118–21, 123, 142–3

Zittrain, Jonathan 121
Žižek, Slavoj 28, 30, 33
  The Fright of Real Tears 28
