
PHILOSOPHER CHANGES COURSE OF CANADIAN HISTORY BY INFORMING RECENT SUPREME COURT ASSISTED-DEATH DECISION
In 2011, the B.C. Civil Liberties Association (BCCLA) filed a lawsuit claiming that physician-assisted dying should be
legal. That case ended up before the Supreme Court of Canada last fall, and on February 6, 2015, the BCCLA won
their case. The landmark decision overturned the Supreme Court’s 1993 ruling that physician-assisted suicide is illegal.
The court has suspended its decision for 12 months to give lawmakers a chance to write new laws that reflect the
ruling.

Wayne Sumner, University Professor Emeritus in the Department of Philosophy, offered to help the BCCLA with their
case in 2011 and became directly involved in their efforts. He used arguments he developed in his book, Assisted
Death: A Study in Ethics and Law (2011), to provide expert testimony that played a significant role in the Supreme
Court’s decision. Sumner spoke to A & S News about the decision and his contribution to it.

This has been called a landmark ruling. What is the significance of the Supreme Court’s decision?

It is historic. It changes the landscape in Canada in pretty profound ways. Up to this point there have been a variety of
things that can be done for patients experiencing significant suffering in the last stages of their lives, including a
number of things that would actually hasten their deaths, such as honouring a patient’s refusal of further life-sustaining
treatment, terminal sedation, and the use of painkillers to the extent that it might compromise life. All of these were
legal and widely regarded as ethical, but the law had drawn a firm line between them and any form of physician-
assisted death, whether it’s physician-assisted suicide or euthanasia. What this judgment does is to cross that line.
Competent adults who are able to make a clear request and who experience intolerable suffering from an irremediable
disease will now have a right to physician-assisted suicide and physician-administered euthanasia. The judgment
makes options available to these patients that they didn’t have before.

It is rare for an academic philosopher to contribute to a case before the Supreme Court. How did you come to
be involved?

I’ve been teaching and writing in bioethics for a long time. In the 1990s, I started teaching courses specifically on end-
of-life issues and developed my own views on physician-assisted dying. When I retired from teaching in 2008, I
decided to put together the views that I wanted to defend on these questions, so I wrote Assisted Death trying to
systematically provide a case that physician-assisted death is ethical and should be legal. Those were the two aims of
the book. At just about the time the book appeared, in 2011, I learned that the BCCLA was mounting a challenge to the
constitutionality of the Canadian laws covering assisted death. I wrote an email to tell them that I had just published a
book on this very question and that I would be delighted to help out in any way I could.

Joseph Arvay, the lead counsel for the BCCLA, asked me to serve as an expert witness in ethics in the case before the
B.C. Supreme Court. Expert witnesses are usually expert on facts — it’s usually scientists of various kinds who are
recognized by the court. Joe didn’t know of any case in which anyone had ever been recognized by a Canadian court
as an expert witness on an ethical issue, so this might have been a first.

Part of my contribution was to write an ethical opinion that condensed material from my book. I argued that there are
no significant ethical differences between assisted death and various other end-of-life treatment options that can have
the effect of hastening death. The argument established that the legal distinction between physician-assisted death and
other end-of-life treatments is not grounded in the ethics of these practices. This is what they used at the B.C. Supreme
Court trial. Madam Justice Lynn Smith ended up agreeing with us, concluding that there was no ethical difference
here.

So that was my role. I was an expert witness for the plaintiffs on ethics. My testimony, my evidence was incorporated
into the B.C. trial judge’s decision and all of her findings, including my arguments, went forward to the Supreme Court
of Canada.

What is it like to make a significant contribution to a Supreme Court ruling as an academic philosopher?

I think it’s just super-cool. I have always held the view that philosophers should try to make a difference in matters of
public policy, that we have skills that we can bring to the table and that it’s a shame if we don’t do that on whatever
issue happens to animate us. I’ve advocated that for a long time, but I never dreamed that I could be part of anything as
momentous as this decision. This is a high-water mark of my career as a public intellectual.

What do you make of the public discourse responding to the decision?

I think in one respect the actual reporting of the case has been misleading, by emphasizing physician-assisted suicide.
The 1993 Supreme Court case challenged only the prohibition of assisted suicide. This time the challenge was also to the provision in the Criminal Code that prohibits someone from consenting to their own death, so the laws that were struck down had to do with both physician-assisted suicide and euthanasia.

The upshot for legal and medical practices seems huge. Where do we go from here?

Obviously this decision is not one that the Harper government is keen to act on. I hope that they take the conditions
that the court laid down seriously and write them into the law. But even if they do, there are still going to be many
questions about how provincial ministries of health should operationalize these practices. Do you require a second
opinion as a matter of law? How do you ensure that the patient is competent and able to make his or her own
healthcare decisions? All of these questions are up in the air. There is the initial euphoria that the old laws have been
struck down and then you wake up the next day and wonder how we’re going to get it right.
Nostalgia Just Became a Law of Nature
New theories have mixed perception and knowledge into the hardest of sciences.

John Ruskin called it the pathetic fallacy: to see rainstorms as passionate, drizzles as sad, and melting streams as innocent. After all, the intuition went, nature has no human passions.
Imagine Ruskin’s surprise, then, were he to learn that the mathematics of perception,
knowledge, and experience lie at the heart of modern theories of the natural world. Quite
contrary to his stern intuition, quantitative relationships appear to tie hard, material laws
to soft qualities of mind and belief.

The story of that discovery begins with the physicist Ludwig Boltzmann, soon after
Ruskin coined his phrase at the end of the 19th century. It was then that science first
strived, not for knowledge, but for its opposite: for a theory of how we might ignore the
messy details of, say, a steam engine or chemical reaction, but still predict and explain
how it worked.

Boltzmann provided a unifying framework for how to do this nearly singlehandedly before
his death by suicide in 1906. What he saw, if dimly, is that thermodynamics is a story not
about the physical world, but about what happens when our knowledge of it fails. Quite
literally: A student of thermodynamics today can translate the physical setup of a steam
engine or chemical reaction into a statement about inference in the face of ignorance.
Once she solves that (often simpler) problem, she can translate back into statements
about thermometers and pressure gauges.
The ignorance that Boltzmann relied upon was maximal: Whatever could happen, must
happen, and no hidden order could remain. Even in the simple world of pistons and
gases, however, that assumption can fail. Push a piston extremely slowly, and
Boltzmann’s method works well. But slam it inward, and the rules change. Vortices and
whirlpools appear, streams and counter-streams, the piston stutters and may even stall.
Jam the piston and much of your effort will be for nothing. Your work will be dissipated in
the useless creation and destruction of superfluous patterns.

[Image] INTO TURBULENCE: Water collapses into turbulence in an ASCII fluid simulator written by Yusuke Endoh. Over time, as waves and streams become splashes and puddles, the original patterns dissipate and the work that went into their arrangement is lost forever. (Credit: Yusuke Endoh)

How that law works itself out—how waste occurs in the real world, beyond the ideal—
was unavailable to Boltzmann. The thermodynamics of the 19th century needed to wait
for equilibrium to return, for all of these evanescent and improbable structures to fade
away. Our ignorance in the equilibrium case is absolute: We know that there is nothing
more to know. But in a world out of equilibrium we know there is something more to
know, but we do not know it.
Non-equilibrium thermodynamics was unexplored territory for many years. Only in 1951, 45 years after Boltzmann’s death, were we able to describe how small adjustments that kick a system ever so slightly out of equilibrium vanish in time, through something called the Fluctuation-Dissipation Theorem. Compared to quantum mechanics or relativity, neither of which existed as a subject until around the time of Boltzmann’s death, but which threw up surprise after surprise for decades, thermodynamics appeared to operate on glacial scales.
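
For readers who want the shape of that result, here is a hedged gloss in notation of my own (nothing below is drawn verbatim from the essay): one standard classical statement of the theorem ties the equilibrium fluctuation spectrum $S(\omega)$ of an observable to the dissipative part $\chi''(\omega)$ of the system's linear response to a weak perturbation at frequency $\omega$,

$$S(\omega) = \frac{2 k_B T}{\omega}\, \chi''(\omega),$$

so that how a system spontaneously jitters at temperature $T$ fixes how it damps a gentle kick, which is exactly the slightly-out-of-equilibrium regime described above.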

By the opening of the 21st century, however, a science based on a theory of inference
and prediction had entered a renaissance, driven in part by rapid progress in machine
learning and artificial intelligence. My crash course in the new thermodynamics came
from a lecture by Susanne Still of the University of Hawaii in 2011. It was there that Still
announced, with her collaborators, a new relationship between the dissipation in a
system (the amount of work we do on it that is wasted and lost) and what we know about
that system.[1]
Still and her collaborators showed how dissipation is bounded by
the unnecessary information a system retains—information irrelevant to the system’s
future behavior. With an admirable poetry of mind, they referred to this as nostalgia, the
memories of the past that are useless for the future. What they showed is that the
nostalgia of a system puts a minimum on the amount of work you will lose in acting on it.
Quite literally, work: Whirlpools set up when you jam down a piston are not (yet) lost—
with careful tracing, one can jitter the piston back to recover their energy. It is only over
time, as they break apart and become unknowable, as they become increasingly
unpredictable to you, that the work that went into their creation is lost.
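
A hedged sketch of the bound itself, in notation of my own choosing rather than the essay's: write $I_{\mathrm{mem}} = I(s_t; x_t)$ for the mutual information the system's state $s_t$ retains about the current driving signal $x_t$, and $I_{\mathrm{pred}} = I(s_t; x_{t+1})$ for what that state predicts about the next signal. As I understand the result of Still and her collaborators, the average work dissipated in a driving step is bounded below by the nostalgia, the retained-but-useless difference:

$$\beta\, \langle W_{\mathrm{diss}} \rangle \;\geq\; I_{\mathrm{mem}} - I_{\mathrm{pred}},$$

with $\beta = 1/k_B T$. Memory that never pays off as prediction is paid for, at a minimum, in dissipated work.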

Here is the melancholy of a forgotten memory, a childhood room packed into boxes, the
irrecoverable details of an afternoon drizzle, appearing quite literally in physical law.
Those memories trace a past that no longer matters. Brought to our attention, they tell us
a story of loss, and threaten to consume our present, if we’re not careful. In physics, too,
nostalgia carries a penalty.

We can invert this result: Dissipation is connected to how hard it is to recover what a
system used to look like. In order to rewind, one needs to retrodict, to predict backward—
and complete retrodiction is impossible when nostalgia means that many different pasts
are compatible with the same future. The exact mathematical relations between
nostalgia, irreversibility, and dissipation are elaborate, specific, and always a bit of a
surprise when they fall out of the equations.

One of the more remarkable contributions to the new thermodynamics in recent years has come from Jeremy England of the Massachusetts Institute of Technology, published the year after Still’s group’s work.[2] Focusing on the irreversibility of a system, rather than its
nostalgia, England put together an account of the biological world. He described the
ways in which evolution might drive organisms not only to make use of the free energy in
their environments, but to do so in a maximally dissipative fashion.
England’s work seems to explain why, over 3 billion years, our ecosystem turned into a
giant green solar panel, feeding towers of herbivores and predators as part of a natural
process that smears out the energy of the sun. We exist, in this interpretation, because
we dissipate as reliably as possible the massive source of work at the center of our solar
system.
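
For the technically inclined, a reconstruction of the kind of inequality England's argument rests on (my paraphrase, not a quotation from England or from this essay): for a driven transition between coarse-grained states $\mathrm{I}$ and $\mathrm{II}$, with forward and reverse transition probabilities $\pi(\mathrm{I}\to\mathrm{II})$ and $\pi(\mathrm{II}\to\mathrm{I})$, heat $\langle \Delta Q \rangle$ released into a bath at inverse temperature $\beta$, and internal entropy change $\Delta S_{\mathrm{int}}$, a generalized second law of roughly this shape holds:

$$\beta\, \langle \Delta Q \rangle_{\mathrm{I}\to\mathrm{II}} + \ln\frac{\pi(\mathrm{II}\to\mathrm{I})}{\pi(\mathrm{I}\to\mathrm{II})} + \Delta S_{\mathrm{int}} \;\geq\; 0.$$

The harder a process is to reverse, the more heat it must shed; self-replication is spectacularly hard to reverse, which is the sense in which good replicators are pushed toward dissipation.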

While Still’s work connects nostalgia to dissipation and loss, England’s work seems to
say that life itself is brought into being by the demands of dissipation. Beings like us exist
precisely because we create our worlds—physical, chemical, biological, mental, social—
and tear them down faster than the alternatives. Nostalgia may be bittersweet, but it may
also underwrite our existence.

The Philosopher and the Jihadist
By Jacob Rogozinski

In a recent op-ed (Le Monde, January 28), Alain Badiou describes the murder of the Charlie Hebdo journalists and of the Jews at the kosher supermarket as a “fascist crime.” Never mind that the killers claimed allegiance to Al-Qaida and to Daesh, never mind that they gave their act a religious meaning (“we have avenged the Prophet!”): as if nothing had changed since the 1930s, our philosopher sees nothing in it but “fascism.” He persists, in effect, in resurrecting the old bloodstained name of “communism” and in branding as “fascist” whatever stands in its way. An obstinacy that leaves him deaf and blind to what is new and singular in the present situation.

According to the archeo-Marxist dogma he clings to, fascism is merely an avatar of capitalist domination: the “murderous armed gangs” and the democrats who mobilize against their crimes “belong to the same world, that of predatory capitalism.” Between them, no real opposition: their interests “are everywhere the same”; all of them would be playing parts in the same “historical play in trompe-l’œil,” so that the Republic — the “republican totem” — would be worth scarcely more than those who attack it.

Badiou thus ends up treating the murderers and their victims as equivalents. This neither-nor posture amounts in fact to taking sides with one of the two forces present. By dwelling on the police harassment suffered by young people in the banlieues, he implies that the killers were merely responding to a prior aggression. They were asking for it, those pornographer dogs at Charlie Hebdo who did nothing “but bark along with these police mores in the ‘amusing’ style of jokes with sexual connotations.” Such indulgence toward murderers will hardly surprise us, coming from a man who long apologized for the Khmer Rouge and persists in celebrating the Chinese “Cultural Revolution,” with its persecutions and massacres, as one of the most glorious events of the 20th century. These Stalino-Maoist blinkers must be set aside if we want to understand the phenomenon of jihadist terrorism.

A SELF-STYLED CALIPH
A paradoxical phenomenon, one rooted in Islam even as it disfigures it. This is what the countless crowds who marched to denounce it understood, without ever attacking Islam as such. We must in fact distinguish between jihadism, which is an apparatus of power, and the Muslim religion, which is an apparatus of belief, on the same footing as the other religions. Certainly, these two kinds of apparatus can prop each other up, conjoin, even in some cases fuse; but it also happens that their conjunction comes undone, that an apparatus of belief becomes a hotbed of resistance to power, as the history of heresies and religious dissidence shows.

Why claim that jihadism is an apparatus of power? Because it aims at the conquest of sovereign power, of state power. Witness the very name of the most powerful of its networks — “Islamic State” — and the title of “caliph” that its leader has arrogated to himself. Yet this is a power that exceeds the territorial limits of a state in the traditional sense: the first gesture of the self-styled caliph, after the capture of Mosul, was to abolish the border between Iraq and Syria, as if to show the world that his power is meant to extend without limit, turning the entire Earth into a dar ul harb, a “domain of war” where his networks can strike wherever they please.

There are, as Foucault taught us, different kinds of apparatuses of power: apparatuses of exclusion, of disciplinary normalization, apparatuses of security and control. To this list we must add apparatuses whose sole vocation is to annihilate the subjects they target: apparatuses of terror. Some choose specific targets, while others strive to produce the greatest possible number of victims. In fact, one and the same apparatus of terror can use either of these two modes of action indifferently: depending on circumstances, jihadist terrorism operates sometimes through targeted assassinations, sometimes through indiscriminate attacks, though its fundamental objectives remain the same.

The force and the truth of the slogan “Je suis Charlie” lie in just this: it was a matter not only of showing our solidarity with all the victims of the attack, but also of affirming that anyone at all can become the target of the apparatus of terror. In the strategy of jihadism, assassinations and attacks cease to be mere means in the service of an end: the exercise of terror becomes itself the goal of the action.

BRANDISHING THE TOTEM OF “COMMUNISM”

This is expressed perfectly by one of its major references, the Pakistani S.K. Malik: “To strike terror into the heart of the enemy is not only a means, it is also an end in itself (…). It is the point where the end meets the means and merges with them.” Faced with the jihadist offensive, what is needed is the broadest possible alliance of all the forces that resist it. To persist in opposing the “reds” to the “tricolores,” in brandishing the totem of “communism” in order to denigrate the necessary fight against the apparatus of terror, is to display a profound blindness. How is it, though, that such an apparatus manages to take root in the West, among certain fringes of the young?

Foucault never asked closely enough what induces individuals to adhere to apparatuses of power. For a man to agree to submit to an apparatus, it must have managed to capture some of his affects, his desires, his fantasies, to intensify or modify them, to bend them toward certain targets. The affects that animate a great many young people — victims of unemployment, of racism, of relegation to deprived neighbourhoods — are feelings of revolt against injustice: indignation, anger. It happens, however, that a just anger turns into another affect, one that no longer takes any account of the just and the unjust but aims solely to destroy its object.

That death-dealing affect is hatred. By capturing revolt, indignation, and anger, the apparatuses of terror exacerbate them, tip them over into hatred, and give that hatred targets against which to rage. How can we keep jihadism from exploiting a legitimate rebellion? By fighting concretely against the injustice that engenders it, against all forms of oppression and segregation; but also by working collectively and patiently to re-found a project of emancipation that has learned the lesson of the disasters of the 20th century. Only a politics of emancipation that knows how to “draw its poetry from the future and not from the past” can manage to break the logic of hatred.

the will to marxism: need we work in an economy of abundance?
By Jeremy Brunger.

In the 19th century, English philosopher John Stuart Mill revamped Jeremy Bentham’s philosophy of utilitarianism, whereby in tandem they mathematized the world according to their vision of liberal capitalism. While students of history today might consider utilitarianism a somewhat dated way to view the world, it was utterly revolutionary for its time, exempting nothing in nature from the purview of human endeavor. Even pleasure became a quantity under the utilitarians, who conceived it as a measure of utility, subordinating all qualities to quantities (with certain exceptions, such as the pleasures of high literature or other pursuits favored by their respective bourgeois milieu), erasing the measures of quality from the social sphere –– mankind became reduced to a series of numbers, which man first had invented. The quantum as utility, it seemed, had invented man and converted him into its religion. Man could look at the world around him only as a man might look at a department store, seeing a price here, and a calculus there. Mother Nature became the sum of the square footage of ground rents, man became the man-hour, and capital came to represent the very summit of Western achievement and Enlightened progress. In the process of capitalist development, which began to subtly invade man’s interior cognition in the 18th century and came to dominate it in the 19th, man became immiserated, even in the most powerful global empire ever to exist then: the England of Karl Marx’s Capital. Little did capital know that it would never quite hear the end of that book, though it was not proletarians who produced it but rather a poor Ph.D. supporting his family with the scraps given to him by a communist in the guise of a factory boss. A century and a half later, over half of the population of America lives at or around subsistence level, again in the most powerful global empire ever to grace the Earth. Poverty will continue to co-exist with capital because capital cannot behave otherwise than as antagonism. It quantifies in order to exploit and kills what it cannot.

The mere fact that the American government uses a statistical construction, the GDP, as a measure of its
progress is telling, for its imperial-progressive march towards the infinite neither dares to account for nor declares
the happiness of its people. Even the Founders, whose mythos compares to the founding myths of every empire,
thought the pursuit of happiness –– let us quote the Declaration –– was coincident with private property, that is,
property from which one man could exclude another. For the slave-holding bourgeois legalists of the 18th century,
actual human happiness was another matter entirely. Happiness remains a potential category for fulfillment but
our present history ignores its accounting. Instead, the monetary return on investment rules us all, no matter that
money was first put into circulation by human hands.
Karl Marx, despite his current retro-perspective image as a Victorian patriarch and anti-modern voodoo
economist, was really a special breed of technocrat much ahead of his contemporaries. He believed the evolution
of technology would free mankind from the toil of labor and that this technology would, by definition, be created by
man rather than foisted upon him by Nature at large. The strange capacity of techne, unique to man alone, would
emancipate the human race from the grinding horror of animal life (for Marx was fanatical about Darwin’s biology, even going so far as to try to dedicate Capital to him). Man’s liberation would lie in his creativity, which 19th century wage-slave industrial economies –– the principal object of Marx’s ridicule –– at once stoked and artificially
repressed by virtue of the grueling work hours and minuscule wage compensation they imposed. The technology
Marx saw in his time ruled man and made him a sort of machine; he desired a new machine, one that at once
liberated man and allowed him to approach the fellows of his species as equals.

It should not take a world-historical thinker to advise the world that it can enjoy its products once its labor has succeeded in matching output with its harvest. It would, indeed, be difficult to enjoy the fruit of one’s exponentially more productive labor if one has no time or energy with which to do so. Such was the stranglehold that the industrial bourgeoisie of Marx’s era had over its subjects, the men (and women and children) who by their sweat and blood built the English empire inwardly and outwardly. Yet the concept of the individual was still subsumed under the cloak of religious ideality, ensuring that no matter how much abundance one had at home, one was still enslaved to the structure of his politics. What work was done then was done not for the man who performed it, through his sacrifice of time and flesh. It was done in the service of his accountant, who at once acted as priest, god, and employer, lest the worker find himself in a debtor’s prison, in the stocks, or on streets lined with coal-dust, prostitutes, and urchins begging to eat the soles from his shoes. Most certainly, it was not the capitalist class who decided to end child labor as “Rule Britannia” resounded from the port cities. This was the central travesty of progress. Marx’s thoughts on the working day had it that, if work still had to be done in his envisioned communist society, the day should be neither ten to fourteen hours long — as it was in industrial England — nor anywhere near so intensive, at least in workplaces of that intensive kind. There is no difference between the cotton loom, the production line, or the registry list in a call center, which were and are all run by human beings who would rather not do what need not be done.

We still have grocery stores that command their cashiers to labor according to mathematically standardized
speed scores that split to the second how much they sell with how much they take to give their customers back
their cash; oil-changing marts that follow the same process; gas stations that forbid their clerks to accept less than
a penny’s cost of the merchandise consumed; managerial rubrics that rule not according to human interview but
to imagined cost-effectiveness; government bureaus that count heads instead of faces; and the death camps of
health insurance that would sooner have a homeless population than a healthy one. Forges of steel or lives
reduced to a cold and insane mathematics: this is not a choice. Our humanism has not been sold, it has been
stolen. Marx witnessed the same theft in his own 19th century even as he presaged ours.

Such intensive economies hardly disappeared during the 20th century, even as technological innovation thrived in the private and public spheres alike: Ford Motor Company gave us the mass-produced automobile along with the mass-produced man, and the US government, in contract with other governments and domestic private companies, ushered in the information age. Yet people often worked harder than ever: the protestant work ethic was supreme up to the 1960s, when cultural revolution seemed on the brink of an eschatological dawn. The more we knew, it seems, the more we worked. This is the chief paradox of modernity: we have so much material and intellectual abundance, yet appear trained, disciplined even, to think it necessary to earn a living despite our historically unique largesse. There has been much talk in the last 150 years about communism working only in theory, as if any other comparable theory has worked in practice, pure capitalism included. But never before has an economic base been so poised to marry the intellectual monuments built from and upon it, were we only ready to forge the bonds that would unite them, now that our intellect compares to our materials. The labor pool has never been larger, nor more educated. We have thus far only straddled this gulf, too afraid of what we might accomplish on the other side.

We live within the horizon of that gulf. Diverse anarchists, Marxists, Catholic distributionists, mainstream social-
capitalist theories of development, and proponents of other theories (and practical examples) of human society
abound on the internet, that creation of the sovereign nation-state in league with private capital. Such diffusion
was not possible before the information age, when even centuries after Gutenberg’s great leap forward we still
had access to information at a snail’s pace. Now, history is a palimpsest so easily dissected that we need only
click a button to view its inner details, such as the reason why Jesus moved to Jerusalem or why the horrific Stalin
approved of the discipline of sociology despite its bourgeois roots in French and German academic discourse.
Furthermore, we can use the internet to examine Marx’s England, the economic structure of our own
neighborhoods, the mathematical parallels between popular music and the Icelandic saga, or a whole host of
economic theory after Adam Smith first philosophized on the dialectics between the moral and the material. A
brave new world indeed: both the material resources of man’s dominion and the intellectual curiosity of his mind
are thriving in the 21st century, as the inroads of that thriving lay bare our shortcomings in understanding our
histories both particular and general.

Among such a bounty of ideas, there is, of course, much conflict. One especially interesting development in
economic theory subsequent to Marx’s anthropology of the deprived English is that of “the post-scarcity
economy.” The idea is at least as old as the 19th and perhaps even the 18th century, when Thomas Paine
in Agrarian Justice was promoting a sort of proto-socialism against the property-adoring Enlightenment thinkers of
his milieu. But the practical reality of it lies firmly in the late 20th and early 21st centuries, when material
abundance was so massive that the only thing halting human progress was humanity itself and not the
technological innovations it came to wield. John Steinbeck in his The Grapes of Wrath, arguably the greatest
novel in the American canon, lambasted the misfortunes of the dismal science of orthodox economics that saw fit
to destroy food rather than distribute it to the victims of the Great Depression, because of how such equivalent
distribution would affect the reigning price mechanism of foodstuffs. Other literature of the 1930s and 40s
abounds with such themes, from James T. Farrell’s A Note on Literary Criticism to Paul Laurence Dunbar’s The Sport of the Gods, a novel about African American migration towards the North and the migrants’ discontent at being proletarianized.

Our money economy is a direct descendant of the economy of scarcity that defined all of human history until quite
recently — again, until the 20th and 21st centuries. At least in the developed world, we now have — as we had
then — the material resources to behave as though we do not require the signaling vehicle of cash to refer to
material commodities in order to purchase them for our consumption. It persists despite itself. Such a modal
object of value-storage is a survival of the “good old days”, when human life was defined not by what it had, but
by what it did not have: thus scarcity was identical with history. Now, finance is all fiat, which is to say it is a
priestly phantom that nevertheless controls our lives by directing and reflecting our motivations. For once, our
access to the world outweighs our hunger for such access. The fact we confuse cash for accounting only tells on
us to our higher intellects, saying that we are lagging behind the times of our own futurity. The watershed will
leave its marks.
All that is fiscal is, of course, wholly political: there is no political decision that does not cohere towards an
economic result. In the eyes of capital we are little more than labor rents, as are capital’s highest employees —
the reification of the abstractions we human beings have produced over the centuries has fitted itself as our
designated master, from the server of grits and ham to the chief executive officer. We might even consider
Congress to be largely the legislative arm of capital, since their proper concerns are budgetary in nature. Five
hundred years ago the West entered early modernity, ushering in the marriage of scientific inquiry and humanism
we enjoyed for so long. Somewhere along the line the process inverted and turned man into a commodity. The
transmogrification of man into rent continues its onslaught through the centuries, all the while supposing our
negative consent.

The most obvious symbol of this transformation is the dollar, or rather, any cash symbol that makes man see his
neighbor as a potential predator in the realm of plenty. The dollar is a magic symbol that can motivate an army of
pediatricians to treat adolescent pancreatic cancer or drive horrific genocides. John Stuart Mill, despite his
excesses, would have abhorred a police state intent on protecting private property even at the cost of human
liberty, along with the legislature that produced it, utility be damned. It is our insistence on using the same
structure that served our misery in the past that continues our misery into the present.

The United States, with all its capital stocks, standing contracts dating back centuries, massive labor forces,
predatory immigration policies, technological abundance, has enough resources to end domestic poverty
tomorrow. The government could do this without even having to remold the behavior of its citizens, whether you, a
subject of the government, are a conservative who believes poverty is a moral problem, or whether you are a
liberal who believes that poverty is the direct result of economic warfare against labor. The material abundance is
there in time and space, no matter the language games of property that compete in claiming it for one interest
group or another. The abundance is tethered by a legal and cultural superstructure which we call America,
clothed as it is in the dollar which seems to be more socially powerful than the people themselves who actually
constitute civil society, of which the dollar (along with its prices) is a subordinate tool.

Regardless of the ideological bent of its people, the objects themselves are there: the housing, food, clothing, and
so on are there in the physical world but are prevented from being distributed by this language game we call
“legal tender.” Cash, therefore, only signifies labor that is backed by the structural violence of the state, which
monopolizes violence in order to monopolize human regulation. The massive profit margin of international
business is largely a loophole of this process rather than the end result. Money is merely an engine for controlling
human beings, for forcing their antagonistic co-operation, for convincing them of scarcity. The greatest
philosopher of the World Wars, Ludwig Wittgenstein, stumbled onto a great fiction with his philosophy of the
human tongue and all the perils it entails. He just did not know he had unearthed the secret of modern human
enslavement, or rather its most pernicious aspect. The dollar is not a proper representative of value but a bead on
the abacus of social control.

The average citizen attains that legal tender by performing what we moderns call “work” and what the poet Dante
called “Hell.” The way we think is overdetermined by the past, by its determinant historical scarcity, which bore
both our greater and our lesser habits into the present. Never mind that we needn’t work, or that agrarian mores ought not be the same as urban mores, or that human life simply does not operate according to the same guidelines in the 21st century that it might have operated in accordance with in the 1st. The word “determine” means “to limit”; to overdetermine is to “super-limit,” quite without reason. Without real material limits, we are still
limited, not by our lowly origins as in Darwin’s biology, but by their hyped-up successors: the postmodern financial
economy coupled with the cynicism of a corrupted political structure that conceives of itself as lasting for centuries
more.

A structural revolution presupposes, and trumps, a legal one –– for we have invented them all by virtue of our
participation. Most of this participation is barely conscious, or unconscious, as the French social theorist Louis
Althusser had it in his On the Reproduction of Capitalism. The same criticism pervades Andre Gorz’s
classic Critique of Economic Reason, dating from the 1980s, when liberal capitalism made its brutal return under
the triumphant neoliberal regime and the tentacular spread of the IMF’s loan programs. What continues is what is
allowed, provided it is not first recognized as an evil we only co-opt. We are enslaved only as long as we do not
think we are enslaved.

Simone de Beauvoir once asked her audience “Must We Burn Sade?” We have not burned Greece, the original
foundation of most of our philosophy, despite its being a slave-owning republic built on the selfsame principle of
brutal exploitation. Neither need we burn ourselves in heaving America toward its inexorable summit, wherein the
principle of liberty, both collective and individual, resolves its contradictions. It will either resolve, or denature back
into nothingness. Its people are the agents of this crossroads, but first it must ask itself some hard and pointed
questions: it must self-criticize, not in the manner of reciting a catechism, but in the manner of a free people who
despise hypocrisy and the hollow ideals of empire. If not this, then decline.

Need we work as if we are but Sisyphus on the mountainside rather than crowded around Prometheus as he
shows mankind how to kindle our way out of the primeval darkness? Why grind corn when you can mass-produce
it with a few men and a few machines? Why insist on sweating by the toil of your brow when you are already in
the land of milk and honey? Why pretend the moral sphere of modern life is comparable to the bronze age, when
most people worked, despite their detestation of it, because their environment offered them no other choice? Why
demand others perform busy work even if it is not economically necessary, only, as some suppose, morally
necessary (keeping in mind the suggestion that others must work for things not necessary to society already
smacks of slavery)? Why work for nothing save the satisfaction of a political minority intent on subjugating the
poor, who themselves always have comprised the majority of humankind?

Most of these questions could be fallaciously answered in a response that naturalizes work, as all reigning
behavioral and political regimes do: “there is no other way,” or “the way we do things now is consonant with the
natural world.” But all of these answers ring as false now, in the great age of human artifice, as they did when the
first builders of the ziggurats convinced the common man to give his surplus to the ruling class, according to
divine and natural warrant. Capitalism is a pyramid scheme, a system of caste that convinces its adherents that
horror is joy, and joy horror.

After all, the idea of the “dignity of labor” originally came not from laborers but from those who profited from their
misery. The idea that misery was dignified was not grassroots — it was corporate, especially as it was expounded
by the Church Militant (not coincidentally, the Catholic church still uses the language of corporate expansion to
describe itself).

Such probing questions are, of course, more theoretical today than practical tomorrow –– this fact cannot be
denied. If the theory were practiced one would still need to figure out the logistics of national distribution stemming
down all the way to the local field, the maintenance costs of stores and storage facilities, and so on (of course,
this is work that people are already doing rather efficiently). In general we call this already-existing figure of
logistics “the free market”; but it is, as such, still entirely dependent on using the dollar as a replacement for labor, when in fact labor is the true force behind the dollar’s successful interpellation of our collective consciousness. A
more radical approach to this is one of demystification: the dollar means nothing more than “the nation” in liquid
form, and if we as human beings could re-open our eyes and judge the material world around us, we would see
nothing but the material world in all its splendor, touched-up by the mysticism of political and economic language.
Our confusing and confused web of language, which devolved from the sonnet into the labor contract, would allow
us to recognize the role that signifiers –– such as money –– play in our everyday lives once we were freed from
the false consciousness of exchange-value. And the most common and most powerful form of that labor contract
is the dollar, which separates man from his desires: Marx himself called it a pimp between man and his relations
of need and object in his early humanist Manuscripts of 1844. It is not that we do not possess resources that
border on the infinite; it is that the structure of our societies implicitly forbids their democratic distribution. The
largesse is not designed to be distributed, but hoarded. Meritocracy, if not merit itself, is the great lie of the
bourgeois era: it has never truly existed, even among those who most vocally express its reality.
That realpolitik question “who gets what” is meaningless when everyone can get enough simply because there is
enough to get, for once.

Were God real, and in the mood to inquire into mankind’s industry, he would have to question why so many
people make so little in compensation for their labor when they seem so intent on chasing the dollar — why they
do not recognize that money economies function almost mystically as mechanisms of social control rather than as
systems of real value regulation. This question is identical to contemporary Marxism, or if you will, this question is
the critical eye of Marxism unperverted by the barbarism of the 20th century Soviet experiment. There was a great
epistemological and practical break between what Marx thought of the world and what the Marxists did to the
world. But it will not do to discard our visionaries because the blind who followed them began to see the world
through jaundiced eyes. Blind slaughter never solved anything, least of all the problem of human liberation.

But questions harass any utopia, even one that admits to not promising utopia. One such important question that
will be on everyone’s mind in the coming decades is: why should an American work a 10 hour day for less than a
living wage if the country is already the wealthiest in the world, yet one which features some of the worst
standards of living seen in the first world? Think of inner city Nashville, Detroit, Seattle, Chicago, Dallas, Trenton,
the wastelands of the highway states, the drug-addled counties of the South and the Southeast, the death-rattles
of industry resounding loud in the North, the homeless crowding around the Capitol, the black-market edging its
way towards the Southwest states, the new normal of the precariat whose families cannot expect any stability
save that of imprisonment –– with this last spreading across the entire country. In 2015 few American children can expect to attain adulthood in any degree of wholeness or stability; such a misery should be blamed not on such an abstraction as “the economy” but on the mis-dealings of the men and women who have come before them, who held the world in their arms and decided to let it fail because of circumstances they chose to ignore.

Why should a 3 hour working day be considered more insane than an 8-10 hour working day or a fifth of the
population put in jail? Why should the sane submit to the insane, who think it right that a minority should decide
whether or not a majority lives in penury or pleasure? Why should a particular mode of market-capture undermine
our ability to seek philosophy and self-understanding rather than the economic bottom line, which, if it doesn’t kill
us, certainly won’t follow us to the short-faring graves that await us all, even those few who manage to profit?
Why slave for the dollar when the dollar is itself a slave –– our slave, one we politically produce? The social and
historical trends of the upcoming century will no doubt re-insert themselves from the 19th century into the 21st
century. We will have no choice but to live through them. What will prove our grit is how well we provide the
answers to these questions. Marx was, after all, an avid reader of Darwin. What cannot endure, will not endure.
We ride the tides of this history whether we like it or not.

Marxism is not a drop-out culture: it is a culture of opting in, of examining the real conditions of our mutual
existence and striving towards a world in which neither you nor I are miserable, exploited, or enslaved to
circumstances beyond the control of the society we decide to constitute. Ought we weep for that joyful science
which is not economics? We have already wept for our better functions, now that we have only our worse to
support us. History bears the straight story: to impoverish is to radicalize, from the peasant revolts of the Middle
Ages to the brink of popular immiseration on which we now stand. The world’s measure of utility is not man’s
servitude to man. Dr. Thomas Piketty, in his recent Capital in the 21st Century, discussed a predictive concept
economists call “general appropriation.” This is dry academic jargon: what it means is the poor reclaiming from
the rich through brute force. Perhaps this too can be avoided.

The Soviet experiment in freedom is dead; the American experiment in freedom is dying. Marx opened his
chapter on “The Working Day” in Capital with one of his most famous observations: “between equal rights, force
decides.” Shall we call his bluff or prove it true in our will to Marxism, wherein restriction is defined as farce and
human freedom as the only universal imperative? I suspect if we do not use our time to pursue freedom wisely,
someone else will sell it out from under us, even as the GDP increases along with the suicide rate.

Cognitive Capitalism

There are broadly three ways of thinking historically about capitalism. One draws on Marx’s value theory and pretty much treats capital as eternal. Its appearances may change but its essence is always the same, until the revolution, which, strange to say, never comes.
The second is able to think more historically. For example, the regulation school came up with a convincing portrait of what it called the Fordist regime of regulation. In this version capitalism has stages, each of which is qualitatively different. But it tends to be troubled by the current stage, which can only be described negatively as lacking the attributes of the last. Hence it speaks of post-Fordism. In general, when change is described via modifiers, such as post or neo or late, one is not really thinking the specificity of an historical period, but merely saying that it is like or not like another.
The third approach is to try and define the specificity of the twenty-first century social formation. An excellent example might
be Yann Moulier Boutang’s Cognitive Capitalism (Polity Press, 2011), a book which presents in English the results of a
research program that has been going on in French for some time. As Boutang says, “cognitive capitalism is a paradigm, or a
coherent research program, that poses an alternative to post-Fordism.” (113) It no longer takes Fordism as the norm, and it
certainly does not get bogged down in theories of eternal capital. Its attention is on “new vectors of the production of wealth.” (135)
This is a challenge, as not even capitalism’s biggest fans seem to have much of a clue how to describe it. Perhaps Francis Fukuyama was right, and there are no new ideas, just comedies and tragedies of repetition. But Boutang wants to step back from post-situationist thought, whether that of Baudrillard or others, for whom capital becomes an absolute, and all of politics foreclosed: “is this capitalism so absolute?” (3)
Perhaps it calls rather for a fresh analysis, “a kind of small defrag program for Marxism’s mental hard drive.” (8) “Are we, in
particular, going to remain obstinately stuck to the perspective of the value of working time, of the utility or scarcity of
resources, in order to measure a wealth that depends on the time of life and on the super-abundance of knowledge?” (4)
Boutang’s method is, like that of Terranova, shaped by the Italian workerist tradition, and its strong commitment to the point
of view of living labor. Like them, his jumping off point is Marx’s Grundrisse, especially the ‘Fragment on Machines’, and
particularly the concept of the ‘general intellect’. If Marx were to appear by time-machine in today’s California, he might find
that at least some of the work being done there is no longer explainable via recourse to scarcity and physical labor. There has
been another ‘great transformation’, as Karl Polanyi might call it. After mercantilist and industrial capitalism comes cognitive
capitalism.
Industrial capitalism at its peak – what the regulation school calls Fordism – was characterized by cheap energy, foreign labor
importation, cheap raw materials, full employment, fixed exchange rates, low or even negative real interest rates, price inflation,
and wage rises in line with productivity. But rather than concentrate on the break-down of that system as the regulationists do,
Boutang is more interested in the features of what replaced it.
Boutang is rather sparing with the term ‘neoliberal’, which is so often used now as a kind of linguistic operator to describe by
contrast what this era is supposed to mean. The rise of finance is clearly a key feature of our times, but for Boutang neither
economic ideology nor financial speculation is causative. The rise of finance is what has to be explained.
The explanation is an interesting one. With the conversion of intellectual activities into tradable assets, work dematerialized, and
the contours of the company became unclear. Financialization is a way of assessing the value of production when production is
no longer just about labor and things. Finance both predicts and actualizes futures in which private companies extract value
from the knowledge society, where the boundaries of who ‘owns’ what can never be clear.
Cognitive capitalism has its problems, however. Ours is a time in which we witness the crash of unlimited resource extraction
against limits. It is a time of “the revenge of externalities” and the predation of the “bio-fund” (20) when “the city turns into a
non-city.” (22) The global urban crisis – what Mike Davis calls the Planet of Slums (Verso 2006) – is a witness to the
exhaustion of positive externalities upon which capital has depended. Which would be another way of figuring what Paul
Burkett, following Marx, sees as the resources both natural and human that capital uses “free of charge.”
Those are the problems cognitive capitalism appears completely unable to solve. What it did solve, after a fashion, is the
problem of the network effect. Value creation now relies on public goods, on complex processes, and things that it is very
difficult to price. Financialization is a response to that complexity.
Boutang follows Lazzarato in speaking of “immaterial labor” (31), a term I never liked, although not quite for the same
reasons as some of Boutang and Lazzarato’s other critics. I think it is important to hang on to the materiality of information-
based sciences and technologies. Indeed, information changes the way one thinks about what the ‘matter’ in materialism might
be. For Boutang and Lazzarato, capitalism has changed in that “the essential point is no longer the expenditure of human labor-
power, but that of invention power.” (32) Now the potential for future innovation is incorporated into pricing of future
possibilities.
Immaterial labor is supposed to be an updating of Marx’s category of abstract labor, the aggregate of concrete labors that make
up socially necessary labor time, or that labor time whose value is realized in exchange value when commodities are successfully
sold. But perhaps there’s a more thorough rethinking of the role of information in production that is really called for here.
Boutang thinks that in its advanced centers – what the situationists called the overdeveloped world – a new form of capitalism
has emerged. “We call this mutating capitalism – which now has to deal with a new composition of dependent labor (mostly
waged) – ‘cognitive capitalism’, because it has to deal with collective cognitive labor power, living labor, and no longer simply
with muscle power consumed by machines driven by ‘fossil fuel’ energy.” (37) As with the Italian workerists, the emphasis is on living labor, with the twist that for Boutang cognitive capitalism comes to be more dependent on it.
Cognitive capitalism is not limited to the ‘tech’ sector. As I argued in Telesthesia (Polity Press, 2013), if one looks at the top Fortune 500 companies, it is striking how much all of them now depend on something like cognitive labor, whether in the form of R&D, or logistics, or the intangibles of managing the aura of brands and product lines.
Moreover, this is not a simple story of the exogenous development of the forces of production. This is not a revival of the
‘information society’ thesis of Daniel Bell and others, a theory which shied away from the complexities of capitalism. There’s a
story here about power and hegemony, not just pure linear tech growth. It is interesting, for example, to watch a billion dollars
or more being spent on American elections, to try to keep the fossil fuel industry’s state protections and subsidies, so that new
kinds of tech-intensive capital do not render it all obsolete. Boutang points towards a more complex way of understanding ‘capital’ than the Italians, for whom it is always more or less the same thing, and always purely a reaction to labor’s struggles to make value for itself.
Boutang also wants to separate knowledge from information, and to avoid making a fetish of the latter. Knowledge-work is the
way information is made. This is salutary. However, I wonder if it might not be the case that just as the dead labor congealed
into fixed capital overtook living labor, so too the dead cognition reified into information systems might not have taken over
from the living labor of knowledge workers. This might be what the era of ‘big data’ is really about. Perhaps after an era of
‘primitive accumulation’, based on the circuit K-I-K’, this has now been subsumed into the mature form of I-K-I’, where
information systems shape living knowledge production to their form, and for the purpose of extracting more information, I-
prime. Hence I am skeptical of one of Boutang’s key themes: “… the novelty we are witnessing is the centrality of living labor
that is not consumed and not reduced to dead labor in mechanism.” (54)
It is certainly helpful, I think, to focus on knowledge as a kind of work, rather than to submit in advance to bourgeois
categories, where one would speak of ‘intellectual capital’ without wondering where and how it was made. Boutang also takes
his distance from the state-led schemes of the regulation school, who hanker for a return to something like an industrial world with Keynesian regulatory tools, and for whom finance can only be rent-seeking.
I would have liked to know more about the science + labor alliance policy that Boutang attributes to the French Communists in the postwar years. In Britain this was called ‘Bernalism’, after J D Bernal, its most effective advocate. What I find useful in that tradition is that, unlike in Boutang and others, it understands the problem as not one of new kinds of labor, but of labor’s potential alliance with a quite different class – what elsewhere I called the hacker class.
Given how different Boutang finds cognitive labor to be from physical labor, I question why it has to be thought as labor at all, rather than as the social activity of a quite different class. Boutang at least canvasses this possibility, in mentioning Franco Berardi’s idea of a “cognitariat” (97) and Ursula Huws’s idea of a cybertariat, but the least settled part of attempts to think the current mode of production seems to me to be the question of which classes it produces and which in turn reproduce it.
The symptom of this for me is the emergence of new kinds of property relation, so-called ‘intellectual property’, which became private property rights, and which were extended to cover an ever-wider range of information products. Boutang is aware of this:
“One of the symptoms indicating that both the mode of production and the capitalist relations of production are changing is
the importance assumed nowadays by institutional legal issues. Never has there been so much talk of property rights, by way of
contesting them as well as by way of redefining them.” (47) Perhaps one could press this even further.
Of course, Boutang is not one of those who thinks the ‘new economy’ is somehow magically ‘weightless’. He points out that it
does not eliminate material production so much as re-arrange it in space and time. “Not only are the parameters of space and
time being radically altered, but the radical overhaul of representations that is underway affects the conception of acting and of
the agent/actor doing things, as well as concepts of producing, of the producer, of the living and of the conditions of life on
earth.” (48) While Boutang does not go there, I will: what if it was not just a new stage of capitalism but a new mode of production? What if this was not capitalism, but something worse? I think it is a necessary thought-experiment, if the concept of
‘capitalism’ is ever to be a valid historical one. We need to have a sense of the conditions under which it could be said to have
transformed into something else entirely.
Whenever one suggests such a thing, the counter-arguments quickly default to one or another ideological trope. One is that if one thinks this is not capitalism, one might subscribe to some version of the California Ideology, and be oneself a dupe of various new age Powerpoint-slingers. But why does thinking one thing has ended automatically mean one must believe it was succeeded by something else? That does not follow at all. It points rather to the poverty of imagination of today’s (pseudo-)Marxists who can only imagine capital to be eternal. They seem to have a hard enough time with Boutang’s thesis that it is in a new stage, so needless
to say the thought-experiment in which something else succeeds it is literally unthinkable. Hence I would press even harder than
Boutang on this: “does this not bring immediately into question the capitalist mode of production as a whole, and not just the
dominant system of accumulation?” (115)
But I digress. Boutang takes a lively interest in the evident fact that in certain parts of the over-developed world, companies have, whatever one makes of the California ideology, “discovered and invented the new form of value.” (49) He attempts to inventory
them. Cognitive capitalism affects all sectors. Across the board, new tech increases the power of the immaterial. But tech change is no longer an exogenous resource; it is the very thing accumulation aims at. Value production comes to depend on social cooperation and tacit knowledge. The complexity of markets means increasing efficiency can’t just be achieved by economies of scale. Consumption has become productive, a part of research and development. Information now manages production cycles in real time. There’s a plurality of inputs into most production, including new kinds of labor. There are new spatial forms,
including the clustering of production systems. There is a crisis of property rights, in parallel with attempts to capture positive
externalities by successful firms.
Cognitive capitalism looks for spatial and institutional forms that allow it to capture value from things other than traditional
labor, including all of those things that writers from Terranova to Trebor Scholz call non-labor or digital labor. Hence the rise
of the network, as a third organizational form alongside the market and hierarchy. Networks are quick to identify resources
when time, attention and care are what is scarce. ‘Labor’ becomes about connectivity, responsiveness, autonomy and inventiveness, and it is hard to measure such labor in time units. (But is it still labor?)
What motivates this new kind of (non)labor, besides wealth or power, is libido sciendi, or the desire to know: “in cognitive
capitalism we are witnessing the emergence of the systematic exploitation of a third passion – or desire – as a factor of efficiency
in human activity deployed in an enterprise… What I am referring to here is the libido sciendi – the passion for learning and the
taste for the game of knowledge.” (76)
Béatriz Préciado has a quite interesting critique of what she sees as the anti-corporeal and masculinist bias of such a way of
thinking about what drives the contemporary economy. The passions might be a broader question, which is why in The Spectacle
of Disintegration I went back to Charles Fourier and his theory of the twelve passions. It might be better to say that one of the
things today’s economy is about is the productive use of all twelve of those passions, of which libido sciendi – or
Lyotard’s paralogy – might be just one.
As Pekka Himanen showed already in The Hacker Ethic, there’s a quite different relation to both time and desire at work in
what Boutang calls cognitive labor, and what I called the hacker class. They might at times be motivated by libertarian
ideologies, but as Gabriella Coleman has shown, the actual ethnography of hackers reveals a more complex ideological field.
Not that traditionally ascribed to labor, but not one entirely consumed with petit bourgeois dreams, libertarian or otherwise.
Following Luc Boltanski and Eve Chiapello, Boutang sees the development of work after Fordism as being about coopting
the rebellion from work’s alienated form. “Work comes to dress itself in the clothes of the artist or of the university. The values
of creativity only become capable of being exploited by an intelligent capitalism to the extent that they were promoted as a
value, first experimentally and then as a norm of living.” (88) Hence, at least in part, “the ‘hacker’ individual is closer to the creative artist and the ivory-tower professor than to the risk-taker or the possessive individualist.” (90) This might not, however, take full account of the rise of the ‘brogrammer’, a product of elite American universities who studied programming rather than going to business school, and for whom tech is just a means to get into business. The ethnographic realities of class are always complicated.
Even so, while start-up culture is designed to shape a kind of petit-bourgeois personality, not everyone drinks that kool-aid.
Many will discover that there is now a kind of second degree exploitation, not of labor per se but of one’s capacity to hack, to
invent, to transform information. Who knows? Some might even question the split that this emerging mode of production
forces between labor and creation, which was the basis of Asger Jorn’s very prescient situationist critique of political
economy. For Boutang this new division is like that between the ‘free’ worker and the slave in mercantilist capitalism – which I
must point out is a division between two different classes.
Perhaps one could even open up the question of whether the tensions within the ruling class point toward the formation of a
different kind of ruling class. One part of the ruling class really insists on the enclosure of information within strict private
property forms, while another part does not. One part has lost the ability to produce information goods strapped to physical
objects and charge as if they were just physical objects. This is the case not just with things like movies or music, but also with
drugs and increasingly with sophisticated manufactured goods. You can buy a pretty good knock-off of an iPad now for a
fraction of the price.
And yet there’s a tension here, as there is another kind of value production that is all about the leaky and indeterminate way in
which social knowledge gets turned into products. One could frame this as an instability for a ruling class which does not know
which of these is more important, or whether both tendencies can really occur at once. Or whether it is even a split between
different kinds of ruling class: one still dependent on extracting surplus labor power and selling commodities; one dependent
instead on asymmetries of information and commanding the processes of social creation themselves.
In Boutang, the markets act as multiplier and vector for values produced by other means. “Like the giant Anteus, who could
only recharge his strength by keeping his feet on the ground, cognitive capitalism, whose purpose is to produce value (and not
commodities or use values), needs to multiply its points of contact with a society that is in motion, with living activity.” (108-
109) Entrepreneurial intelligence is now about converting social networks into value. The entrepreneur is a surfer who does not
create the wave. Here, like Marx, Boutang understands value creation as taking place off-stage, and made invisible by a kind of
market fetishism. These days it is not the commodity that is the fetish so much as the great man of business. As if the world just
issued fully formed from Steve Jobs’ brain. Cognitive capital is based on the knowledge society, but is not the same thing.
There’s a tactical value in seeing cognitive and industrial capital as distinct. “The real challenge is thus to minimize as far as
possible this phase during which cognitive capitalism and industrial capitalism can build anti-natural alliances in order to control,
restrain or break the power of liberation of the knowledge society.” (112) Perhaps there are fissures between them that can be
worked in the interests of the dispossessed peoples of all kinds.
In a lovely metaphor, Boutang talks about a ruling class that has figured out how to capture the productive labor of its worker-
bees when they make honey, but is only just figuring out how to capture the value of their efforts at “pollination.” (117) What
Boutang calls the knowledge society underlying cognitive capitalism is precisely that pollination, that practice of collaborative
effort between humans and non-humans to make worlds. Hence: “Cognitive capitalism reproduces, on an enlarged scale, the
old contradictions described by Marx, between the socialization of production and the rules of appropriation of value.” (120)
In ‘Escape from the Dual Empire’, I talked about the two-sided nature of this emerging mode of production, one looking
for ways to commodify knowledge in the form of information to sell on the market, the other doing the same thing but
producing military products for sale exclusively to the state. One could, I think, press further on this than either I or Boutang have done. The military origins of Silicon Valley tend to be a bit invisible in Boutang’s account.
Boutang understands the precarity that arises out of the current, rather disorganized stage of the class struggle, in which “getting the multitude to work for free is the general line of cognitive capitalism.” (133) Cognitive capital both depends on the pollinating efforts of a knowledge society built on a social-democratic pact and yet undermines it at every turn. “Knowledge becomes the raw material, but it now creates real ‘class’ divisions….” (131)
So far the only way of governing this mess that even partly works is, paradoxically enough, finance. “Finance can be said to be
the only way of ‘governing’ the inherent instability of cognitive capitalism, even if it introduces new factors of instability…”
(136) Hence “In the cognitive capitalism school of thought, flexible production and financialization are both seen as being
subordinate to the achievement of permanent innovation (the substance of value).” (139)
The value of companies has become intangible, and accounting rules don’t quite capture the value of knowledge contained in
the firm. Finance is a way of assessing and capturing the value of the externalities on which companies actually rely. Price is
formed by forming opinion among traders. Financial markets are themselves part of a long-term capture of publics as a resource.
“One could even argue that one of the main activities of cognitive capitalism is the production of different kinds of publics, of
which the stock market public is not the least.” (145) This is an original and provocative thesis.
In a concluding ‘Manifesto for Pollen Society’, Boutang notes that there is now “Wealth in society, but poverty of social organization…” (149) The “human cyborg” comes into being as cognitive capitalism acquires power over life itself. (150) For
Boutang, the privatization of social cooperation is a regression. Particularly given the “urgency of the environmental question”
one has to ask about the priorities of a mode of production that seeks more power over externalities for which it does not pay.
(173) “The only thing that our magicians, pirates and conquistadors of finance have forgotten is that pollination requires the
existence of bees!” (189)

Yanis Varoufakis: How I became an erratic Marxist
Before he entered politics, Yanis Varoufakis, the iconoclastic Greek finance minister
at the centre of the latest eurozone standoff, wrote this searing account of European
capitalism and how the left can learn from Marx’s mistakes

Yanis Varoufakis. Illustration by Ellie Foreman-Peck

Yanis Varoufakis

In 2008, capitalism had its second global spasm. The financial crisis set off
a chain reaction that pushed Europe into a downward spiral that continues
to this day. Europe’s present situation is not merely a threat for workers, for
the dispossessed, for the bankers, for social classes or, indeed, nations. No,
Europe’s current posture poses a threat to civilisation as we know it.

If my prognosis is correct, and we are not facing just another cyclical slump
soon to be overcome, the question that arises for radicals is this: should we
welcome this crisis of European capitalism as an opportunity to replace it
with a better system? Or should we be so worried about it as to embark
upon a campaign for stabilising European capitalism?

To me, the answer is clear. Europe’s crisis is far less likely to give birth to a
better alternative to capitalism than it is to unleash dangerously regressive
forces that have the capacity to cause a humanitarian bloodbath, while
extinguishing the hope for any progressive moves for generations to come.

For this view I have been accused, by well-meaning radical voices, of being
“defeatist” and of trying to save an indefensible European socioeconomic
system. This criticism, I confess, hurts. And it hurts because it contains
more than a kernel of truth.

I share the view that this European Union is typified by a large democratic
deficit that, in combination with the denial of the faulty architecture of its
monetary union, has put Europe’s peoples on a path to permanent
recession. And I also bow to the criticism that I have campaigned on an
agenda founded on the assumption that the left was, and remains, squarely
defeated. I confess I would much rather be promoting a radical agenda, the
raison d’être of which is to replace European capitalism with a different
system.

Yet my aim here is to offer a window into my view of a repugnant European capitalism whose implosion, despite its many ills, should be avoided at all
costs. It is a confession intended to convince radicals that we have a
contradictory mission: to arrest the freefall of European capitalism in order
to buy the time we need to formulate its alternative.

Why a Marxist?

When I chose the subject of my doctoral thesis, back in 1982, I deliberately focused on a highly mathematical topic within which Marx’s thought was
irrelevant. When, later on, I embarked on an academic career, as a lecturer
in mainstream economics departments, the implicit contract between
myself and the departments that offered me lectureships was that I would
be teaching the type of economic theory that left no room for Marx. In the
late 1980s, I was hired by the University of Sydney’s school of economics in
order to keep out a leftwing candidate (although I did not know this at the
time).
Yanis Varoufakis: ‘Karl Marx was responsible for framing my perspective of
the world we live in, from my childhood to this day.’ Photograph: PA

After I returned to Greece in 2000, I threw my lot in with the future prime
minister George Papandreou, hoping to help stem the return to power of a
resurgent right wing that wanted to push Greece towards xenophobia both
domestically and in its foreign policy. As the whole world now knows,
Papandreou’s party not only failed to stem xenophobia but, in the end,
presided over the most virulent neoliberal macroeconomic policies that
spearheaded the eurozone’s so-called bailouts thus, unwittingly, causing the
return of Nazis to the streets of Athens. Even though I resigned as
Papandreou’s adviser early in 2006, and turned into his government’s
staunchest critic during his mishandling of the post-2009 Greek implosion,
my public interventions in the debate on Greece and Europe have carried no
whiff of Marxism.


Given all this, you may be puzzled to hear me call myself a Marxist. But, in
truth, Karl Marx was responsible for framing my perspective of the world
we live in, from my childhood to this day. This is not something that I often
volunteer to talk about in “polite society” because the very mention of the
M-word switches audiences off. But I never deny it either. After a few years
of addressing audiences with whom I do not share an ideology, a need has
crept up on me to talk about Marx’s imprint on my thinking. To explain
why, while an unapologetic Marxist, I think it is important to resist him
passionately in a variety of ways. To be, in other words, erratic in one’s
Marxism.

If my whole academic career largely ignored Marx, and my current policy recommendations are impossible to describe as Marxist, why bring up my
Marxism now? The answer is simple: Even my non-Marxist economics was
guided by a mindset influenced by Marx.

A radical social theorist can challenge the economic mainstream in two different ways, I always thought. One way is by means of immanent
criticism. To accept the mainstream’s axioms and then expose its internal
contradictions. To say: “I shall not contest your assumptions but here is why
your own conclusions do not logically flow on from them.” This was, indeed,
Marx’s method of undermining British political economics. He accepted
every axiom by Adam Smith and David Ricardo in order to demonstrate
that, in the context of their assumptions, capitalism was a contradictory
system. The second avenue that a radical theorist can pursue is, of course,
the construction of alternative theories to those of the establishment,
hoping that they will be taken seriously.

My view on this dilemma has always been that the powers that be are never
perturbed by theories that embark from assumptions different to their own.
The only thing that can destabilise and genuinely challenge mainstream,
neoclassical economists is the demonstration of the internal inconsistency
of their own models. It was for this reason that, from the very beginning, I
chose to delve into the guts of neoclassical theory and to spend next to no
energy trying to develop alternative, Marxist models of capitalism. My
reasons, I submit, were quite Marxist.


When called upon to comment on the world we live in, I had no alternative
but to fall back on the Marxist tradition which had shaped my thinking ever
since my metallurgist father impressed upon me, when I was still a child,
the effect of technological innovation on the historical process. How, for
instance, the passage from the bronze age to the iron age sped up history;
how the discovery of steel greatly accelerated historical time; and how
silicon-based IT technologies are fast-tracking socioeconomic and historical
discontinuities.

My first encounter with Marx’s writings came very early in life, as a result of
the strange times I grew up in, with Greece exiting the nightmare of the
neofascist dictatorship of 1967-74. What caught my eye was Marx’s
mesmerising gift for writing a dramatic script for human history, indeed for
human damnation, that was also laced with the possibility of salvation and
authentic spirituality.

Marx created a narrative populated by workers, capitalists, officials and scientists who were history’s dramatis personae. They struggled to harness
reason and science in the context of empowering humanity while, contrary
to their intentions, unleashing demonic forces that usurped and subverted
their own freedom and humanity.

This dialectical perspective, where everything is pregnant with its opposite, and the eager eye with which Marx discerned the potential for change in
what seemed to be the most unchanging of social structures, helped me to
grasp the great contradictions of the capitalist era. It dissolved the paradox
of an age that generated the most remarkable wealth and, in the same
breath, the most conspicuous poverty. Today, turning to the European
crisis, the crisis in the United States and the long-term stagnation of
Japanese capitalism, most commentators fail to appreciate the dialectical
process under their nose. They recognise the mountain of debts and
banking losses but neglect the opposite side of the same coin: the mountain
of idle savings that are “frozen” by fear and thus fail to convert into
productive investments. A Marxist alertness to binary oppositions might
have opened their eyes.

A major reason why established opinion fails to come to terms with contemporary reality is that it never understood the dialectically tense “joint
production” of debts and surpluses, of growth and unemployment, of wealth
and poverty, indeed of good and evil. Marx’s script alerted us to these binary oppositions as the sources of history’s cunning.

From my first steps of thinking like an economist, to this very day, it occurred to me that Marx had made a discovery that must remain at the
heart of any useful analysis of capitalism. It was the discovery of another
binary opposition deep within human labour. Between labour’s two quite
different natures: i) labour as a value-creating activity that can never be
quantified in advance (and is therefore impossible to commodify), and ii)
labour as a quantity (eg, numbers of hours worked) that is for sale and
comes at a price. That is what distinguishes labour from other productive
inputs such as electricity: its twin, contradictory, nature. A differentiation-
cum-contradiction that political economics neglected to make before Marx
came along and that mainstream economics is steadfastly refusing to
acknowledge today.

Both electricity and labour can be thought of as commodities. Indeed, both employers and workers struggle to commodify labour. Employers use all
their ingenuity, and that of their HR management minions, to quantify,
measure and homogenise labour. Meanwhile, prospective employees go
through the wringer in an anxious attempt to commodify their labour
power, to write and rewrite their CVs in order to portray themselves as
purveyors of quantifiable labour units. And there’s the rub. If workers and
employers ever succeed in commodifying labour fully, capitalism will
perish. This is an insight without which capitalism’s tendency to generate
crises can never be fully grasped and, also, an insight that no one has access
to without some exposure to Marx’s thought.

Science fiction becomes documentary

In the classic 1956 film Invasion of the Body Snatchers, the alien force does
not attack us head on, unlike in, say, HG Wells’s The War of the Worlds.
Instead, people are taken over from within, until nothing is left of their
human spirit and emotions. Their bodies are shells that used to contain a
free will and which now labour, go through the motions of everyday “life”,
and function as human simulacra “liberated” from the unquantifiable
essence of human nature. This is something like what would have
transpired if human labour had become perfectly reducible to human
capital and thus fit for insertion into the vulgar economists’ models.

Invasion of the Body Snatchers. Photograph: SNAP/REX

Every non-Marxist economic theory that treats human and non-human productive inputs as interchangeable assumes that the dehumanisation of
human labour is complete. But if it could ever be completed, the result
would be the end of capitalism as a system capable of creating and
distributing value. For a start, a society of dehumanised automata would
resemble a mechanical watch full of cogs and springs, each with its own
unique function, together producing a “good”: timekeeping. Yet if that
society contained nothing but other automata, timekeeping would not be a
“good”. It would certainly be an “output” but why a “good”? Without real
humans to experience the clock’s function, there can be no such thing as
“good” or “bad”.

If capital ever succeeds in quantifying, and subsequently fully commodifying, labour, as it is constantly trying to, it will also squeeze that
indeterminate, recalcitrant human freedom from within labour that allows
for the generation of value. Marx’s brilliant insight into the essence of
capitalist crises was precisely this: the greater capitalism’s success in
turning labour into a commodity the less the value of each unit of output it
generates, the lower the profit rate and, ultimately, the nearer the next
recession of the economy as a system. The portrayal of human freedom as
an economic category is unique in Marx, making possible a distinctively
dramatic and analytically astute interpretation of capitalism’s propensity to
snatch recession, even depression, from the jaws of growth.

When Marx was writing that labour is the living, form-giving fire; the
transitoriness of things; their temporality; he was making the greatest
contribution any economist has ever made to our understanding of the
acute contradiction buried inside capitalism’s DNA. When he portrayed
capital as a “… force we must submit to … it develops a cosmopolitan,
universal energy which breaks through every limit and every bond and posits itself as the only policy, the only universality, the only limit and the only
bond”, he was highlighting the reality that labour can be purchased by
liquid capital (ie money), in its commodity form, but that it will always carry
with it a will hostile to the capitalist buyer. But Marx was not just making a
psychological, philosophical or political statement. He was, rather,
supplying a remarkable analysis of why the moment that labour (as an
unquantifiable activity) sheds this hostility, it becomes sterile, incapable of
producing value.
At a time when neoliberals have ensnared the majority in their theoretical
tentacles, incessantly regurgitating the ideology of enhancing labour
productivity in an effort to enhance competitiveness with a view to creating
growth etc, Marx’s analysis offers a powerful antidote. Capital can never win
in its struggle to turn labour into an infinitely elastic, mechanised input,
without destroying itself. That is what neither the neoliberals nor the
Keynesians will ever grasp. “If the whole class of the wage-labourer were to
be annihilated by machinery”, wrote Marx, “how terrible that would be for
capital, which, without wage-labour, ceases to be capital!”

What has Marx done for us?

Almost all schools of thought, including those of some progressive economists, like to pretend that, though Marx was a powerful figure, very
little of his contribution remains relevant today. I beg to differ. Besides
having captured the basic drama of capitalist dynamics, Marx has given me
the tools with which to become immune to the toxic propaganda of
neoliberalism. For example, the idea that wealth is privately produced and
then appropriated by a quasi-illegitimate state, through taxation, is easy to
succumb to if one has not been exposed first to Marx’s poignant argument
that precisely the opposite applies: wealth is collectively produced and then
privately appropriated through social relations of production and property
rights that rely, for their reproduction, almost exclusively on false
consciousness.

In his recent book Never Let a Serious Crisis Go to Waste, the historian of
economic thought, Philip Mirowski, has highlighted the neoliberals’ success
in convincing a large array of people that markets are not just a useful
means to an end but also an end in themselves. According to this view,
while collective action and public institutions are never able to “get it right”,
the unfettered operations of decentralised private interest are guaranteed to
produce not only the right outcomes but also the right desires, character,
ethos even. The best example of this form of neoliberal crassness is, of
course, the debate on how to deal with climate change. Neoliberals have
rushed in to argue that, if anything is to be done, it must take the form of
creating a quasi-market for “bads” (eg an emissions trading scheme), since
only markets “know” how to price goods and bads appropriately. To
understand why such a quasi-market solution is bound to fail and, more
importantly, where the motivation comes from for such “solutions”, one can
do much worse than to become acquainted with the logic of capital
accumulation that Marx outlined and the Polish economist Michal Kalecki adapted to a world ruled by networked oligopolies.
Neoliberals have rushed in with quasi-market responses to climate change,
such as emissions trading schemes. Photograph: Jon Woo/Reuters

In the 20th century, the two political movements that sought their roots in
Marx’s thought were the communist and social democratic parties. Both of
them, in addition to their other errors (and, indeed, crimes) failed, to their
detriment, to follow Marx’s lead in a crucial regard: instead of embracing
liberty and rationality as their rallying cries and organising concepts, they
opted for equality and justice, bequeathing the concept of freedom to the
neoliberals. Marx was adamant: The problem with capitalism is not that it is
unfair but that it is irrational, as it habitually condemns whole generations
to deprivation and unemployment and even turns capitalists into angst-
ridden automata, living in permanent fear that unless they commodify their
fellow humans fully so as to serve capital accumulation more efficiently,
they will cease to be capitalists. So, if capitalism appears unjust this is
because it enslaves everyone; it wastes human and natural resources; the
same production line that pumps out remarkable gizmos and untold wealth,
also produces deep unhappiness and crises.

Having failed to couch a critique of capitalism in terms of freedom and rationality, as Marx thought essential, social democracy and the left in
general allowed the neoliberals to usurp the mantle of freedom and to win a
spectacular triumph in the contest of ideologies.

Perhaps the most significant dimension of the neoliberal triumph is what has come to be known as the “democratic deficit”. Rivers of crocodile tears
have flowed over the decline of our great democracies during the past three
decades of financialisation and globalisation. Marx would have laughed long
and hard at those who seem surprised, or upset, by the “democratic deficit”.
What was the great objective behind 19th-century liberalism? It was, as
Marx never tired of pointing out, to separate the economic sphere from the
political sphere and to confine politics to the latter while leaving the
economic sphere to capital. It is liberalism’s splendid success in achieving
this long-held goal that we are now observing. Take a look at South Africa
today, more than two decades after Nelson Mandela was freed and the
political sphere, at long last, embraced the whole population. The ANC’s
predicament was that, in order to be allowed to dominate the political
sphere, it had to give up power over the economic one. And if you think
otherwise, I suggest that you talk to the dozens of miners gunned down by
armed guards paid by their employers after they dared demand a wage rise.

Why erratic?
Having explained why I owe whatever understanding of our social world I
may possess largely to Karl Marx, I now want to explain why I remain
terribly angry with him. In other words, I shall outline why I am by choice
an erratic, inconsistent Marxist. Marx committed two spectacular mistakes,
one of them an error of omission, the other one of commission. Even today,
these mistakes still hamper the left’s effectiveness, especially in Europe.

Marx’s first error – the error of omission – was that he failed to give sufficient
thought to the impact of his own theorising on the world that he was
theorising about. His theory is discursively exceptionally powerful, and
Marx had a sense of its power. So how come he showed no concern that his
disciples, people with a better grasp of these powerful ideas than the
average worker, might use the power bestowed upon them, via Marx’s own
ideas, in order to abuse other comrades, to build their own power base, to
gain positions of influence?

Marx’s second error, the one I ascribe to commission, was worse. It was his
assumption that truth about capitalism could be discovered in the
mathematics of his models. This was the worst disservice he could have
delivered to his own theoretical system. The man who equipped us with
human freedom as a first-order economic concept; the scholar who elevated
radical indeterminacy to its rightful place within political economics; he was
the same person who ended up toying around with simplistic algebraic
models, in which labour units were, naturally, fully quantified, hoping
against hope to evince from these equations some additional insights about
capitalism. After his death, Marxist economists wasted long careers
indulging a similar type of scholastic mechanism. Fully immersed in
irrelevant debates on “the transformation problem” and what to do about it,
they eventually became an almost extinct species, as the neoliberal
juggernaut crushed all dissent in its path.

How could Marx be so deluded? Why did he not recognise that no truth
about capitalism can ever spring out of any mathematical model, however
brilliant the modeller may be? Did he not have the intellectual tools to
realise that capitalist dynamics spring from the unquantifiable part of
human labour; ie from a variable that can never be well-defined
mathematically? Of course he did, since he forged these tools! No, the
reason for his error is a little more sinister: just like the vulgar economists
that he so brilliantly admonished (and who continue to dominate the
departments of economics today), he coveted the power that mathematical
“proof” afforded him.

If I am right, Marx knew what he was doing. He understood, or had the
capacity to know, that a comprehensive theory of value cannot be
accommodated within a mathematical model of a dynamic capitalist
economy. He was, I have no doubt, aware that a proper economic theory
must respect the idea that the rules of the undetermined are themselves
undetermined. In economic terms this meant a recognition that the market
power, and thus the profitability, of capitalists was not necessarily reducible
to their capacity to extract labour from employees; that some capitalists can
extract more from a given pool of labour or from a given community of
consumers for reasons that are external to Marx’s own theory.

Alas, that recognition would be tantamount to accepting that his “laws” were not immutable. He would have to concede to competing voices in the
trades union movement that his theory was indeterminate and, therefore,
that his pronouncements could not be uniquely and unambiguously correct.
That they were permanently provisional. This determination to have the
complete, closed story, or model, the final word, is something I cannot
forgive Marx for. It proved, after all, responsible for a great deal of error
and, more significantly, authoritarianism. Errors and authoritarianism that
are largely responsible for the left’s current impotence as a force of good
and as a check on the abuses of reason and liberty that the neoliberal crew
are overseeing today.

Mrs Thatcher’s lesson

I moved to England to attend university in September 1978, six months or so before Margaret Thatcher’s victory changed Britain forever. Watching the
Labour government disintegrate, under the weight of its degenerate social
democratic programme, led me to a serious error: to the thought that
Thatcher’s victory could be a good thing, delivering to Britain’s working and
middle classes the short, sharp shock necessary to reinvigorate progressive
politics; to give the left a chance to create a fresh, radical agenda for a new
type of effective, progressive politics.

Even as unemployment doubled and then trebled, under Thatcher’s radical neoliberal interventions, I continued to harbour hope that Lenin was right:
“Things have to get worse before they get better.” As life became nastier,
more brutish and, for many, shorter, it occurred to me that I was tragically
in error: things could get worse in perpetuity, without ever getting better.
The hope that the deterioration of public goods, the diminution of the lives
of the majority, the spread of deprivation to every corner of the land would,
automatically, lead to a renaissance of the left was just that: hope.

The reality was, however, painfully different. With every turn of the
recession’s screw, the left became more introverted, less capable of
producing a convincing progressive agenda and, meanwhile, the working
class was being divided between those who dropped out of society and those
co-opted into the neoliberal mindset. My hope that Thatcher would
inadvertently bring about a new political revolution was well and truly
bogus. All that sprang out of Thatcherism were extreme financialisation, the
triumph of the shopping mall over the corner store, the fetishisation of
housing and Tony Blair.

Margaret Thatcher at the Conservative party conference in 1982. Photograph: Nils Jorgensen/Rex Features

Instead of radicalising British society, the recession that Thatcher’s government so carefully engineered, as part of its class war against
organised labour and against the public institutions of social security and
redistribution that had been established after the war, permanently
destroyed the very possibility of radical, progressive politics in Britain.
Indeed, it rendered impossible the very notion of values that transcended
what the market determined as the “right” price.

The lesson Thatcher taught me about the capacity of a long-lasting recession to undermine progressive politics is one that I carry with me into
today’s European crisis. It is, indeed, the most important determinant of my
stance in relation to the crisis. It is the reason I am happy to confess to the
sin I am accused of by some of my critics on the left: the sin of choosing not
to propose radical political programs that seek to exploit the crisis as an
opportunity to overthrow European capitalism, to dismantle the awful
eurozone, and to undermine the European Union of the cartels and the
bankrupt bankers.

Yes, I would love to put forward such a radical agenda. But, no, I am not
prepared to commit the same error twice. What good did we achieve in
Britain in the early 1980s by promoting an agenda of socialist change that
British society scorned while falling headlong into Thatcher’s neoliberal
trap? Precisely none. What good will it do today to call for a dismantling of
the eurozone, of the European Union itself, when European capitalism is
doing its utmost to undermine the eurozone, the European Union, indeed
itself?

A Greek or a Portuguese or an Italian exit from the eurozone would soon lead to a fragmentation of European capitalism, yielding a seriously recessionary surplus region east of the Rhine and north of the Alps, while the rest of Europe would be in the grip of vicious stagflation. Who do you think would benefit from this development? A progressive left that will rise phoenix-like from the ashes of Europe’s public institutions? Or the Golden
Phoenix-like from the ashes of Europe’s public institutions? Or the Golden
Dawn Nazis, the assorted neofascists, the xenophobes and the spivs? I have
absolutely no doubt as to which of the two will do best from a disintegration
of the eurozone.

I, for one, am not prepared to blow fresh wind into the sails of this
postmodern version of the 1930s. If this means that it is we, the suitably
erratic Marxists, who must try to save European capitalism from itself, so be
it. Not out of love for European capitalism, for the eurozone, for Brussels, or
for the European Central Bank, but just because we want to minimise the
unnecessary human toll from this crisis.

What should Marxists do?

Europe’s elites are behaving today as if they understand neither the nature
of the crisis that they are presiding over, nor its implications for the future
of European civilisation. Atavistically, they are choosing to plunder the
diminishing stocks of the weak and the dispossessed in order to plug the
gaping holes of the financial sector, refusing to come to terms with the
unsustainability of the task.

Yet with Europe’s elites deep in denial and disarray, the left must admit that
we are just not ready to plug the chasm that a collapse of European
capitalism would open up with a functioning socialist system. Our task
should then be twofold. First, to put forward an analysis of the current state
of play that non-Marxist, well-meaning Europeans who have been lured by the sirens of neoliberalism find insightful. Second, to follow this sound
analysis up with proposals for stabilising Europe – for ending the
downward spiral that, in the end, reinforces only the bigots.

Let me now conclude with two confessions. First, while I am happy to defend as genuinely radical the pursuit of a modest agenda for stabilising a
system that I criticise, I shall not pretend to be enthusiastic about it. This
may be what we must do, under the present circumstances, but I am sad
that I shall probably not be around to see a more radical agenda being
adopted.


My final confession is of a highly personal nature: I know that I run the risk
of, surreptitiously, lessening the sadness from ditching any hope of
replacing capitalism in my lifetime by indulging a feeling of having become
agreeable to the circles of polite society. The sense of self-satisfaction from
being feted by the high and mighty did begin, on occasion, to creep up on
me. And what a non-radical, ugly, corruptive and corrosive sense it was.
My personal nadir came at an airport. Some moneyed outfit had invited me
to give a keynote speech on the European crisis and had forked out the
ludicrous sum necessary to buy me a first-class ticket. On my way back
home, tired and with several flights under my belt, I was making my way
past the long queue of economy passengers, to get to my gate. Suddenly I
noticed, with horror, how easy it was for my mind to be infected with the
sense that I was entitled to bypass the hoi polloi. I realised how readily I
could forget that which my leftwing mind had always known: that nothing
succeeds in reproducing itself better than a false sense of entitlement.
Forging alliances with reactionary forces, as I think we should do to stabilise
Europe today, brings us up against the risk of becoming co-opted, of
shedding our radicalism through the warm glow of having “arrived” in the
corridors of power.

Radical confessions, like the one I have attempted here, are perhaps the
only programmatic antidote to ideological slippage that threatens to turn us
into cogs of the machine. If we are to forge alliances with our political
adversaries we must avoid becoming like the socialists who failed to change
the world but succeeded in improving their private circumstances. The trick
is to avoid the revolutionary maximalism that, in the end, helps the
neoliberals bypass all opposition to their self-defeating policies and to retain
in our sights capitalism’s inherent failures while trying to save it, for
strategic purposes, from itself.

A Point of View: The revolution of capitalism
Karl Marx may have been wrong about communism but he was right
about much of capitalism, John Gray writes.

As a side-effect of the financial crisis, more and more people are starting to
think Karl Marx was right. The great 19th Century German philosopher,
economist and revolutionary believed that capitalism was radically unstable.

It had a built-in tendency to produce ever larger booms and busts, and over
the longer term it was bound to destroy itself.

Marx welcomed capitalism's self-destruction. He was confident that a popular revolution would occur and bring a communist system into being that would be
more productive and far more humane.

Marx was wrong about communism. Where he was prophetically right was in
his grasp of the revolution of capitalism. It's not just capitalism's endemic
instability that he understood, though in this regard he was far more perceptive
than most economists in his day and ours.
More profoundly, Marx understood how capitalism destroys its own social base
- the middle-class way of life. The Marxist terminology of bourgeois and
proletarian has an archaic ring.

But when he argued that capitalism would plunge the middle classes into
something like the precarious existence of the hard-pressed workers of his
time, Marx anticipated a change in the way we live that we're only now
struggling to cope with.

He viewed capitalism as the most revolutionary economic system in history, and there can be no doubt that it differs radically from those of previous times.

Hunter-gatherers persisted in their way of life for thousands of years, slave cultures for almost as long and feudal societies for many centuries. In contrast,
capitalism transforms everything it touches.

It's not just brands that are constantly changing. Companies and industries are
created and destroyed in an incessant stream of innovation, while human
relationships are dissolved and reinvented in novel forms.

Capitalism has been described as a process of creative destruction, and no-one can deny that it has been prodigiously productive. Practically anyone who
is alive in Britain today has a higher real income than they would have had if
capitalism had never existed.

Negative return
The trouble is that among the things that have been destroyed in the process
is the way of life on which capitalism in the past depended.

Defenders of capitalism argue that it offers to everyone the benefits that in Marx's time were enjoyed only by the bourgeoisie, the settled middle class that
owned capital and had a reasonable level of security and freedom in their
lives.

In 19th Century capitalism most people had nothing. They lived by selling their
labour and when markets turned down they faced hard times. But as
capitalism evolves, its defenders say, an increasing number of people will be
able to benefit from it.

Fulfilling careers will no longer be the prerogative of a few. No more will people
struggle from month to month to live on an insecure wage. Protected by
savings, a house they own and a decent pension, they will be able to plan their
lives without fear. With the growth of democracy and the spread of wealth, no-
one need be shut out from the bourgeois life. Everybody can be middle class.

In fact, in Britain, the US and many other developed countries over the past 20
or 30 years, the opposite has been happening. Job security doesn't exist, the
trades and professions of the past have largely gone and life-long careers are
barely memories.

If people have any wealth it's in their houses, but house prices don't always
increase. When credit is tight as it is now, they can be stagnant for years. A
dwindling minority can count on a pension on which they could comfortably
live, and not many have significant savings.

More and more people live from day to day, with little idea of what the future
may bring. Middle-class people used to think their lives unfolded in an orderly
progression. But it's no longer possible to look at life as a succession of stages
in which each is a step up from the last.

In the process of creative destruction the ladder has been kicked away and for
increasing numbers of people a middle-class existence is no longer even an
aspiration.

Risk takers
As capitalism has advanced it has returned most people to a new version of
the precarious existence of Marx's proles. Our incomes are far higher and in
some degree we're cushioned against shocks by what remains of the post-war
welfare state.

But we have very little effective control over the course of our lives, and the
uncertainty in which we must live is being worsened by policies devised to deal
with the financial crisis. Zero interest rates alongside rising prices means
you're getting a negative return on your money and over time your capital is
being eroded.
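To make the arithmetic behind that claim concrete (the figures below are illustrative and mine, not Gray's): with a nominal interest rate $i$ and an inflation rate $\pi$, the real return on savings is roughly

$$r \approx i - \pi = 0\% - 3\% = -3\%, \qquad K_t = K_0 (1+r)^t, \qquad K_5 \approx 100 \times 0.97^5 \approx 85.9$$

So at zero interest and, say, 3% inflation, £100 of savings retains only about £86 of its purchasing power after five years.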

The situation of many younger people is even worse. In order to acquire the
skills you need, you'll have to go into debt. Since at some point you'll have to
retrain you should try to save, but if you're indebted from the start that's the
last thing you'll be able to do. Whatever their age, the prospect facing most
people today is a lifetime of insecurity.

At the same time as it has stripped people of the security of bourgeois life,
capitalism has made the type of person that lived the bourgeois life obsolete.
In the 1980s there was much talk of Victorian values, and promoters of the
free market used to argue that it would bring us back to the wholesome virtues
of the past.

For many, women and the poor for example, these Victorian values could be
pretty stultifying in their effects. But the larger fact is that the free market works
to undermine the virtues that maintain the bourgeois life.

When savings are melting away being thrifty can be the road to ruin. It's the
person who borrows heavily and isn't afraid to declare bankruptcy that survives
and goes on to prosper.
When the labour market is highly mobile it's not those who stick dutifully to
their task that succeed, it's people who are always ready to try something new
that looks more promising.

In a society that is being continuously transformed by market forces, traditional


values are dysfunctional and anyone who tries to live by them risks ending up
on the scrapheap.

Vast wealth

Looking to a future in which the market permeates every corner of life, Marx
wrote in The Communist Manifesto: "Everything that is solid melts into air". For
someone living in early Victorian England - the Manifesto was published in
1848 - it was an astonishingly far-seeing observation.

At the time nothing seemed more solid than the society on the margins of
which Marx lived. A century and a half later we find ourselves in the world he
anticipated, where everyone's life is experimental and provisional, and sudden
ruin can happen at any time.

A tiny few have accumulated vast wealth but even that has an evanescent,
almost ghostly quality. In Victorian times the seriously rich could afford to relax
provided they were conservative in how they invested their money. When the
heroes of Dickens' novels finally come into their inheritance, they do nothing
forever after.

Today there is no haven of security. The gyrations of the market are such that
no-one can know what will have value even a few years ahead.

This state of perpetual unrest is the permanent revolution of capitalism and I think it's going to be with us in any future that's realistically imaginable. We're
only part of the way through a financial crisis that will turn many more things
upside down.

Currencies and governments are likely to go under, along with parts of the
financial system we believed had been made safe. The risks that threatened to
freeze the world economy only three years ago haven't been dealt with.
They've simply been shifted to states.

Whatever politicians may tell us about the need to curb the deficit, debts on
the scale that have been run up can't be repaid. Almost certainly they will be
inflated away - a process that is bound to be painful and impoverishing for
many.

The result can only be further upheaval, on an even bigger scale. But it won't
be the end of the world, or even of capitalism. Whatever happens, we're still
going to have to learn to live with the mercurial energy that the market has
released.
Capitalism has led to a revolution but not the one that Marx expected. The
fiery German thinker hated the bourgeois life and looked to communism to
destroy it. And just as he predicted, the bourgeois world has been destroyed.

But it wasn't communism that did the deed. It's capitalism that has killed off the
bourgeoisie.
THE PHILOSOPHICAL FOUNDATIONS OF EFFECTIVE ALTRUISM

by Michael Lopresto

We, as members of an affluent society, have a moral obligation to help those who
are far worse off than we are. To establish this moral obligation, I'll use Peter
Singer's Life Saving Analogy from his seminal (1972) paper "Famine, Affluence
and Morality" – an argument that strongly influenced my thinking when I first read
it as an undergraduate (more years ago than I care to remember), and still strongly
influences me today. It sparked off a movement known today by the name Effective
Altruism.

The Life Saving Analogy asks us to imagine walking past a pond, where we happen
to see a child drowning. We can safely and easily save the life of the child, but in
the process would ruin our new pair of shoes, which cost, say, $300. We all judge
that it would be wrong not to save the life of the child, and that the cost of the shoes
doesn't have any moral significance in comparison. And yet – and this is the
analogy – we are in a position right now where we could save someone's life for
exactly the same cost, who would otherwise die of poverty-related illness. The only
difference is that we can't directly see the person we would save; but this fact alone
makes no moral difference. Therefore, we have a moral obligation to give money to
those who would die of poverty-related illness, and to alleviate poverty-related
suffering, because our money would actually make a difference – that's
the effective part of effective altruism – and because our lives would not be any
worse off in any significant sense.

So the view is that we as affluent people have a moral obligation to donate a percentage of our income to charities that have been proven to be highly effective,
since it's inconsistent to accept that we ought to save the drowning child in front of
us, but not save a person who's far away, when it's within our ability to do so. I'll
quickly rebut three common objections to the effective altruist view.

1. The first objection is that donations to charity can't make the difference that
people would like them to, because charities invariably have massive overheads
and administrative fees that prevent your money getting to those who need it.
However, the problem with this objection is that there are non-profit organisations
like GiveWell that analyse a huge number of charities for their efficiency and effectiveness. For example, GiveWell has shown that if you donate $10 to the Against Malaria Foundation, at least $7 goes to those who need it, and if you donate $10 to GiveDirectly, $9.10 goes to those who need it. This money makes a huge difference to those living in extreme poverty, on less than $1.25 per day.

2. The second objection is that contrary to the Life Saving Analogy, being a
great spatial distance away from someone in need does in fact make a moral
difference, such that the further away you are from someone, the less moral
obligation you have to them. To see that this objection doesn't work, we can
imagine a different scenario. Imagine being a security guard watching a pond on the other side of the world via CCTV. You see a child fall into the pond, and
there's no one around to save her. You can push a button that will instantly drain
the water from the pond, saving the child's life, but you know you'll be up for a
water bill of $300. Should you push that button? Again, we'd say of course you
should, and that the $300 you'll have to pay is insignificant in comparison.

3. The third objection is perhaps the most serious, and I must admit that I don't
have a principled answer to this objection like I do the previous objections. It says
that accepting the Life Saving Analogy leads to extreme demands. If we're ever in a position to make a difference in the life of someone who lives in extreme poverty,
we ought to perform that action. However, it's nearly always the case that instead of
buying tickets to the movies or paying for our child's violin lessons, we could be
giving that money to someone in desperate poverty. But, goes the objection, this
leads to an absurd outcome, so there must be a problem with accepting the Life
Saving Analogy. As I said, I don't have a principled answer to this objection, but I
do have a pragmatic answer: for anyone who lives a comfortable life in an affluent
society, giving around 10% of your income to those in desperate need won't make
your life any less comfortable. And furthermore, if everyone in affluent societies
gave 10% of their income, that would make massive difference.
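
To make that effectiveness arithmetic concrete, here is a minimal back-of-the-envelope sketch (in Python) of the calculation the reply to the first objection relies on. The pass-through rates are the figures quoted above, the $100 donation is purely hypothetical, and the $1.25-per-day figure is the extreme-poverty line mentioned earlier; none of this is an official GiveWell calculation.

    # Back-of-the-envelope sketch of the "effectiveness" arithmetic above.
    # Pass-through rates are the essay's quoted figures; the $100 donation
    # is a hypothetical example, not a GiveWell calculation.

    POVERTY_LINE = 1.25  # extreme poverty: living on less than $1.25 per day

    pass_through = {
        "Against Malaria Foundation": 7.00 / 10.00,  # $7 of every $10
        "GiveDirectly": 9.10 / 10.00,                # $9.10 of every $10
    }

    donation = 100.00  # hypothetical gift

    for charity, rate in pass_through.items():
        delivered = donation * rate
        days = delivered / POVERTY_LINE
        print(f"{charity}: ${delivered:.2f} reaches recipients "
              f"(~{days:.0f} days of income at the $1.25/day line)")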

Thinking about the moral demands of affluence elucidates many other closely related issues and questions. For example, a charity like GiveDirectly is only made possible by current technology, and by its availability and cost effectiveness. Part of their process is to use satellite imagery to identify places where people live under thatch roofs, one of the most reliable indicators of poverty. Furthermore, the abundance of very cheap mobile phones makes it easy for organisations like GiveDirectly to keep in touch with recipients. Technology transforms the ways in which we can exercise our moral agency, and perform moral actions that have effects on the other side of the world. So the question of spatial distance's moral significance can't be separated from the technology that determines what we can do at a distance.

Another philosophical question that's elucidated for me concerns the nature of morality itself. Many of us have a strong intuition that we have moral obligations, first and foremost, to our loved ones, friends and those in our own community. These intuitions can obviously go too far, and give rise to ugly forms of nationalism. However, many people feel the need to hold on to a healthy form of partiality. This forcefully raises a fundamental question about the nature of morality. Are moral obligations partial or impartial? That is, do moral obligations to our friends and loved ones override the moral obligations we have to strangers? Most people believe that they do. Or at least, that's how they live their everyday lives (that's certainly how I live mine). But the Life Saving Analogy teaches us that they don't. We have the exact same, impartial, obligations to family and stranger alike. It makes absolutely no moral difference whether it's our child drowning in the pond or someone else's.

So far so good. The only problem is that this conclusion is somewhat at odds with human nature. This is because our moral intuitions are the product of evolution, and the mechanisms of human evolution were gene selection, kin selection and group selection. And for most of human history, humans lived in groups of no more than 150. So our moral intuitions were never designed for thinking about larger numbers of people, still less people we never see. Evolution has made our moral intuitions profoundly unreliable. Throughout our evolutionary history, we only ever had to make moral decisions about people we knew intimately (unless it was someone from an out-group, in which case they would be extended little or no moral consideration). The best way to get our intuitions back on track, it seems to me, is to consult our best moral theory. And our best moral theory says that morality is impartial.

history from the early modern philosophers
Interview by Richard Marshall.

Daniel Garber knows philosophy makes some parents go silent and it’s broad enough to encompass everything worthwhile. He thinks about the history of seventeenth century philosophy, about what makes the early moderns modern, about the giants of the time and what we learn from studying the lesser known ones too, about the importance of Kant to our conception of the early moderns, about Leibniz, about contrasts between Leibniz and Descartes and Spinoza, about the metaphysical schemes of the time, about Descartes and Galileo, about Hobbes and Spinoza, Pascal’s wager, and about x-phi and comparing our present context with the early mods. This one wakes us up to the long years we’ve been travelling…

3:AM: What made you become a philosopher?

Daniel Garber: When I began as an undergraduate at Harvard in 1967, I thought that I was going to become a physicist. My father wanted me to become an engineer, but physics was close enough for him. I was also very interested in music, in composition and in conducting. When I began at Harvard, I also became interested in linguistics. I found the new Chomsky transformational grammar fascinating. George Lakoff, then an assistant professor in Linguistics, drew me into his circle. I discovered Philosophy through a course on the Enlightenment and the French Revolution in the History Department. That, in turn, drew me into the Philosophy Department. I got a “C” in my first course in Philosophy—Locke, Berkeley, and Hume. And I was hooked. Physics had become somewhat dull; I could do very well without really making an effort. But I did make an effort in my Philosophy class, and got a poor grade. It seemed to me that there was obviously more depth in Philosophy than in Physics. (I later discovered that most of the others in the Philosophy class were grad students, taking it for distribution, and that I had no business being there.) So I signed up for Philosophy. (When I called home to tell my parents, there was a strange silence at the other end of the telephone…) In part it was the experience I had with the one course. But it was also because I realized early on that I could pursue virtually all of my interests within Philosophy: the subject was broad enough to encompass all of the things that interested me about physics, about language, about music, and more.

My first years I was doing pretty straight Harvard analytic philosophy, particularly epistemology. In my senior year
in college, I took Dreben’s famous seminar in the history of analytic philosophy. I wrote on Russell and his early
papers on philosophical analysis. It was then that I first developed a taste for work in the history of philosophy.
The interest in Russell took me forward to the Vienna Circle and Logical Positivism. But what took me back to the
seventeenth century was, ironically, Quine, one of my teachers.

I was a grad student, still at Harvard. When I first read his famous essay, “Epistemology Naturalized,” I was completely hooked on the idea of naturalizing philosophy and making it continuous with the natural sciences. But
the treatment of Descartes struck me as curious. Descartes was the villain of the essay, a philosopher who split
philosophy off from the natural sciences. But at the same time I was reading that, I was assisting in a course
where we were reading Descartes. The edition used was that of Anscombe and Geach. That edition contained
excerpts from some of Descartes’ scientific writings. That made me wonder how Descartes himself thought of the
relation between his scientific writings and his philosophy: how could someone who framed laws of motion and
did work in optics be anything but a Quinean naturalist? So I began working on that. From another direction,
Maurice Mandelbaum’s work on Locke and Boyle made me wonder about his thought. And before long I found
myself spending a great deal of my time doing historical work.

I ended up writing a dissertation in epistemology with Roderick Firth and Hilary Putnam. In my years as an
assistant professor at Chicago I continued to work in epistemology, switching over to the Bayesian program. I am
very proud of the papers I published in that area, which are still read and still regularly refuted. But during that
time I found myself working more and more on the history of early modern philosophy and science. After my
tenure decision, I decided that I had to make a choice: I couldn’t continue to do both and do both well. So I chose
history, and have never regretted it.

3:AM: You’re working primarily in studies of the early moderns, which is roughly European philosophy of the
seventeenth century isn’t it? I guess the shorthand sketch of this crucial time goes something like ‘move aside
Aristotle we’re going to build everything like a machine’. What would you say made the early moderns modern?

DG: Your characterization of the early moderns is not inaccurate. Aristotelianism in one form or another was the
common culture of the period: it was the philosophy generally taught in all of the schools. As such, it was studied
by virtually every educated person in Europe. And many (though not all) who rejected the Aristotelian philosophy
adopted some version of the mechanical philosophy. But there were more new strands than the mechanical
philosophy. There was the new experimentalism and natural history. There was also the emphasis on
mathematics in understanding nature. There was also a resurgence of interest in atomism, which is something a
bit different from the mechanical philosophy. And these didn’t always coincide in the same figures, nor were these
always shared by those who rejected Aristotelianism.

What, then, made the early moderns modern? Interestingly, not the mechanical philosophy, the idea that
everything can be explained through size, shape and motion and the collision of bodies, something that was
widely called into question after Newton and the theory of universal gravitation. On the other hand, the early-
modern emphasis on experiment and mathematics in many quarters still resonates with us. But to my mind, what
was most modern was the rejection of authority—the authority of the schools, the ancients, the church, the
university—and the development of the idea that knowledge advances through the free exchange of new ideas
and the competition among people who attack problems from different perspectives.

3:AM: I guess the giants of this period are Descartes, Spinoza and Leibniz, Locke, Berkeley and Hume. If we only knew these six would we be able to understand the options that were being worked out at this time or would we be missing crucial elements? I suppose another way of framing this question would be: had these not come along, were there lesser known contemporaries who’d have taken us in a different direction?

DG: These six were certainly important, but they weren’t the only ones. At the very least one would have to add
figures like Bacon, Galileo, Hobbes (not only for politics, but for his materialistic conception of the world),
Huygens, Newton. But there are also a number of others who are not so well known. In the late sixteenth century
there were figures like Telesio and Patrizi. (In fact, when Descartes was working, it was probably Telesio who was
considered the father of modern philosophy!) Later in the seventeenth century there were other important figures
like Mersenne, Gassendi, the Cambridge Platonists Henry More and Ralph Cudworth, Antoine Arnauld and
Malebranche. And there were a slew of others even less known today, including Basson, Gorlaeus, Hill, Fludd,
d’Espagnet, and many, many others. Recently there has been a surge of interest in women philosophers in the
period, including Elisabeth of Bohemia, Margaret Cavendish, Anne Conway and others. And the list is constantly
growing.

What do we learn from studying these lesser-known figures? In a way, they provide the context for the larger
figures who have entered the canon. There is no question that Descartes, for example, shaped the philosophical
landscape more than Henry More. But to understand what Descartes really meant, why he was singled out
among his contemporaries, you have to understand More as a reader, correspondent, and commentator on
Descartes, as well as many other lesser-known contemporaries, such as Clauberg or de La Forge or Rohault:
they help us understand what Descartes really meant, what was striking and important and original to his
contemporaries, and how his central doctrines were understood by those who shared an intellectual context with
him. Now, to understand Spinoza, it is generally acknowledged that you have to understand Descartes and
Descartes’ influence. But Spinoza’s Descartes was not our Descartes: his readings were shaped by the readings
that other contemporaries made. Spinoza’s Descartes, the Descartes that shaped his philosophy was a
seventeenth-century Descartes that can only be found by reading the lesser-known figures who read him,
commented on him, and fought with him. For me, then, it seems essential to step outside the short list of major
figures, and explore the larger intellectual geography. But it’s also just really interesting to immerse yourself in a
different world, to explore an exotic intellectual landscape, to put yourself into the intellectual world before it was
known who would emerge as the important thinkers, and who would sink into obscurity.

What would the world have been like if the big guys hadn’t been there? Who knows? It would have been so much
different from the world that we live in that I have real trouble conceiving it. It is hard enough exploring our world
without worrying about other distant possible worlds.

3:AM: An interesting exercise you undertook with Beatrice Longuenesse was to look at the achievements through the lens of Kant’s transcendental philosophical approach to metaphysics. He saw the efforts of the early moderns as reason’s ruined edifices. Did Kant read the moderns correctly or did he distort and mislead? How important is Kant to how post-Kantian philosophers like ourselves have read the early moderns?

DG: I think that Kant was very important to our conception of the early moderns. Kant read the history of
philosophy as the prelude to his own philosophy. It is generally thought that the standard distinction between
Rationalists and Empiricists that has shaped both pedagogy and research in the period derives from Kant and his
followers, who saw the Kantian philosophy as putting together these two great traditions. Now, Kant’s interests
were not historical: he was interested in his great predecessors as philosophers, who attempted to deal with the
same problems that interested him, unsuccessfully, in his view. But in reading the history in the way in which he
did, he introduced distortions that have colored our historical understanding of the period ever since. “Rationalist”
and “Empiricist,” conceived of as schools of thought or coherent traditions were not part of the intellectual
geography of the early modern period.

My not-so-hidden agenda in the conference I did with Beatrice and in Kant and the Early Moderns, the book that
came out of it, was to undermine the distinction, as it is usually drawn, and to think about some of these characters
as they thought of themselves. Reason, for example, was very important to Descartes, but he also recognized the
importance of the senses and was an important experimentalist. Furthermore, he wasn’t driven mainly by
epistemological questions; among his contemporaries, his physics was probably more salient than anything else.
And it is somewhat strange to link Descartes in a school with Spinoza and Leibniz, both of whom differed
profoundly from him, and saw themselves as anti-Cartesians. On the other hand, Berkeley, usually listed among
the empiricists, shared a great deal with Descartes’ views on mental substance, and differed profoundly from
Locke on those basic questions. In many ways, he is best read as a follower of the Cartesian Malebranche.

3:AM: You’ve done a lot of work on Leibniz. Is he your favourite from this period? What’s the appeal?

DG: I adore Leibniz, but he is only one among a number of favorites in the period. Now that the book Leibniz: Body, Substance, Monad is out, I’m working less on Leibniz and more on other figures, such as Bacon, Hobbes,
and Spinoza. I’m also working on a project on the new philosophies (in the plural) that sought to replace
Aristotelianism, including many figures now virtually unknown, the struggle among the new philosophies and the
debates over the value of novelty.

But still, I find Leibniz fascinating. I was initially attracted to Leibniz precisely because I found him so difficult to crack. Monads? What are they? And why would anyone as smart and sensible as Leibniz seems to have been propose something so apparently outlandish? They seem particularly outrageous when you reflect on the fact
that Leibniz was a serious physicist. A number of conservation laws we now accept—the conservation of momentum and his conservation of what we now call energy—first appear in his writings. I wanted to see if I could figure out
where monads come from. Part of the answer comes in understanding the views that he was opposing, such as
the Cartesian conception of body as extended substance. But much of the answer comes from understanding
how he came to posit them. My first surprise was the discovery that he didn’t always believe in a world of monads.
This is still controversial, but it seems evident to me that during much of his career, including the years in which
he wrote the ‘Discourse on Metaphysics’ and the Correspondence with Arnauld, monads were not in the picture.

Instead, he believed in a world of corporeal substances, understood like tiny animals, living things that were
composed of body and soul. What motivated this metaphysics was the basic commitment that the world had to be
made up of genuine individuals, something largely missing from the Cartesian world of extension. But sometime
in the mid- and late 1690s, when Leibniz was around 50 years old, I would claim, considerations around the idea
of what it meant to be a genuine individual forced Leibniz to give up corporeal substances as the basic building
blocks, and adopt the theory of monads. The theory of monads did not emerge full blown from his philosophical
imagination; over the years that followed its original introduction in the late 1690s Leibniz spent considerable
effort working out the details of the monadological metaphysics. I find it fascinating to watch it grow and develop.

3:AM: Leibniz never published his ‘Monadology’ and seemed to have kept it well hidden, didn’t he? What was he attempting to do in this work? And why do you think he didn’t publish it?

DG: The ‘Monadology’ was written in 1714, at the request of some correspondents who were very confused
about the theory of monads. We now have a great number of Leibniz’s texts, both published and unpublished,
and freely mix them when we are interpreting his thought. But his contemporary readers had only the published
texts, unless, like de Volder or Des Bosses, they were lucky enough to engage Leibniz in extensive
correspondence. And what is extraordinary is that the theory of monads barely appears in the published writings.
When in 1714 he was asked to explain his theory, he took the request seriously, and produced two writings: the
‘Monadology’ and the ‘Principles of Nature and Grace.’ Neither was published during Leibniz’s lifetime, but it
was only the latter that was sent to anyone, so far as we know. The ‘Monadology’ was only published in 1720 and
1721, in German and Latin translations. How the editors got the manuscript is still something of a mystery.

Why did he write it? My conjecture is this. Leibniz had discussed monads with some of his correspondents, most
notably de Volder and Des Bosses, but in a sort of unsystematic way, in response to their questions and to the
demands of the moment. Remember here that Leibniz was not a professional philosopher; he was involved in
many, many other projects of all sorts, and metaphysics was just one of his many
interests. Leibniz’s metaphysical thought had evolved considerably since the last attempts at systematic
development in the 1690s, and I suspect that he took this opportunity to set out his thought in a systematic way.
(It is important to note here that ‘Monadology’ was not Leibniz’s own title for the work.) Why did Leibniz choose to
distribute the ‘Principles’ and not the ‘Monadology’? There are lots of conjectures, but I honestly don’t know. And
why did he choose not to publish either? No doubt in part because he thought that the theory was so distant from
common sense that it would not be accepted. But, I suspect, it was also in part because he wasn’t entirely
satisfied with it. The large gap in the ‘Monadology’ is the relation between the world of monads and the world of
bodies. While this is an issue that is taken up in some of the correspondences, it is entirely missing in the
‘Monadology’. I suspect that he felt reluctant to make his theory public until this problem was resolved. But this is
just a conjecture.

3:AM: I think you argue that Leibniz was unlike Descartes and Spinoza in that he wasn’t someone who thought you had to build a totalizing system but rather he worked in all sorts of different domains and areas. He sounds like an early version of the modern intellectual landscape of specialization, where, even if there is an assumption of a single world, there’s no obligation to know everything in order to study some of its parts in isolation. Is this right? Could you say something about this aspect of Leibniz?

DG: There is some truth to that. Leibniz did think that everything fits together, and that there is a kind of harmony
among the many different parts of his world view, which included metaphysics, physics, logic, mathematics, the
life sciences, including medicine, alchemy, history, politics, geology, and on and on and on. But, at the same time,
he never wrote a single work that can be said to represent the canonical statement of his views, such as
Descartes’ Meditations or Principia, or Spinoza’s Ethics. Students are often sent to the ‘Discourse on
Metaphysics’ or ‘Monadology’, but these are not really comparable to the canonical writings of other contemporary
philosophers: both were withheld from publication, and there is reason to think that he was not entirely satisfied
with either. Leibniz’s writings tend to be shorter and focus on bits of his system, presented in such a way that they
can be accepted piecemeal, without having to accept his whole framework. Good examples of that are the ‘New
System’ and the ‘Specimen Dynamicum’, both published in 1695. The one argues for the hypothesis of pre-
established harmony, and the other outlines features of his dynamics, the theory of force in physics, offering
focused development of a portion of his thought. And both were published in learned journals.

It is very significant that Leibniz began writing at just the moment that the learned journal was invented, and
virtually all of his publications in philosophy, physics, and mathematics were in such journals. His way of working
was very well suited to that form, and perhaps shaped by it. This is, indeed, something that resonates with
contemporary philosophy. A David Lewis may well have a larger systematic view in mind when publishing a
journal article, but nevertheless, his articles are focused on certain well-defined points and can be appreciated
independently of the larger framework into which they may be fit.

3:AM: This period was full of great metaphysical schemes and some, like Descartes, thought that you had to
have this grounding in order for physics to work. I think you say he had a go at Galileo for being remiss on this.
Was Descartes’ view the typical approach to physics at the time?

DG: Yes and no. In a way, Descartes’ idea that physics needs grounding was a reflection of what was going on in
the schools. In the Aristotelian manuals of natural philosophy, one always began with general truths about space,
time, motion, and the grounding of physics in matter, form and privation. Descartes’ metaphysical foundations
were not the same as the Aristotelian foundations, but the idea of foundations was, in a sense, borrowed from
what he had learned at school. Galileo took a very different view. Unlike Descartes, he didn’t claim to know the
nature of body, and never articulated anything he called a law of nature. For him it was mathematics that was
primary. In his Two New Sciences, Galileo sought to display the mathematical structure in nature. In this respect,
though, he was following a path within earlier scientific thought different from the one Descartes chose to follow.
Descartes was a natural philosopher or physicist, which meant that all knowledge must be grounded in knowledge
of the true underlying causes. Mixed mathematics, which included positional astronomy, optics, music and
mechanics, on the other hand, attempted to give a mathematical description of nature without descending to the
level of physics, which treated the real causes (efficient, final, material, and formal) that govern the world. But this
mathematical description was not physics, for Descartes, and he criticized Galileo for that reason. And so, for
example, Descartes rejected Galileo’s account of free fall because Galileo was ignorant of the real cause of the
free fall of heavy bodies. A physics of heavy bodies could only be done, Descartes thought, if we know what is the
true cause of gravitation. Descartes thought that he had the true cause of heaviness and free fall—the motion of a
vortex of subtle matter surrounding the earth. But unfortunately, the causal account was too complicated for him
to be able to derive a mathematical account of the relation between time and distance fallen when a heavy body
falls.

3:AM: You write about Descartes’ program for philosophy as a dead end, one that failed and was doomed from the start. Was this down to its scope being too wide, compared to Galileo and Newton? Why do you think that despite this rather damning summary we miss something important if we don’t see each of his arguments and doctrines within the full context of his system?

DG: I wouldn’t say that Descartes’ program was doomed “from the start.” It was a reasonable program to
undertake at that moment. But then, so was Galileo’s rather different and incompatible program. Indeed, in
Descartes and Galileo we have a lovely instance of what Kuhn should have recognized as incommensurable
paradigms. Descartes and Galileo could perfectly well understand what the other was doing; there was no
incommensurability in that sense. But they had incompatible epistemic values. For Galileo what was important
was mathematical structure. If you could get a causal story as well, that was even better, but secondary to the
mathematics. For Descartes, on the other hand, it was the causal story that was important. He was quite satisfied
with the causal story he could tell about free fall, and it didn’t concern him so much that he couldn’t give a
mathematical account of free fall. From the point of view of a contemporary, though, it wasn’t obvious which of
these two incompatible programs should be adopted.

In the end, the Galilean program seemed to win out over the Cartesian program, though perhaps not for some
time. (The Galilean program informs Newton’s physics, and the Cartesian approach informs Leibniz’s. But both the Newtonian and the Leibnizian programs were live well into the eighteenth century.) At the same time, you are right, we do
miss something important in Descartes if we don’t understand the way in which his thought embraces a certain
view of the physical world and certain methodological views about the relation between physics and metaphysics.

3:AM: As well as metaphysics and physics the period is also important in what it developed in terms of morality
and politics. Hobbes and Spinoza are important but contrasting figures you’ve written about. Hobbes thought
physics grounded ethics and politics, didn’t he? Can you say something about Hobbes’s materialism and his idea
of human nature, and how it contrasts with Spinoza’s vision?

DG: Actually, in many ways Hobbes and Spinoza are quite similar. Hobbes did indeed think that politics was
grounded in physics. (There is a significant debate over whether Hobbes had an ethics, properly speaking.) After
a preliminary essay on logic, what we might now call epistemology, his system begins with physics or natural
philosophy. This is the project of his De corpore. The physics, in turn, grounds a theory of the human being, a
physical object of particular interest. This is the project of De homine. And finally, there is the politics of De cive,
where he discusses how human beings come together to form the commonwealth. The Leviathan is a popular
presentation of the system.

Spinoza’s system is not altogether different. While there is an element of physics (in my view, inspired more by
Hobbes than by Descartes), his system begins in part I of the Ethics with metaphysics, God or nature. But very
quickly we are in the territory of the human being and its affects in parts II and III, an account of human cognition
and the passions that owes a great deal to Hobbes’s account. This, in turn, grounds a politics in part IV (and in
the Tractatus Theologico-Politicus as well) that is obviously inspired by the contractarian politics Hobbes
advanced.

There are important differences, of course. For Hobbes, the ultimate goal is a stable commonwealth; he has little
to say beyond that. For Spinoza, though, the formation of the civil state is a stepping stone toward ultimate
salvation, what he calls beatitude or intellectual love of God. This is something that goes beyond anything in
Hobbes. Connected with this difference is the fact that Spinoza recognizes a domain of thought distinct in some
ways from that of extension, and recognizes a domain of the eternal distinct from that of the temporal. But even
so, the imprint of Hobbes can be seen through much of what Spinoza wrote.

3:AM: You also wrote a little book about Pascal, What Happens After Pascal’s Wager: Living Faith and
Rational Belief. This seems rather different from the other things that you have written in the history of
philosophy. How does it fit in?

DG: In a way it is different, but in many ways it fits nicely with the larger issues that interest me. The early modern
period was filled with talk about God. One might say that God is omnipresent. In Descartes, God guarantees clear and distinct perceptions and grounds the laws of nature; in Spinoza, God is at the foundation of everything; in Leibniz, God’s creation of the best of all possible worlds sets the balance of good and bad in the world. Even Hobbes finds that he has to discuss God: the whole second half of the Leviathan is about God and the Bible. So it is not so
surprising that I would want to take up one of the central figures in religious thought in the period. Pascal’s wager
has always been of special interest to me. I used to work in probability and decision theory, and connected with
that, in the history of probability. The wager is one of the very first attempts to take probabilistic reasoning outside
of the relatively orderly domain of games of chance and apply it to something else.

I am not a believer, but even so I find myself drawn to Pascal’s wager argument. I know that there are some
standard objections to it, and many consider it hopeless. I actually think that it is better than many do. But that’s
not the argument of the book. What intrigues me even more is what happens after the wager. The wager
argument tries to convince the reader that believing in God is a better bet than not: you will win whether or not
God actually exists. Suppose you are convinced, and agree that it would be a good thing to believe in God. How
do you do it? Contrary to what Descartes seems to have held, belief is not voluntary: one can’t simply decide to
believe in God and then do it. And Pascal knows that. What he advises instead is that we should play the part of
the believer, and the belief will come:
Learn from those who were bound like you, and who now wager all they have. … Follow the way by which they
began: they acted as if they believed, took holy water, had masses said, etc. This will make you believe naturally
and mechanically.

I think that Pascal is very astute here psychologically speaking. I’m sure that if you follow his advice, you will
come to belief. But should you trust it? Is it just a matter of self-deception? Or rather, is it that by following this
procedure you will eliminate your prejudices and come to appreciate truths that had escaped you before? This is
what I explore in the book.

While it is connected with my historical interests, this project is more epistemology than history, an exploration of
belief and reason. In the end I am not convinced to take the step Pascal recommends, and I remain an
unbeliever. But I am not as confident as some of my philosophical colleagues that there is decisive evidence
against the theological perspective and that it is positively irrational to believe.
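
Since the wager is at bottom a piece of decision theory, a minimal sketch of its expected-value structure may be useful here. The probability and payoffs below are illustrative stand-ins of my own choosing (Pascal's reward is strictly infinite; a very large finite number is used only so the comparison can be computed), following the gloss of the argument given above.

    # A minimal decision-theoretic sketch of the wager's structure.
    # All numbers are illustrative stand-ins, not Pascal's own.

    def expected_utility(p_god, if_god, if_not):
        """Expected utility of an act, given the probability that God exists."""
        return p_god * if_god + (1 - p_god) * if_not

    p = 0.01      # even a small, nonzero probability (hypothetical value)
    REWARD = 1e9  # stand-in for Pascal's infinite gain
    COST = 10     # small finite worldly cost of the devout life

    eu_believe = expected_utility(p, if_god=REWARD, if_not=-COST)
    eu_disbelieve = expected_utility(p, if_god=0, if_not=COST)

    print(f"EU(believe)    = {eu_believe:,.2f}")
    print(f"EU(disbelieve) = {eu_disbelieve:,.2f}")
    # For any nonzero p, the enormous stake swamps the finite costs, so
    # believing comes out the better bet -- the shape of the comparison
    # Garber summarizes above.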

3:AM: Contemporary philosophy seems to be much more like these early moderns in that there is x-phi and cross-disciplinary work in physics, history, ethics, politics, law, philosophy of mind etc etc. Philosophy seems less
isolated from the rest of the academy, in both humanities and sciences than it was in the near past, say in the
seventies. Is this something you’d agree with? How do you assess the state of philosophy at the moment and how
does it compare with its state in the early modern times?

DG: It’s complicated. I was there in the 1970s, as a grad student and as a young faculty member. X-phi didn’t
exist then, but even so, there was significant cross-disciplinary work. Quine, whom I mentioned earlier, had his
program for epistemology naturalized. More generally, he argued that philosophy was continuous with the natural
sciences. This, in a way, was continuous with the Logical Positivist tradition out of which he came. They also
thought that all real knowledge was scientific, but unlike Quine, they recognized a special kind of knowledge,
logical knowledge, which is where Philosophy fit. Quine was, in a way, logical positivism without the
analytic/synthetic distinction. And there was also the program of History and Philosophy of Science (HPS), which
sought to integrate philosophy of science with history of the development of the sciences. When I arrived at the
University of Chicago in the fall of 1975, I immediately joined the Committee on the Conceptual Foundations of
Science, an amalgam of philosophers, historians, and active participants from a variety of the sciences, including
Mathematics, Physics, Biology, Astrophysics and Anthropology. It was a lively group. Though there were some
isolationists, there were many of us, even back then in the Dark Ages, who saw Philosophy as a part of a larger
intellectual whole.

But in an important sense, this is not exactly like it was in the early modern period. Philosophy as we now practice
it, as a discipline with an identity distinct from that of the sciences, did not exist then. Philosophy was at the center
of the curriculum for students in the “arts” curriculum, the years preceding specialization in Theology, Medicine, or
Law. Philosophy generally had four parts: logic, ethics, metaphysics, and natural philosophy or physics. Physics
included a general part, which treated of notions like space, time, matter, form, and privation, and a specific part,
which included cosmology, terrestrial physics, and living things, including plants, animals, and humans. When a
Descartes or a Leibniz did philosophy, they didn’t limit themselves to what modern philosophers thought about,
issues in epistemology or metaphysics or ethics or politics. The entire world—including what we think of as the
scientific world—was part of their domain. Unlike modern philosophers, they didn’t have to take what experts in
the sciences give them and work within its parameters: they could and did dabble in all branches of systematic
knowledge. This is an important difference from philosophers today; even with the new interdisciplinarity, there is
a sense in which there are different professional expertises that we have to recognize. Though philosophers have
learned to cross the lines, there are still lines to cross, professional associations, departments and so on that
guard their special authority. They didn’t exist in the early modern period.

It is hard to say exactly when that changed. In some ways you can see a distinction between philosophy and the
sciences emerging in Locke or Berkeley; perhaps it didn’t fully emerge until later in the eighteenth century with
Hume and Kant. In a way, with the new interdisciplinary spirit of philosophy we are working our way back to what
we had in the days of Descartes and Galileo, Leibniz and Newton. But there is still a ways to go.

What's Wrong With Public Intellectuals?
By Mark Greif

For years, the undigitized gem of American journals had been Partisan Review. Last year its guardians finally brought it online. Some of its mystery has been preserved, insofar as its format remains hard to use, awkward, and hopeless for searches. Even in its new digital form it retains a slightly superior pose.
The great importance of Partisan Review did not arise recently from its inaccessibility. The
legendary items that first ran in its pages can be found in any good library, in collections by
contributors who met as promising unknowns: Mary McCarthy, Clement Greenberg, Hannah
Arendt, Saul Bellow, Elizabeth Hardwick, Leslie Fiedler, or Susan Sontag. Alongside those
novices, PR had the cream of Europe, in translation or English original: Sartre, Camus, Jean
Genet, Beauvoir; Ernst Jünger, Karl Jaspers, Gottfried Benn; plus T.S. Eliot, Orwell, Auden,
Stephen Spender.

Partisan Review obtained the first work of the up-and-coming and often the best work of the
famous, though it was notoriously underfunded and skeletally staffed. It gave readers the first
glimpse of much of what would form the subsequent syllabus of midcentury American literature.

But Partisan Review has indeed mattered in more recent decades for its position in a debate to
which its absence from view has been altogether relevant. More than any other publication of the
mid-20th century, the journal has been a venerable stalking horse recruited into a minor culture
war. The strife concerned what’s awkwardly called "public intellect"—that is, the sphere in which
"public intellectuals" used to thrive. "Public intellectuals," as Russell Jacoby defined them near
the start of this culture war, in 1987, are simply "writers and thinkers who address a general and
educated audience." The customary sally was that PR exemplified a bygone world of politically
strenuous, culturally sophisticated, and intellectually exacting argument standing in opposition to
the university, because it was addressed to a broad, unacademic readership. It was said to be both
more usefully influential and more rigorous than any forum we have now, reflecting poorly upon
today’s publications and editors. Partisan Review stood as the phantom flagship of "what we have
lost" since the late 1960s (the period in which the magazine began, not incidentally, its long-
lasting decline).
A dozen years ago, as a graduate student, I sat down and started to read through the run of Partisan
Review on paper in the Yale library’s periodicals room. I completed the issues from 1934, the
year of its founding, to 1955, when it started to lose energy. It was a thing of wonder, retaining a
taut momentum for a score of years—powerful enough to engulf, for a month or two, a twenty-
something who was otherwise agitated by the imminent Iraq invasion and eager to interrupt his
studies to Google news from CNN, The Nation, The New Republic.

I read to two purposes besides curiosity. Foremost was the academic. I had begun research that
would later ground a book it took me 10 years to get into shape—an alternative intellectual and
literary history of the mid-20th century. The other purpose, 12 years ago, was starting the small
magazine n+1. I had been something of a sucker for those calls for revived public intellectualism.
Yet I knew how unsatisfactory their resolution had become. My co-founders and I—all of us
planning together this unfunded magazine—imagined a joke headline to express where things
were heading: "Solution to Intellectual Crisis: Senior Scholars Write Op-Eds." Maybe a younger
generation could intervene? Both my library research and our creation of n+1 hid efforts to test
the times. What about intellectual life after the turn of the 21st century? Wasn’t ours still the same
world, rich with possibilities? What had we lost?

The discovery that most stayed with me from that naïve first reading of Partisan Review was that,
yes, it was impossibly good. It was better than I expected or could have imagined, maybe the best
American journal of the century, or ever. There are yearlong streaks one could enter in which
every article in every issue is compelling, from Winter to Fall (when it was quarterly) or
January/February to November/December (when it was bimonthly), and at least one or two items
in each number would be masterworks.

And yet: The precise ways in which it was excellent seemed very different from what was
commonly said about it, or what nostalgists supposed. It especially differed from the supposed
appeal to public-mindedness or a "general reader" as people understand it today. This has
complicated much more for me the sense of "what we have lost"—to a degree that still confuses
me. And whether we have lost anything that mere will, or "outreach" or "engagement," could ever
get back.
If you ask the conditions that allowed Partisan Review to reach greatness—broaching an inquiry into what is necessary for the creation of "public intellect" in general, in the mid-20th century past—you face some unruly historical particulars.
First was the stimulus of the Communist Party USA. The magazine began as a youth-club
publication of the party’s New York bureau. From 1934 to 1937, the editors championed Soviet
dictates for proletarian writing, and the house tone bore the weight of party cant. A 21-year-old
contributor went on maternity leave, and the editors praised her effort "to produce a future citizen
of Soviet America." Then, from 1937, a change in cultural policy led the party to roll up its youth
clubs, and two editors, Philip Rahv and William Phillips, took Partisan Review independent.

Second was the equalizing power of the Great Depression. With global capitalist collapse came
expectation of socialist reconstruction and, more practically, a lot of free time, when so many,
young and old, were underemployed and fractious. As William Phillips wrote in an early issue:
"Most of us come from petty-bourgeois homes; some, of course, from proletarian ones. But the
gravity of the economic crisis has leveled most of us (and our families) to meager, near-starvation
existence." The editors held open houses to workshop fiction and poetry on weekday afternoons.
Political independence for Partisan Review in 1937 still meant revolutionary socialism, just
without Stalin. Freed from the party, though, its first-generation Jewish founders linked up with
young American intellectuals, like Dwight Macdonald and Mary McCarthy, educated at Yale or
Vassar, who brought in money and connections to keep the magazine afloat. The golden age of
radicalism in politics, high modernism in aesthetics, and arrogance above all, which intellectual
historians have most admired in PR, was launched by this combined demographic.

Then, World War II. This eruption pitched intellectual Europe into the New Yorkers’ laps.
Internationalism had been a fixed principle already. But one expected the most important changes
to occur on the Continent and its greatest minds to stay put there. By the early 1940s, the bulk of
established European Jewish, leftist, or simply antifascist scholars and artists were on American
shores, as refugees in the orbit of New York or Hollywood. Most were eager to meet any
American group that would commit to them the same high opinion and intellectual interest they
felt for themselves—and the "New York Intellectuals" were Europhiles. Though the editors
of Partisan Review split internally in 1940 over U.S. entry into the war (with the renegades going
to two new journals, politics and Commentary), access to the ruin of Europe made them uniquely
strong. It justified the juxtaposition of big names, internationally, with unknown Jews from the
Bronx.

It also made the magazine institutionally potent. The combination of knowledgeable, left-wing
anti-Communism with firsthand possession of a European émigré inheritance, all hammered
together through American literary and artistic networks in the great metropolis, was a rare alloy.
And as the United States emerged as the lone Western superpower, and its State Department
sought to woo a rebuilt Europe away from the Soviet alternative, this metal came increasingly into
demand. PR gained a kind of establishment support. This source of its success has been regretted
by historians as often as the magazine’s outsized authority has been saluted. To critics, it was as if
the recalcitrant stuff of critical thought had been weaponized. The establishment link marks the
somewhat uncomfortable side of Richard Hofstadter’s famous statement in 1963’s Anti-
Intellectualism in American Life that Partisan Review, against much philistinism elsewhere, had
become a "house organ of the American intellectual community." But had the house organ
become a consensus mouthpiece?

All of this has been well chewed over by those who gnaw on the period. Some mourn PR’s
radicalism, while others mourn its supposed liberal Americanism and patriotism. Often what is
said is that the Partisan Review’s writers and commentators had a courage and freedom that we do
not. And yet—oddly enough—these latter eulogies focus specifically on freedom from the
university. Hail, brave ghosts who address a "public" of "educated general readers" on a sunlit
plaza of the mind, undamaged by specialization and professionalism, pretension and ideology!
("Not to be overly dramatic, but I sometimes think," reflected Russell Jacoby on more recent
times, "we face the rise of a new intellectual class using a new scholasticism accessible only to the
mandarins, who have turned their back on public life and letters.")
Personally, I do not think that any of these old conditions or attunements preclude, by their
absence in our own time, a new birth of public intellect just as great as that of the earlier period.
Nor do I think "the university" is to blame for the change that does exist. If I try to say what really
does seem meaningfully different in our moment, I’m led elsewhere. Something has gone wrong
in our collective idea of the "public."

When The Chronicle Review invited me, with the spur of Partisan Review’s digital reappearance, to compare it with the "state of polemic" now, in 2015, I confess my heart sank.
Fools rush in where angels fear to tread, and it is so hard to distinguish in your own time what is
temporary rubble and what is bedrock once you get the historical jackhammer whirring. Yet I do
feel certain that quite common, well-intentioned arguments about "public writing" and polemic
now are misguided, and the university-baiting is annoying. And this is not unrelated to the ways
elegists are wrong about Partisan Review.
So: Here are some of those things that nostalgists get wrong about Partisan Review, at least in its
major phase from the 1930s through the 1950s. First is that it deradicalized or became merely a
political vehicle of the Establishment. It’s true that it ceased to be Trotskyist, and it supported the
U.S. war against the Nazis, but it still retained a vision of future socialism that would make The
New York Times’s hair stand on end. The point, I think, is that one need not always water down
stringent politics to be taken seriously by power; better to be superior and truthful on all fronts
and let compromisers come to you.

Second is that it wholly "Americanized," coming to think of its aspirations as nationalistically American, and the United States as the true source of authority and world thought. On the
contrary, Europe remained the other world—the greater world—which the New York Intellectuals
continued to view as the Olympus they must try to live up to and steal fire from. Our much newer
solipsism, in which American thought, predominating globally, has no other geographical place to
look up to and emulate, seems quite new (and perilous)—not at all the situation in evidence in
1945 or 1960.

Third, though most complicated, is the idea that Partisan Review and its thinkers and theorists
made their lives outside of the university. This might have been true among wealthy belles-
lettrists and little magazine modernists of 1920; it was not true by 1950. A more democratic layer
of intellect meant fewer thinkers with "independent means"—which meant that nearly everybody
eventually had to teach. Irving Howe became a professor at Brandeis, Daniel Bell at Harvard;
Lionel Trilling was already at Columbia, with F.W. Dupee, and both Sidney Hook and William
Barrett were at NYU. Hannah Arendt spent her career at the New School, Saul Bellow much of
his at the University of Chicago, Leslie Fiedler at the University of Montana, while even poor
Delmore Schwartz taught creative writing everywhere he could. Even PR’s editors, Philip Rahv
and William Phillips, moved into teaching. The magazine itself wound up supported in later years
by Rutgers and then Boston University.

At the arrival of the Great Recession, in 2007-8, I ruefully reminded friends and students that the
Depression of the 20th century, despite its miseries, had been surprisingly good for intellect. I
think we have all the dislocation, injustice, and economic inequality we need, when we look at our
America—and the classes of writers, teachers, arguers, dreamers, "petty bourgeois" or proletarian,
have indeed even been flattened and equalized a bit, in their salaries and prospects. Maybe they
need to be flattened even more, to truly take the measure of popular life in America. But the
outrages on offer are surely outrageous enough. As for depoliticization: Students stew in
philosophies of radical social change on one side, and observations of the corruption in the present
order on the other. I don’t know anyone’s bookshelf without its Marx and Wollstonecraft, its
Chomsky and Naomi Klein. The thing we’ve lost is really party politics, and it has been replaced
by music-centered subculture as the main beacon for the organizing (and self-organizing) of
youth. Scratch through the surface of any little magazine of the last 30 years and you’ll find the
inspiration of ’zines and DIY punk rock (hip hop may serve a parallel function through different
channels). But that may be a subject for another occasion.

Which leaves the question of the university. The economics of higher education in the
contemporary moment may be bad for many of us—teachers, students, and temporary passers-
through. But, again—this should not be a priori bad for public intellect or public debate. Quite the
opposite. A large pool of disgruntled free-thinking people who are not actually starving, gathered
in many local physical centers, whose vocation leads them to amass an enormous quantity of
knowledge and skill in disputation, and who possess 24-hour access to research libraries, might be
the most publicly argumentative group the world has known.

And yet the philosophical and moral effect of "universitization" remains, I think, the most poorly
explained phenomenon of intellect from the late decades of the 20th century up till now. I don’t
mean that we don’t know the demographic shifts or historical causes, ever since the GI Bill. We
have enough statistics. I mean that we don’t have convincing speculative histories or insightful
accountings of the qualitative effects on ideas.

Confusingly, the "universitization of intellect" names overlapping changes. The most important
yet underappreciated was the process by which nearly all future writers of every social class came
to pass through college toward the bachelor’s degree. Another was the progress by which more
writers, including journalists, reviewers, poets, and novelists, as well as critics and historians and
social scientists, drew parts of their livelihood from periodic university teaching, whether they
were tenured professors or not. (This had clearly begun already by the "golden age" of public
intellect, in the 1940s to 1960s, as I’ve suggested.) The third, a corollary, was the vocational
integration in which formerly independent literary arts (fiction, poetry, even cultural criticism)
came to be taught as for-credit courses and degree-granting programs—with a credentialing spiral
whereby newly minted critics and intellectuals needed to have taken those courses and degrees in
order to pay rent by teaching them.

Nevertheless, the seriousness, intensity, and nobility of the university did not therefore get
communicated back outward, through writers’ remaining ties to the commercial sphere. The
university remained an accident, a blemish on the face of literature. The distaste for academia,
judging it essentially compromising to writers’ and critics’ practice, remains a compulsory conceit
for maintaining or resuming a place in commercial work. One must simultaneously differentiate
oneself from the university spiritually and embed oneself within it financially. I’d venture that the
long-term trend of the university, for culture, has thus been to be much more encompassing and
yet seem to matter less to that ultimate phantasm, "the real world." "Matter," that is, visibly—to
identity, in authority of open expertise in the arts and humanities, or pride of tone or university
style—whenever university skills (and salary) facilitate extra-university utterance. (This does not
answer what sorts of unconscious influences and determinations of art and ideas may be
happening underneath.) But this was never a foregone conclusion.

Here’s a personal confession. At the start of n+1, our conception anticipated, in fact depended
upon, striking a chord among underappreciated academics. We founders were in our late 20s.
We had graduate degrees (fistfuls of M.F.A.’s, M.A.’s, and even one M.Phil. among us), but not
jobs. Looking upward to those who had gone further, the languishing professoriate’s reservoir of
erudite rage seemed a natural resource waiting to be unlocked. I, for one, was certain that if we
recreated a classic public-intellectual mode, by sticking difficult argument in the public eye—
keeping it elevated, superior, but unfattened by "literature reviews" and obeisances to mentors—
junior professors would flock to our banner and create classic public-intellectual provocations like
those of yore. Just think of the ranks of assistant professors, even newly tenured associates, all
frustrated, all possessed of backlogs of fierce critical arguments (with bankers’ boxes of research),
throwing caution to the wind and freeing these doves and falcons from their cages. Fly free,
beautiful birds!
The huge personal disappointment—and it puzzled me for a long time—was that junior professors
did not, by and large, give us work I wanted to print. I knew their professional work was good.
These were brilliant thinkers and writers. Yet the problems I encountered, I hasten to say, were
absolutely not those of academic stereotype—not esotericism, specialization, jargon, the
"inability" to address a nonacademic audience. The embarrassing truth was rather the opposite.
When these brilliant people contemplated writing for the "public," it seemed they merrily left
difficulty at home, leapt into colloquial language with both feet, added unnatural (and frankly
unfunny) jokes, talked about TV, took on a tone chummy and unctuous. They dumbed down, in
short—even with the most innocent intentions. The public, even the "general reader," seemed to
mean someone less adept, ingenious, and critical than themselves. Writing for the public
awakened the slang of mass media. The public signified fun, frothy, friendly. And it is certainly
true that even in many supposedly "intellectual" but debased outlets of the mass culture, talking
down to readers in a colorless fashion-magazine argot is such second nature that any alternative
seems out of place.

This was emphatically not what the old "public intellect," and Partisan Review, had addressed to
the public. Please don’t blame the junior professors, though. (Graduate students, it must be said, did much better for n+1, as they do still.)

Suppose we try a different, sideways description of the old public intellectual idea. "Public intellect" in the mid-20th century names an institutionally duplicitous culture. It drew up accounts
of the sorts of philosophical, aesthetic, and even political ideas that were discussed in universities
more than elsewhere. It delivered them to readerships and subscriberships largely of teachers and
affiliates of universities—in quarterly journals funded by subscriptions, charitable foundations,
and university subsidies. But the culture it made scrubbed away all marks of university affiliation
or residence, in the brilliant shared conceit of a purely extra-academic space of difficulty and
challenge. It conjectured a province that had supposedly been called into being by the desires, and
demands, of "the real world." And this conceit, or illusion, was needed and ultimately embraced
on all sides—by the writers, by the readers, by the subsidizers—even, in fact, by parts of that "real
world" itself, meaning bits of commerce, derivative media, politics, and even "official"
institutions of government and civil society. The collective conceit called that space, in some way,
into being.
But the additional philosophical element that made this complicated arrangement work, and the
profound belief that sustained the fiction, on all sides, and made it "real" (for we are speaking of
the realm of ideas, where shared belief often just is reality), was an aspirational estimation of "the
public." Aspiration in this sense isn’t altogether virtuous or noble. Nor is it grasping and
commercial, as we use "aspirational" now, mostly about the branding of luxury goods. It’s
something like a neutral idea or expectation that you could, or should, be better than you are—and
that naturally you want to be better than you are, and will spend some effort to become capable of
growing—and that every worthy person does. My sense of the true writing of the "public
intellectuals" of the Partisan Review era is that it was always addressed just slightly over the head
of an imagined public—at a height where they must reach up to grasp it. But the writing seemed,
also, always just slightly above the Partisan Review writers themselves. They, the intellectuals,
had stretched themselves to attention, gone up on tiptoe, balancing, to become worthy of the more
thoughtful, more electric tenor of intellect they wanted to join. They, too, were of "the public," but
a public that wanted to be better, and higher. They distinguished themselves from it momentarily,
by pursuing difficulty, in a challenge to the public and themselves—thus becoming equals who
could earn the right to address this public.

Aspiration also undoubtedly included a coercive, improving, alarmed dimension in the postwar
period. The public must be made better or it would be worse, ran the thought. The aspiration of
civic elites was also always to instruct the populace, to make them citizens and not "masses." Both
fascism and Sovietism had been effects of the masses run wild (so it was said). The GI Bill, and
the expansion of access to higher education after 1945, funded by the state, depended on an idea
of the public as necessary to the state and nation, but also dangerous and unstable in its
unimproved condition. This citizenry would fight for the nation. It would compete, technically
and economically, with the nation’s global rivals. And it must hold some "democratic" vision and
ideology to preserve stability. Even the worst elitists could agree to that. Hence the midcentury
consensus that higher education should "make," or shape, "citizens" for a "free society"—which
one hears from the best voices, and the worst, from that time.

Those of us attached to universities can feel, as strongly as anyone, how ideologies of the "public"
have changed drastically from the older conception. After all, it’s on the basis of this increasingly
servile, contemptuous, and antinational vision of "the public" that universities are being politically
degraded, in vocational rationales for the humanities and the state’s lost interest in public higher
education. The national indifference, from the top down, to the mass, the many, the citizenry, the
public, from the 1970s to the present, expresses a late discovery that the old value and
fearsomeness of the public had been erroneous. The mass public was no longer threatening, or
needed. After Vietnam, the public was no longer needed for military service, as an all-volunteer
army would fight for pay without inspiring protest. The public was no longer needed for mass
production, as labor was exported. A small elite of global origin, but funneled through American
private universities, would design all the new technological and financial instruments that could
keep U.S. growth and GDP high in aggregate, though distributed unequally.

Protest, not stability, seemed to arise in the late 1960s from a mass national education that put
students and professors together too comfortably in the universities, especially at the best of the
public systems, as in California. Nor would the rest of the public rise up and make trouble, even
as it was left behind for the sake of the new order. A scary and capable democratic public would
not find a voice in TV, or Hollywood, or the forms of communication that flattered the public as if
we liked to be dumb and powerless, nowhere coercing it with intellectual aspiration. All that the
American public, the many, was needed for was continued consumption—as long as that
demand did not place too much burden on the state for support—and this could be accomplished
in the short term by loose credit. And even as wages stagnated, goods appeared, in the form of
new, underpriced fruits of globalized labor, their miraculously low costs to be put onto Visa and
MasterCard. I am only recounting a history that we have all learned to experience as cliché.

If all that’s so, there’s little enough that intellectuals in any location can undo immediately with a
flourish of rhetoric or a stroke of a pen. But insofar as a debate about priorities—and ideals—will
continue anyway in our little corner of the world, we ought to try to set it the right way round. The
idea of the public intellectual in the 21st century should be less about the intellectuals and how, or
where, they ought to come from vocationally, than about restoring the highest estimation of the
public. Public intellect is most valuable if you don’t accept the construction of the public handed
to us by current media. Intellectuals: You—we—are the public. It’s us now, us when we were
children, before the orgy of learning, or us when we will be retired; you can choose the exemplary
moment you like. But the public must not be anyone less smart and striving than you are, right
now. It’s probably best that the imagined public even resemble the person you would like to be
rather than who you are. (And it would be wise for intellectuals to stop being so ashamed of ties to
universities, however tight or loose; it’s cowardly, and often irrelevant.)

If there is a task, it might be to participate in making "the public" more brilliant, more skeptical,
more disobedient, more capable of self-defense, and more dangerous again—dangerous to elites,
and dangerous to stability; when it comes to education, dangerous to the idea that universities
should be for the rich, rather than the public, and hostile to the creeping sense that American
universities should be for the global rich rather than the local or nationally bounded polity. It is
not up to the public intellectual alone to remake "the public" as a citizenry of equals, superior and
dominant—that will take efforts from all sides. But it is perhaps up to the intellectual, if anyone,
to face off against the pseudo-public culture of insipid media and dumbed-down "big ideas," and
call that world what it is: stupid.

Lincoln and Marx


The transatlantic convergence of two revolutionaries.
by Robin Blackburn

Abraham Lincoln, as president, chose to reply to an “Address” from the London-based
International Workingmen’s Association. The “Address,” drafted by Karl Marx, congratulated
Lincoln on his reelection for a second term. In some resonant and complex paragraphs, the
“Address” heralded the world-historical significance of what had become a war against slavery.
The “Address” declared that victory for the North would be a turning point for nineteenth-century
politics, an affirmation of free labor, and a defeat for the most reactionary capitalists who
depended on slavery and racial oppression.
Lincoln saw only a tiny selection of the avalanche of mail he was sent, employing several
secretaries to deal with it. But the US Ambassador in London, Charles Francis Adams, decided to
forward the “Address” to Washington. Encouraging every sign of support for the Union was
central to Adams’s mission. The Emancipation Proclamation of January 1863 had made this task
much easier, but there were still many sections of the British elite who sympathized with the
Confederacy and some who favored awarding it diplomatic recognition if only public opinion
could be brought to accept this.

The “Address” carried, beside that of Marx, the signatures of several prominent British trade
unionists as well as French socialists and German social democrats. The Ambassador wrote to the
IWA, explaining that the president had asked him to convey his response to their “Address.” He
thanked them for their support and expressed his conviction that the defeat of the rebellion would
indeed be a victory for the cause of humanity everywhere. He declared that his country would
abstain from “unlawful intervention” but observed that “The United States regarded their cause in
the present conflict with slavery-maintaining insurgents as the cause of human nature, and they
derived new encouragement to persevere from the testimony of the working men of Europe.”

Lincoln would have wished to thank British workers, especially those who supported the North
despite the distress caused by the Northern blockade and the resulting “cotton famine.” The
appearance of the names of several German revolutionaries would not have surprised him; the
defeat of the 1848 revolutions in Europe had swelled the flood of German migrants arriving in
North America. At an earlier date — in 1843 — Marx himself had thought of emigrating to
Texas, going so far as to apply to the mayor of Trier, his birthplace, for an emigration permit.

What path would world history have taken if Marx had become a Texan? We will never know.
What we do know is that Marx remained in touch with many of the exiles. His famous essay on
“The Eighteenth Brumaire of Louis Bonaparte” was first published in New York in German. Not
all German émigrés were radicals, but many were. With their beer halls, patriotic songs, and
kindergartens, they helped to broaden the distinctly Puritan culture of Republicanism. They had
been educated to despise slaveholding, and eventually nearly two hundred thousand German
Americans volunteered for the Union army.

There was an affinity between the German democratic nationalism of 1848 and the free labor
doctrine of the newly-established US Republican Party, so it is not surprising that a number of
Marx’s friends and comrades not only became staunch supporters of the Northern cause but
received senior commissions. Joseph Weydemeyer and August Willich, both former members of
the Communist League, were promoted first to the ranks of Colonel and then to General.

Lincoln may have recognized the name Karl Marx when he read the IWA “Address,” since Marx
had been a prolific contributor to the New York Daily Tribune, the most influential Republican
newspaper of the 1850s. Charles A. Dana, publisher of the Tribune, first met Marx in Cologne in
1848 at a time when he edited the widely read Neue Rheinische Zeitung. In 1852, Dana invited
Marx to become a correspondent for the Tribune. Over the next decade he wrote — with some
help from his friend Engels — over five hundred articles for the Tribune. Hundreds of these
pieces were published under Marx’s name, but eighty-four appeared as unsigned editorials. He
wrote on a global range of topics, sometimes occupying two or three pages of a sixteen-page
newspaper.
Once the Civil War began, US newspapers lost interest in foreign coverage unless it directly
related to the war. Marx wrote several pieces for European papers explaining what was at stake in
the conflict and contesting the claim, widely heard in European capitals, that slavery had nothing
to do with the conflict. Important sections of the British and French elites had strong commercial
ties to the US South, buying huge quantities of slave-grown cotton. But some European liberals
with no direct link to the slave economy argued that secession by the Southern states had to be
accepted because of the principle of self-determination. They attacked the North’s option for war
and its failure to repudiate slavery.

In Marx’s eyes, British observers who claimed to deplore slavery yet backed the Confederacy
were simply humbugs. He attacked the visceral hostility to the North evident in the Economist and
the Times (of London). These papers claimed that the real cause of the conflict was Northern
protectionism against the free trade favored by the South. Marx rebutted their arguments in a
series of brilliant articles for Die Presse, a Viennese publication, which caustically demolished
their economic determinism, and instead sketched out an alternative account — subtle, structural,
and political — of the origins of the war.

Marx insisted that secession had been prompted by the Southern elite’s political fears. They knew
that power within the Union was shifting against them. The South was losing its tight grip on
federal institutions because of the dynamism of the Northwest, a destination for many new
immigrants. As the Northwest Territory matured into free states, the South found itself
outnumbered; the North was loath to recognize any new slave states. The slaveholders had
alienated Northerners by requiring them to arrest and return fugitive slaves, yet they knew they
needed the wholehearted support of their fellow citizens if they were to defend their “peculiar
institution.” Lincoln’s election was seen as a deadly threat because he owed Southerners nothing
and had promised to oppose any expansion of slavery.

Marx gave full support to the Union cause, even though Lincoln initially refused to make
emancipation a war goal. Marx was confident that the clash of rival social regimes, based on
opposing systems of labor, would sooner or later surface as the real issue. While consistently
supporting the North, he wrote that the Union would only triumph if it adopted the revolutionary
anti-slavery measures advocated by Wendell Phillips and other radical abolitionists. He was
particularly impressed by Phillips’s speeches in 1862 calling to strike down all compromises with
slavery. He approvingly quoted Phillips’s dictum that “God had placed the thunderbolt of
emancipation” in Northern hands and they should use it.

Marx continued to correspond with Dana and sent him his articles (Dana was fluent in German).
By this time Dana had left the world of journalism to become Lincoln’s “eyes and ears” as a
special commissioner in the War Department, touring the fronts and reporting to the White House
that Ulysses Grant was the man to back. Marx argued in Die Presse in March 1862 that the Union
armies should abandon their encirclement strategy and seek to cut the Confederacy in two. Dana
may have noticed that Grant had reached the same conclusion by instinct and experience. In 1863,
Dana became Assistant Secretary of the War Department.

Marx was delighted when Lincoln — emboldened by the abolitionist campaign and a
radicalization of Northern opinion — announced his intention to issue an Emancipation
Proclamation in January 1863. The Proclamation would make it difficult for the British or French
governments to award diplomatic recognition to the Confederacy. It also allowed for the
enrollment of freedmen in the Union army.

Marx and Lincoln had very divergent opinions on business corporations and wage labor, but from
today’s perspective they shared something important: they both loathed exploitation and regarded
labor as the ultimate source of value. In his first message to Congress in December 1861, Lincoln
criticized the “effort to place capital on an equal footing with, if not above, labor in the structure
of government.” Instead, he insisted, “labor is prior to and independent of capital. Capital is only
the fruit of labor . . . Labor is the superior of capital, and deserves much the higher consideration.”

Lincoln believed that in America the wage laborer was free to rise by his own efforts and could
become a professional, or even an employer. Marx held that this picture of social mobility was a
mirage, and that only a handful could succeed in acquiring economic independence.

For Marx, the wage worker was only partly free since he had to sell his labor to another so that he
and his family might live. But, since he was not a slave, the free worker could organize and
agitate for, say, a shorter working day and free education. Weydemeyer had launched an
American Labor Federation in 1853 which backed these objectives and which declared its ranks
open to all “regardless of occupation, language, color, or sex.” These themes became central to
the politics of Marx’s followers in America.
Lincoln’s assassination led Marx to write a new “Address” from the IWA to his successor, with a
fulsome tribute to the slain president. In this text, Marx described Lincoln as “a man neither to be
browbeaten by adversity, nor intoxicated by success, inflexibly pressing on to his great goal,
never compromising it by blind haste, slowly maturing his steps, never retracing them . . . doing
his titanic work as humbly and homely as heaven-born rulers do little things with the
grandiloquence of pomp and state. Such, indeed, was the modesty of this great and good man that
the world only discovered him a hero after he had fallen a martyr.” However, the tragic loss could
not prevent Northern victory opening the way to a “new era of the emancipation of labor.”

Marx and Engels were both soon troubled by the actions of Andrew Johnson, the new president.
On 15 July 1865, Engels wrote to his friend attacking Johnson: “His hatred of Negroes comes out
more and more violently . . . If things go on like this, in six months all the old villains of secession
will be sitting in Congress at Washington. Without colored suffrage, nothing whatever can be
done there.” Radical Republicans soon came to the same conclusion.

In the immediate aftermath of the war, and thanks in part to the publication of the IWA addresses,
the International attracted much interest and support in the United States.

Marx was putting the finishing touches on Capital: Volume I in 1866–67, and included a new
section at this late stage on the determinants of the length of the working day. The call for an
eight-hour day had emerged as a key demand in several US states. In 1867, the IWA welcomed
the appearance of a National Labor Union in the US, formed to spread the demand as a unifying
goal.

At its first conference the NLU declared: “The National Labor Union knows no north, no south,
no east, no west, neither color nor sex, on the question of the rights of labor.” Within the space of
a year, eight different Northern states adopted the eight-hour day for public employees.

The regions of the United States offered very different possibilities for political action. Only the
presence of Union troops in the South prevented white vigilantes, many of them Confederate
veterans, from terrorizing the freedmen. In Tennessee, South Carolina, and Louisiana, there were
black congresses that drew up a “Declaration of Rights and Wrongs,” insisting that freedom
would be a mockery if it did not entail equal access to buses, trains, and hotels, schools and
universities.
In the North and West, the boldest radicals organized sections of the International; by the late
1860s there were about fifty sections and a membership of perhaps five thousand. In December
1871 the IWA in New York organized a seventy-thousand-strong demonstration of sympathy with
the victims slaughtered in the suppression of the Paris Commune. The throng prominently
featured a black militia called the Skidmore Guards; many trade unionists with their banners;
Victoria Woodhull and the feminist leaders of Section 12; an Irish band; and a contingent
marching behind the Cuban flag. Many of the unions founded at this time included the word
“International” in their name.

But by the early 1870s Northern support for Reconstruction, with its expensive occupation of the
South and its bold affronts to racial prejudice, was beginning to ebb. A wave of corruption
scandals sapped Republican morale. The real problem, however, was that the Republican program
had come apart at the seams. Lincoln had hoped to build a strong and authoritative federal
government in Washington, and thus obtain respect for the rule of law throughout the restored
Union. In Marx’s eyes, Lincoln would have built the sort of “bourgeois democratic republic” that
would have allowed for the emergence of a labor party dedicated to free education, progressive
taxation, and an eight-hour work day.

These hopes were dashed. Lincoln’s assassination, the chaos and reaction of the Johnson
presidency, and the failure of Ulysses Grant, his successor, to impose moral leadership all
undermined or compromised the promise of an authoritative, undivided federal government. Marx
was not surprised by the emergence of “robber baron” capitalists, nor by the bitter class strife they
unleashed. He had expected — indeed predicted — as much.

But the failure of the federal state to impose its authority on the South was another matter, as was
the Northern bosses’ ability to crush strikes by deploying thousands of special constables and
Pinkerton men.

The end of slavery certainly validated the momentary alignment of Lincoln and Marx. During
Reconstruction (roughly 1868–76), freedmen could vote, their children could go to school, and
there were many black elected officials. In the North, there were gains for the eight-hour
movement and the first attempts to regulate the railroad corporations. But something of the
conservative spirit of the antebellum republic, with its aversion to federal taxation, lingered on in
the weakness of the federal power. In an ominous development, the Supreme Court declared that
the progressive income tax, introduced by the Lincoln administration in 1862, was
unconstitutional. Without the income tax, paying for the war would be much harder and future
redistribution impossible. Another retrograde step was a Supreme Court ruling that construed the
promise of equal treatment of “all persons” in the Fourteenth Amendment of 1868 — a measure
introduced to protect the freedmen — as offering protection to the new corporations, since they
were also deemed to enjoy the status of “persons.” The direct result of this decision was to make it
far more difficult for federal or local authorities to regulate corporations (the ruling is still in
force).

Reconstruction ended with a deal between Republicans and Democrats that resolved the
deadlocked Electoral College of 1876 by confirming the fractured authority of the state. This deal
allowed the candidate with fewer votes to enter the White House while requiring the withdrawal
of all federal troops from the South. This gave free rein to the lynch mobs. Within a few months,
Grant himself complained, the federal troops that had been prevented from tackling the Ku Klux
Klan were sent against the railworkers during the Great Strike of 1877, suppressing it at the cost
of a hundred lives. American workers fought back tenaciously, but often on a regional or state-by-
state basis. To many, syndicalism made more sense than the labor party that Marx and Engels
advocated, though Marx’s penetrating analysis of capitalism still had an impact on people as
diverse as Samuel Gompers (founder of the AFL), Lucy Parsons (syndicalist, feminist, founder of
the IWW), and Eugene Debs (Socialist).

The defeat of Lincoln’s vision of a unified, democratic, and authoritative republic was a defeat for
the socialists too. Not for the last time, the genius of the US Constitution, with its multiple checks
and balances, was to frustrate the plans of progressives.

Why Max Weber matters


DUNCAN KELLY

Peter Ghosh

In the past year of major anniversaries, you might have missed the sesquicentennial of one of the greatest German
scholars of the later nineteenth and early twentieth centuries. Max Weber was born in 1864 and died in 1920, but in a
relatively short life, the sheer bulk of what he wrote about with seriousness, purpose and commitment, from agrarian
history to rationality and music, from abstract methodological pronouncements to the workings of the stock market,
from the major world religions to war and revolution, is staggering. Yet apart from academically necessary early
qualifications that required the production of weighty tomes, he never wrote a big book, neither founded nor had any
interest in founding a school, and never cared about the accoutrements of academic fame even as those around him
recognized his presence and power. He was touchy, often suffered from depression, and was diagnosed as a
neurasthenic. He went through the usual arrays of spa treatments, self-diagnosis and drugs, and even dallied with the
anarchist communities in Ascona: all of the rituals of his self-consciously “bourgeois” class and status. Nevertheless he
remained intellectually uncompromising, so that although the impact of his nervous disorders meant he had to resign
from teaching commitments, the freedom this provided allowed him space to pursue a wide array of interests, without
having to conform to the norms of disciplinary fields (“I am not a donkey and do not have a field”, as he famously
remarked).

Although an intense historical sensibility undergirds Weber’s work, as Peter Ghosh says in this stylish and
extraordinarily detailed new intellectual history, Weber himself was most definitely “not an intellectual historian”. It has
meant (ironically given Weber’s sense of the limited time horizons of academic relevance) that his work has had serious
longevity. For he was an engaged, modern thinker, concerned to illuminate the historical development of “tendencies”
that created new realities, and out of which certain contemporary problems could be put into sharp relief with
appropriate conceptualization. Interested in the power derived from the “economic” way of looking at things, he worked
by a mechanism of “causal regression”, producing “ideal-type” social and historical models or frameworks. These self-
conscious methodological “utopias” would polemically highlight or sideline certain aspects of reality at any one time in
order to permit the construction of specific genealogies. One example was his conceptual use of “asceticism” versus
“mysticism” to highlight the religious foundations that lay behind modern constructions of “rational” conduct,
particularly in the West.

Personality-focused or biographical approaches seem inappropriate to understand a man so concerned with the power of
the impersonal. So Ghosh reconstructs Weber’s work through a history of his texts. Not for him the “riotous gallimaufry”
of psychosexual explanation, outlined in a recent study by Joachim Radkau (and reviewed by Ghosh in the TLS, June 19,
2009). Nor the intellectually limited if personally interesting biography, such as that published by Weber’s widow
Marianne, shortly after his death. (She had always wanted him to write the sort of “fat” book that she produced,
reckoning it would secure him academic fame and personality. He thought her a “silly goose” in such matters.) Instead,
Ghosh’s answer to Weber’s uniqueness is beguilingly simple, and gloriously revisionist in overturning most established
scholarship. His claim, very baldly restated, is that around 1904, aged forty, Weber had a scholarly annus mirabilis. He
took over joint editorship (with the flashy personality Werner Sombart and the wealthy younger scholar Edgar Jaffé) of
the leading social scientific journal in Germany, the Archiv für Sozialwissenschaft und Sozialpolitik. Ghosh undercuts the
relatively recent scholarly consensus that Weber played a central role in drafting a new editorial policy, and that this
explains something crucial about his ideas, showing instead how central Weber was to the networks involved in the
sale, purchase and revised editorial and contributory make-up of the journal. From the backroom, so to speak, he
quickly manoeuvred himself into a position where he would become central. Weber’s celebrated essays, which were
later published together as The Protestant Ethic and the Spirit of Capitalism, first appeared in the Archiv. They form a
pivot around which Ghosh’s narrative turns. For Ghosh this is the supremely Weberian text, a synthetic, polemical essay
bringing together all his major interests to date in capsule form. Despite its avowedly limited focus, it also laid the
foundations for nearly all his future work. It was his “summit”. Unsurprisingly, then, Ghosh’s “twin histories” entail a
game of two halves. The first considers how Weber could have come to write such an account, while the second shows its
presence in nearly everything that came after. It is a study of how the historically minded Weber, whose ideas came as
often through a good cigar in the evening as through anything else, fares when examined by an absolutely historicist
intellectual historian who focuses on a text that was absolutely “not an historicist construction” of the past. Weber might
have disapproved of intellectual biography per se, but he would surely have been amused to hear the many echoes in
Ghosh’s prose of his own pugnacious style and intellectual tenacity.
In 1904, Weber also took up an invitation to speak on rural societies to an audience at the Congress of Arts and Sciences,
affiliated with the World’s Fair at St Louis. It was there that he met W. E. B. Du Bois (whom he tried to recruit for
the Archiv), and, though away for several months, Weber also published a seminal methodological essay (again in
the Archiv) roughly translated as “The Objectivity of Social Scientific and Socio-Political Knowledge”. This was his
original and most fully elaborated claim about the importance of constructing “ideal types” as a solution to the problems
of relativism that historicism might fall prey to. His training in the great traditions of Roman and commercial as well as
private law provided him with exquisite tools for such high-level conceptual abstraction. And conceptual abstractions
like asceticism, vocation (Beruf), even capitalism and rule (Herrschaft), were then pressed into service to structure his
developmental claim about a contemporary Kultur whose “magic” had been progressively stripped away by rational and
impersonal forms of conduct that originated in an earlier “ascetic Protestantism”. Such terms structure Ghosh’s
individual chapters, and help to buttress his account of what Weber meant when he said that Kultur is a “secular
substitute for Christianity”.
Thus outlined, Weber’s twin interests clearly coalesced in a concern with the fate of politics and religion, the two oldest
and noblest “human ideals”, whose configurations had been changed, and changed utterly, by modern capitalism. In
order to explain their new co-ordinates, Weber tried to fix the position of capitalism in his mind, and his struggles to do
so form another pivot for Ghosh’s text. Here, Weber remains a frontiersman in the quest for a peculiarly “bourgeois”
(read Western) form of capitalism, but it stands on the “uncertain frontier” between religion and politics. Each in turn
has its different types of Herrschaft, but capitalism, like politics and religion, also has no singular “essence”. Early on,
therefore, we find Weber talking about capitalism and “character” in The Protestant Ethic, and elsewhere engaging
critically with contemporary economic theories of capitalism as rational and acquisitive behaviour. But as a verifiable
“fact”, the existence of something called “capitalism” was intensely insecure – perhaps (Ghosh speculates) indicating a
“failure” of method on Weber’s part. He had discussed the effects of capitalism on agrarian forms of economic
organization much earlier, and whether through familial contact (his uncle Carl David Weber’s linen mill in
Oerlinghausen was close at hand), or through individual projects on the psychophysics of industrial labour, the
psychological dynamics of those effects remained of interest.
Similarly, the difference between formal and material “rationality” was implied, if not fully elaborated, in the process.
Weber was trying to move beyond Marx to think of capitalism as something more profound than just a capitalist mode of
production, and this had something to do with the relationship between capitalism and law. Weber’s view of the
formalistic legality of the Puritans and their substantive concern with natural law was that it led them towards a legally
derived conception of “rational” behaviour. That sort of conduct, he concluded, would ultimately find genealogical
affinity with the sort of economic and calculating action necessary for the development of “bourgeois” capitalism.

Weber traced the radical Calvinist origins of such a vision, to show how it was gradually overturned by utilitarianism,
before becoming the sort of fully fledged, modern conduct applicable to Benjamin Franklin’s injunction that “time is
money”. He saw his problem everywhere though, and strict historical accuracy when discussing ideas seemed
unimportant. For him, utilitarianism was the heir to seventeenth-century asceticism, but the latter already had a
“utilitarian character”. Similarly, Calvinist theodicy was an asynchronous, “sacralized version of Adam Smith’s ‘invisible
hand”’, making possible the separation of legal from bureaucratic and economic forms of rationality. Recognizing
practical overlays but conceptually fixing points of transition along the tracks, he used the idea of an “elective affinity”
between different elements to help structure his argument. For Ghosh, Weber was quite the idealist in this regard,
concerned first and foremost with getting ideas straight, even when their material histories were crooked.

The forms of authoritative rule that most interested Weber (traditional, rational-legal, charismatic and, on occasion,
“democratic”) could be explored either in world-historical perspective or in tandem with his focus on capitalism. The
latter would come to be seen as historically conditioned by the transformation of religious “sects”, and the importance of
the “sectarian idea” within them. Indeed, their nourishing of radical individualism and heterogeneous values made the
sects distant cousins of modern ideological divisions. Weber also aligned his discussion with a historical account of the
rise of the city in European history, to signify another unique attribute of modern Western Kultur. His account of the
sects was originally supposed to issue in a third essay in the Protestant Ethic series, and although early bits appeared, full
discussion had to wait because Weber allowed himself the luxury of letting his close friend and colleague Ernst Troeltsch,
the historian of religion, complete his voluminous masterwork Die Soziallehren der christlichen Kirchen (The Social
Teachings of the Christian Churches), which appeared in early 1912. Weber was grateful (though his publisher was not)
for the chance to postpone re-engagement with Protestant Ethic-style questions until an acknowledged expert had had a
chance to work out if he was onto something or not. Perhaps unsurprisingly, Troeltsch the expert became his
“authoritative vindicator”.
During this time, Weber’s many letters to his publisher suggest the usual sorts of authorial prevarications in response to
polite cajoling, but also contain hints of his struggle to find out again and again how much he could “bear” or “stand”
both in and of his own personality. Weber took on new and Herculean labours, working towards a new sociology of
religion (broadly complete by 1913) alongside an account of the economic ethics of world religions in comparative
perspective. He had also taken on the task of radically updating and editing a major and massive multi-author handbook
on political economy. Frustrations with that meant that much of his own contribution would end up in the multilayered
treatise that we now know as Economy and Society. And most of this was achieved despite the intervention of a world
war, nervous exhaustion, legal disputes (with colleagues among others) and fraught personal relationships.
In fact, his focus on the relationship between the strictly internal demands of the individual personality (Persönlichkeit)
and the objective requirements of particular orders of life (Lebensordnungen), which personality might choose and which
would fix its outward ethical form, was immortalized in two panoramic lectures on the vocations of science and politics.
Given at the height of major interest and political unrest in 1917 and again in 1919 in Munich, these public displays were
definitely not “drivel”, as he described his usual student lectures. Though suitably huge, they display breathtaking
command and control. For both scientist and politician, hot passion and the sense of a calling must be aligned with a
cooler demand for objectivity and responsibility. Both combinations take different forms, in part because they have
different means at their disposal, violence and state power versus intellect and scholarly integrity. And both vocations
have deep roots in ideas first presented in The Protestant Ethic, where an originally religious “cosmos”, which framed the
Puritan world of ethical conduct, has become a new and steely sort of housing for everyone. Weber’s history of the
present found its denouement in the famous conclusion of The Protestant Ethic, where he wrote that although the Puritan
“wanted” to follow a “calling”, today we are simply forced or compelled to. Years later, in the crucible of war and
revolution, what might such compulsion mean?
In a key interpretation, again contrary to the prevalent view of Weber as a committed German nationalist, Ghosh finds
that the war principally offered Weber a welcome break from debilitating academic work on Economy and Society, and
made little difference to his politics at all. Partly, that’s because in Ghosh’s eyes Weber was never properly a “nationalist”.
Rather, his was a “multicultural” world, governed by conflicts over values, but where resolution was to be found neither
in relativism nor resignation, but rather in a principled “objective” commitment to taking the measure of what we
conventionally render as value pluralism. Yet Weber’s account of the values behind German politics since Bismarck was
deeply unflattering; the Reich was simply bureaucratic politics without either charisma or responsible leadership. Noting
that economic dominance normally, though not automatically, translates into political power, Weber suggested that his
own “bourgeois” class in Germany lacked the political education for power for which its economic predominance should
have been preparing it. This made it difficult to deal with real problems, and Weber attacked Left and Right for trying to
valorize the past, for proposing organic visions of the German nation as a collective personality, and for trying to talk
about the end of realpolitik in the dream world of socialist utopia. War couldn’t and didn’t change the realistic
background of political choice in his mind, and socialism was simply code for the “bureaucratization of the economy”.
Unable to find regular war work in Brussels, where he had travelled in 1915 as an unofficial political consultant following
the German invasion of Belgium, Weber, unlike many of his colleagues, refused to descend into propaganda. He self-
consciously excused himself to his publisher, again, from fully continuing with his epic academic projects under the
circumstances. But while he took up a position as a hospital orderly when he could, he also collected several of his essays
from the Archiv on world religions for quick publication, when copy was much in demand. As the war went on, but
particularly in the last few years of his life, he engaged in several passionate affairs, and went between universities in
Munich and Vienna lecturing on the state, sociology and general economic history. Politics still mattered, and his
engagement with the conditions of a peace settlement was unblinking, as one would expect, but in his work, religion
occupies nearly three times as much space. On Ghosh’s account of Weber’s priorities, the balance sheet is clear.
As Ghosh notes, Weber had always been a superb political analyst. His “probability” theory about when peace would
come and the possibilities of German victory was as ruthlessly realistic as anything else he wrote. Yet if objectivity in the
midst of war set him apart, one of Ghosh’s major points is that politics for Weber, although a grand human ideal, had
been on the decline for centuries as the crucial arbiter of human conduct. Now, it meant nothing more or less than
earthly Herrschaft, so that there were only so many ways one could talk about its development, or think about its
valences. For those who hold fixed ideas about Weber the political animal, Ghosh’s claims will be hard reading. But part
of the problem with seeing him as a straightforward nationalist was that even incandescent rage about national shame
was allied to a profound understanding of geopolitics and political responsibility. This made it clear to him that
“reaction” and public retribution, or power politics without content, were futile modes of engagement. Subtler and more
“responsible” policy was required if long-term success was to be achieved, and that would have to take place in
diplomatic back channels by “responsible” statesmen. Germany was actually quite successful at this when seen in
comparative perspective, a point amplified recently by Adam Tooze in The Deluge: The Great War and the remaking of
global order (2014).
Weber’s well-known refrains about the dangers of a politics of national vanity (Eitelkeit), made most famously in his
lecture on the vocation of politics, were in fact extant in his writings from the start. For example, in a perspicacious essay
realistically assaying the prospects of Germany against the European world powers during the war, he once more stated
his belief that “objective politics” was not a “politics of vanity”, but one whose actions necessarily took place in the
shadows. In the darkness personality could really count, rather than in public, where it was too often just for show.
Whether in The Protestant Ethic or in his later essays and lectures on politics, Weber worried about the fate of personality
and responsibility in what has infamously (if incorrectly) come to be known as the “iron cage” of a rationalized and
bureaucratized modern world. In The Protestant Ethic, he wrote that perhaps the mechanization of life would continue
unchallenged until the last ounces of fossil fuel had been used up, and the danger was that we might simply fail to notice.
In later work, such environmental imagery had turned into a worry about the future as a polar night of icy darkness. In
order to guard against these dangers, inner personality had to be focused, hardened and retooled, made able to resist
superficiality in its own engagement with the impersonal force of fate. In our own age, where borderlands between
environmental crisis, near-pathological boredom and disaffection with mainstream politics, and tensions driven by
religion have if anything become more rigidly crippling than ever, Max Weber looks a more profound guide than we
might care to think.
