
思想与方法

媒介哲学、认知科学与人文精神的未来
( 国际高端学术论坛 )

Ideas and Methods


Media Philosophy, Cognitive Science,
And the Future of the Humanities
- International Summit Forum -

会议论文
〔Conference Manual〕

北京师范大学文学院
School of Chinese Language and Literature
Beijing Normal University

2018 年 10 月 27-29 日
October 27-29, 2018
Arrangements(活动一览)

Breakfast(早餐), daily: Buffet at the Jian Wei Restaurant, first floor of Jingshi Hotel(兼味轩,京师大厦一楼)

October 26(10月26日)
Morning(上午): Free
Lunch(午餐): Free
Afternoon(下午): Free
Dinner(晚餐): Free

October 27(10月27日)
Morning(上午): 8:30-12:00 Forum, Conference Hall No.6, Jingshi Hotel(学术论坛·京师大厦第六会议厅)
Lunch(午餐): Lan Hui Restaurant(兰蕙餐厅)
Afternoon(下午): 14:00-18:00 Forum, Conference Hall No.6, Jingshi Hotel(学术论坛·京师大厦第六会议厅)
Dinner(晚餐): Dining Room No.899, Jingshi Hotel, second floor(京师大厦899包厢)

October 28(10月28日)
Morning(上午): 8:30-12:30 Forum, Conference Hall No.6, Jingshi Hotel(学术论坛·京师大厦第六会议厅)
Lunch(午餐): Xi Bei Restaurant(西北餐厅)
Afternoon(下午): 14:30-17:30 Workshop 1, Conference Hall No.6, Jingshi Hotel(工作坊·京师大厦第六会议厅)
Dinner(晚餐): Tong Chun Yuan Restaurant(同春园)

October 29(10月29日)
Morning(上午): 9:00-11:30 Workshop 2, Meeting Room 5058, Area C, Main Building(工作坊·文学院主楼C区5058会议室)
Lunch(午餐): Free
Afternoon(下午): Free
Agenda(会议流程)

FORUM
学术论坛

Venue: Conference Hall No.6, Jingshi Hotel, BNU


论坛地点:北京师范大学京师大厦第六会议厅
Format: Presentations and Discussions
论坛形式:主题发言与评议对话

October 27, Saturday Morning


10月27日(周六)上午

8:30-8:45 Opening Address (开幕致辞)


Speakers: Fang Weigui, Briankle G. Chang
致辞人: 方维规 张正平

Session I 第一场 8:45-9:55


(In all the following sessions, each speaker has 25 minutes for presentation, each
discussant 20 minutes. 每人发言25分钟,评议20分钟)

Chair: Briankle G. Chang; Discussant: Mary Ann Doane


主持人:张正平 评议人: 多 恩

Siegfried Zielinski: Generators of Surprise: Diverse Media Thinking


齐林斯基:惊奇制造者:多样的媒介思想
Mark Hansen: How Can the Mind Participate in (Artificial) Communication?: An
Alternate Path Toward Thinking (with) Machines
汉 森:心灵怎样参与(人工)交流?——(以)机器思考的替代路径

9:55-10:15 Discussion and Response(讨论与回应)

10:15-10:30 Tea Break(茶歇)


Session II 第二场 10:30-11:40
Chair: Christina Vagt; Discussant: Shunya Yoshimi
主持人:瓦格特 评议人:吉见俊哉

David J. Gunkel: Other Things: AI, Robots and Society


冈克尔:其他的物:人工智能、机器人和社会
Luo Yuejia: Neural Mechanism for Emotion and Cognitive Function
罗跃嘉:情绪与认知功能的认知与神经基础

11:40-12:00 Discussion and Response (讨论与回应)

October 27, Saturday Afternoon


10 月 27 日(周六)下午

Session III 第三场 14:00-15:10


Chair: Siegfried Zielinski; Discussant: Sybille Krämer
主持人:齐林斯基 评点人:克莱默

Mary Ann Doane: The Concept of Immersion: Mediated Space and the Location of the
Subject
多 恩:沉浸的概念:中介空间和主体位置
Myung-koo Kang: How a Gaze Can Become Violence: Representations of the North
Korean Sports Team at the Pyeongchang Olympics
姜明求:凝视何以成为暴力——平昌冬奥会上朝鲜运动队的再现

15:10-15:30 Discussion and Response (讨论与回应)

15:30-15:45 Tea Break(茶歇)

Session Ⅳ 第四场 15:45-17:30


Chair: Mary Ann Doane; Discussant: Tony D. Sampson
主持人:多 恩 评点人:桑普森

Christina Vagt: Outsourcing the Intellect


瓦格特:智力外包
Xu Yingjin: Why Does General Artificial Intelligence Need the Husserlian Notion of
“Intentionality”?
徐英瑾:通用人工智能为何需要胡塞尔的“意向性”理论?
Jiang Yi: The Fuzzy Boundary of Cognitive Science and Humanities
江 怡:认知科学与人文科学的模糊边界

17:30-18:00 Discussion and Response (讨论与回应)

October 28, Sunday Morning


10 月 28 日(周日)上午

Session Ⅴ 第五场 8:30-9:40


Chair: Myung-koo Kang; Discussant: Mark Hansen
主持人:姜明求 评点人:汉森

Sybille Krämer: Media as Cultural Techniques: From Inscribed Surfaces to Digital


Interfaces
克莱默:作为文化技术的媒介——从书写平面到数字界面
Shunya Yoshimi: Cultural Sustainability and the Redefinition of Humanities: The Role
of University in the 21st century Globalized Society
吉见俊哉:文化延续与人文科学再定义——21 世纪全球化社会中大学的作用

9:40-10:00 Discussion and Response (讨论与回应)

10:00-10:15 Tea Break(茶歇)

Session Ⅵ 第六场 10:15-12:00


Chair: Shunya Yoshimi; Discussant: David J. Gunkel
主持人:吉见俊哉 评点人:冈克尔

Liu Chao: Effect of Mortality Salience on Guilt and Shame and Its Neurocognitive
Mechanism
刘 超:死亡凸显对内疚和羞耻的影响及其神经机制
Tony D. Sampson: Transitions in Human–Computer Interaction: From Data
Embodiment to Experience Capitalism
桑普森:人机交互领域的转变——从数据具身化到经验资本主义
Briankle G. Chang: Spectral Media
张正平:幽灵般的媒体

12:00-12:30 Discussion and Response (讨论与回应)


WORKSHOP 工作坊

October 28, Sunday Afternoon, 14:30-17:00


10 月 28 日(周日)下午 14:30-17:00
Workshop 1. Stars and Clouds: Literature, Science, and the Media Philosophy of
Michel Serres
第一场:星与云:米歇尔·塞尔的文学、科学和媒介哲学
Venue: Conference Hall No.6, Jingshi Hotel, BNU
地点:北京师范大学京师大厦第六会议厅
Chair: Christina Vagt
主持人:瓦格特
Participants: Relevant scholars of the International Summit Forum
与谈人:高端论坛相关学者

October 29, Monday Morning, 9:00-11:30


10月29日(周一)上午 9:00-11:30
Workshop 2. Affect and Social Media
第二场:情动与社会媒介
Venue: Meeting Room 5058, Area C, Main Building, School of Chinese Language
and Literature, BNU
地点:北京师范大学文学院,主楼C区5058会议室
Chair: Tony D. Sampson
主持人:桑普森
Participants: Relevant scholars of the International Summit Forum
与谈人:高端论坛相关学者
Contents

Fang Weigui  The Configuration of Humanities and Science: Exploring a New Paradigm
Siegfried Zielinski  Generators of Surprise: Diverse Media Thinking
Mark Hansen  How Can the Mind Participate in (Artificial) Communication?: An Alternate Path Toward Thinking (with) Machines
David J. Gunkel  Other Things: AI, Robots and Society
Luo Yuejia  Neural Mechanism for Emotion and Cognitive Function
Mary Ann Doane  The Concept of Immersion: Mediated Space and the Location of the Subject
Myung-koo Kang  How a Gaze Can Become Violence: Representations of the North Korean Sports Team at the Pyeongchang Olympics
Christina Vagt  Outsourcing the Intellect
Xu Yingjin  Why Does General Artificial Intelligence Need the Husserlian Notion of “Intentionality”?
Jiang Yi  The Fuzzy Boundary of Cognitive Science and Humanities
Sybille Krämer  Media as Cultural Techniques: From Inscribed Surfaces to Digital Interfaces
Shunya Yoshimi  Cultural Sustainability and the Redefinition of Humanities: The Role of University in the 21st Century Globalized Society
Liu Chao  Effect of Mortality Salience on Guilt and Shame and Its Neurocognitive Mechanism
Tony D. Sampson  Transitions in Human–Computer Interaction: From Data Embodiment to Experience Capitalism
Briankle G. Chang  Spectral Media
Appendix  Participants


目录

方维规 探究人文与科学之关系构型的新范式
齐林斯基 惊奇制造者:多样的媒介思想
汉 森 心灵怎样参与(人工)交流?——(以)机器思考的替代路径
冈克尔 其他的事:人工智能、机器人和社会
罗跃嘉 情绪与认知功能的认知与神经基础
多 恩 沉浸的概念:中介空间和主体位置
姜明求 凝视何以成为暴力——平昌冬奥会上朝鲜运动队的再现
瓦格特 智力外包
徐英瑾 通用人工智能为何需要胡塞尔的“意向性”理论?
江 怡 认知科学与人文科学的模糊边界
克莱默 作为文化技术的媒介——从书写平面到数字界面
吉见俊哉 文化延续与人文科学再定义——21世纪全球化社会中大学的作用
刘 超 死亡凸显对内疚和羞耻的影响及其神经机制
桑普森 人机交互领域的转变——从数据具身化到经验资本主义
张正平 幽灵般的媒体
附录 参会学者
Fang Weigui

The Configuration of Humanities and Science:

Exploring a New Paradigm


- Opening Address of the Fifth International Summit Forum “Ideas and Methods”-

Ladies and gentlemen,

First of all, I would like to welcome you to Beijing Normal University in Beijing's
best season, as we gather here for the Fifth International Summit Forum "Ideas and
Methods". This series of forums, known for its small scale and high quality, has been
well received by academia at home and abroad, and has broadly promoted in-depth
exchange and exploration of many key issues in contemporary intellectual life. This
year, the theme of our forum is "Media Philosophy, Cognitive Science, and the Future
of the Humanities" – a theme chosen on the basis of observing and reflecting on the
profound changes in current forms of human life and intellectual discourse. I will
briefly address the topic here, based on my own understanding.

From the perspective of academic history, the separation and entanglement of the
Humanities and Science have a long history. In the 1950s, C. P. Snow and F. R. Leavis
fought over "the Two Cultures", kicking off the open confrontation between the
Humanities and Science. Since then, the estrangement and hostility between the two
discourses has continued. This February, The Chronicle of Higher Education published
an article by the Harvard scholar Steven Pinker. Entitled "The Intellectual War on
Science", the article forcefully attacks the humanities' demonization and
exploitation of science, which the author regards as a "war" launched by humanistic
discourse against science. In Pinker's view, humans must rely on the progress of
modern science in order to solve the problems they currently face. Pinker's defense
of science is, in a sense, a response to criticism raised years ago by Leon
Wieseltier, The New Republic's literary editor. Wieseltier opposed Pinker's
scientism, arguing that the humanities, despite being vilified by science, remain
indispensable.

However, this controversy is not a simple continuation of the debate of 60 years
ago; it contains new contexts and new ideas. Today, science and technology are
unrivaled in both concept and practice. On the one hand, frontier explorations in
the field of science, compared with those of the humanities, tend to attract more
attention from the public and the media. This raises the concern that scientific
exploration is endowed with an independent, unquestioned importance, while the
necessary reflection on its philosophical premises and ethical requirements is
lacking. On the other hand, accelerating technological progress has profoundly
shaped the way contemporaries live. The revolutionary progress of artificial
intelligence (AI) and information technology is constantly updating people's modes
of communication and self-awareness. In particular, the rapid expansion of mobile
media clients such as Facebook, WeChat, and WhatsApp has reshaped the means by which
people communicate daily around the world. How to understand the enormous impact of
this technological force on human language, social relations, cultural habits, and
even genetic evolution is obviously not a task that any single discipline can solve.

Regrettably, the transdisciplinary research currently popular in academia is often
confined to the interior of either the humanities or the sciences, and studies that
truly span the boundary between the two remain quite limited. Many humanists, knowing
little about cutting-edge achievements in science and caring less about the
innovation of scientific methods, are zealous in talking about one book and one book
only, namely Thomas Kuhn's The Structure of Scientific Revolutions. Their criticism
of technological reality often amounts to re-adorning the moral imagination of
humanism with new terms borrowed from the philosophy of technology, while they poorly
comprehend the substantive transformation of science and technology. This situation
directly affects the implementation of general education. The general curriculum in
many colleges and universities attends only to the historical integrity of humanistic
knowledge and seldom reflects the importance of scientific awareness for shaping the
modern citizen's personality. Hence, among the general reading lists within reach,
classics of the history of science are rare. General education in this sense
obviously does not help to eliminate the isolation and confrontation between the
Humanities and Science, and the resulting reproduction of humanistic knowledge is
certainly incapable of meeting the acute challenges that contemporary science and
technology pose to the meaning of human nature.

In response to this challenge, of course, one cannot simply expect the Humanities
or Science to return to the integral human condition of ancient society. Instead, it
is important to re-understand the meaning and functional boundaries of the two in a
transdisciplinary sense, through a mutual perspective. As Derrida puts it, "The
future of the humanities depends on how we decide borders." How we think about the
possibility and significance of boundary changes in new contexts thus determines the
way we imagine the future of the humanities. Derrida's consciousness of this issue
is taken up by Catherine Malabou, who proposes the concept of "plasticity"
(plasticité) to revisit the internal and external relations of the humanities. In
her view, the humanities can only reconstitute themselves by overstepping boundaries,
just as humans create themselves by crossing Kant's "transcendental" boundary. Thus,
she stresses that the future of the humanities lies not simply in transforming
themselves into a science but in transforming the most closely related sciences (such
as the neuroscience of the brain) into a part of the humanities. In her research she
shows how, through neuroscience, the "transcendental" becomes an experiential state,
thereby laying an empirical foundation for the subject-form defined by ontology and
metaphysics. Malabou thus breaks down the boundary between the transcendental and
the empirical, expanding the boundaries of the humanities and activating the
intellectual energy with which the humanities respond to reality.

Unlike Malabou's approach from the perspective of the humanities, Pinker revisits
human nature from another point of departure: scientific research. He hopes to absorb
neuroscience, evolutionary biology, genetics, artificial intelligence and other
scientific fields through psychology, drawing on the achievements of the philosophy
of mind to reinterpret the constitutive principles of human nature. As the title of
his best-selling book The Better Angels of Our Nature shows, he is, from a scientific
point of view, optimistic about the improvement of human nature. Comparing these two
ways of thinking, both based on transdisciplinary research, it is easy to see that
the theory of human nature reached by the humanities differs greatly from that
offered by science. Malabou does not readily pass judgment on the possible
improvement of human nature, while Pinker's optimistic theory of human nature largely
weakens the interpretative power of the humanistic tradition. The critical issue at
present is therefore how, amid these divergent cognitive situations, to examine the
remolding of the tradition of human nature through the expanding interpretative
boundaries of the Humanities and Science, and what kind of institutional arrangement
and ethical life this remolding, once implemented in concrete social situations,
really entails.

In modern society, the Humanities and Science, as two worldviews, are put into
practice in different ways. While the presence of science in everyday life, in the
form of technology, can be measured by corresponding social value or benefit, the
humanities' ethical shaping of secular feeling cannot be converted into any visible
social value or benefit. More importantly, the humanities cannot simply be treated
as the opposite of science, since the value of the humanities lies in transcending
a mode of existence measured by social benefit, the mode inherent to science. If,
therefore, science does not exist as a cognitive factor intrinsic to the humanities,
the spiritual value of the humanities will be greatly diminished, and the humanities
will struggle to cope with the actual conditions of social life. In this sense, any
sentimental appeal to the humanistic spirit that is not implemented as a new
reproduction of humanistic knowledge will ultimately prove hollow and unsustainable.

探究人文与科学之关系构型的新范式

第五届“思想与方法”国际高端学术论坛开幕致辞

方维规

先生们、女士们,

首先欢迎各位在北京最好的季节,来到北京师范大学参加第五届“思想与方
法”国际高端学术论坛。这个系列学术会议,以小规模、高品质的办会风格,已
在海内外学界获得良好反响,广泛推动了当代思想领域诸多关键议题的深入交流
与探索。今年的论坛主题为“媒介哲学、认知科学与人文精神的未来”。之所以
选择这一议题,乃是基于对当前人类生活形式与思想话语之深刻变化的观察和思
考。下面我结合个人理解,谈谈对这个主题设置的主要认识。

从学术史来看,人文与科学的分离与纠葛由来已久。20 世纪五十年代 C.P.斯


诺与利维斯关于“两种文化”之争,揭开人文与科学正面冲突的序幕。自此,两
种话语的隔膜与敌视持续不断。今年 2 月,美国《高等教育纪事报》 (The Chronicle
of Higher Education)刊发哈佛大学著名学者斯蒂芬·平克(Steven Pinker)的文
章《与科学为战》(The Intellectual War on Science)。这篇文章旗帜鲜明地抨击人
文学科对科学的妖魔化批评与利用,认为这是当今人文话语对科学发起的“战争”。
在他看来,人类要想解决当前面临的种种问题,必须依靠现代科学的进展。平克
极力为科学辩护的文章,在某种意义上是对数年前利昂·维塞蒂尔(《新共和》编
辑)批评的回应。维塞蒂尔曾不满他的科学主义论调,认为人文学科尽管受到科
学的诋毁,但永远不可或缺。

然而,这场争辩不是 60 年前论争的简单延续,而是包含着新的时代语境与
思想内容。今天,无论在观念还是实践层面,科学技术都占据无可匹敌的主导性
地位。一方面,与人文学术相比,科学领域的前沿探索,往往更容易引发公众和
媒体的注意。在这些关注中,科学探索被赋予独立的、不容置疑的重要性,而对
其哲学前提和伦理要求,则缺少必要的省察;另一方面,日益加速的技术进步高
度形塑了当代人的生活方式,尤其是人工智能和信息技术的革命性进展,不断更
新人们的交往方式和自我认识。尤其是诸如 Facebook、微信、WhatsApp 等移动
媒体客户端的迅速扩张,在世界范围内重塑了人们日常交流的信息手段。如何理
解这种技术力量对人类的语言表达、社会关系、文化习惯乃至基因进化的巨大影
响,显然并非单一学科所能解决的课题。

令人遗憾的是,当前学术界流行的跨学科研究,往往局限于人文或科学的内
部,真正跨越两者界限的研究仍然相当有限。很多人文学者对于科学领域的前沿
成果所知甚少,更不关心科学方法的革新,时常津津乐道的只有一本托马斯·库

恩的《科学革命的结构》。而他们对于技术现实的批判,往往只是借用技术哲学
的新词重新装饰人文主义的道德想象力,而对于科学技术的内涵变革知之甚少。
这种认识状况,直接影响了大学通识教育的实施方式。许多高校的通识课程设置,
仅止于关注人文知识的历史整体性,很少意识到科学意识对于现代公民人格塑造
的重要性。因此,在我们所能见到的各类通识阅读书目中,鲜有科学史方面的经
典著作。这种意义上的通识教育,显然无助于弥合人文与科学两者的隔离与对峙。
而由此对人文知识的再生产,更无法回应当代科学技术在人性论意义上提出的尖
锐挑战。

当然,要回应这种挑战,不能简单地希求人文与科学回到古代社会的整全人
性状态。重要的是,如何在跨学科的意义上,通过互看的眼光,重新理解两者的
意义与功能边界。如德里达所言:“人文学科的未来依赖于我们如何决定边界”。
因此,如何在新的语境下思考边界变动的可能及其意义,决定了我们想象人文学
科之未来的思想方式。马拉布对此的思考,承续了德里达的问题意识。她提出“可
塑性”的概念重新思考人文学科之内部与外部的关系。在她看来,人文学科只有
逾越界限才能再造自身,正如人类是通过跨越康德的“超验”界限,才得以创造
自我。因此,她强调,人文学科的未来不是简单地转变为科学,而是将最密切相
关的科学(如研究大脑的神经科学)转化到自身内部。在她的研究中,通过神经
科学,我们可以重新思考“超验”如何变成可经验的状况,从而为本体论形而上
学界定的主体形态赋予经验基础。马拉布由此破除超验与经验的界限,从而扩充
人文学科的边界,激活人文回应现实的思想能量。

与马拉布从人文学科出发的思考方式不同,平克则是从科学研究出发重新思
考人性。他希望通过心理学来吸纳神经科学、演化生物学、遗传学、人工智能等
科学领域,同时借鉴心灵哲学的成果,从而重新解释人性的构成原理。如畅销著
作《人性中的善良天使:暴力为什么会减少》的题目所示,他从科学角度对人性
的改善充满乐观。如果对比两种思考路向,不难明白,从人文与科学两个方向开
展的跨学科研究,最终指向的人性论相去甚远。马拉布并不轻易对人性改善之可
能性作出判断,而平克乐观的人性论则在很大程度上弱化了人文主义传统的解释
效力。因此,重要的问题是,如何在这些充满分歧的认识处境中,审视人文与科
学通过扩展解释边界的方式对人性论传统的重塑,一旦落实在具体的社会处境中,
究竟意味着怎样的制度构成和伦理生活?

在现代社会,人文与科学作为两种世界观,落实在实际生活层面的方式并不
相同。如果说科学以技术形式在日常生活过程的呈现,可以通过相应社会价值或
效益来衡量,那么,人文以伦理方式对世俗人情的形塑,则不能被转化为某种可
见的社会价值或效益。更重要的是,不能简单地将人文视为科学的对立面,人文
的价值乃是对包括科学内在的以社会效益为指标的生存方式的超越。因此,一旦
科学不能作为内在于人文的认识因素存在,那么人文的精神价值势必大为贬损,
难以应对社会生活的实际处境。在这种意义上,任何标举情怀的人文精神呼吁,
如果不能落实为人文学科新的知识再生产,最终都不免凌虚蹈空,难以为继。

惊奇制造者:多样的媒介思想

齐林斯基(Siegfried Zielinski)

伴随工具与技术对人类的全面征服,我们已经很难绕开媒介理解身处的世界。
无论哪个学科,在面对各自课题时都不得不将媒介的物质性纳入考量。这促使我
们思考:能否为形形色色的媒介研究划定一片专门的学术领域?如何避免媒介研
究在学科化的同时陷入封闭、僵化、自我循环,进而丧失批判性?媒介不应被简
单理解为交际手段,媒介研究也不应以自身为终极目的,而应主动吸取其他领域
的思想方法以校正和反思自身,否则终会陷入空洞的自我循环,切断与他者相遇
的可能。作者的媒介研究奠基于原子论世界观,并深受福柯系谱学(genealogy)
方法的启发。原子论传统将世界的形成归因为原子微小偏斜所导致的偶然相遇。
在这一图景中,万物处于永不停息的流变中,其中,偶然性优先于一切人为建构
的意义,构成世界存在的根本原因。这意味着,历史并非铁板一块,而如一座迷
宫,歧中有歧、小径分岔,充斥着断裂、偏移、意外和转向。人们从当下出发回
溯地建立起稳定而连续的历史叙述,往往掩盖了历史众声喧哗的本来面貌。福柯
的系谱学启迪人们以多重视角观察历史、尊重异于我们的他者。媒介研究并非某
种强硬划分和武断把握,而是与他者相遇、嬉戏,在过去的断壁残垣中发掘未完
成的可能,通过解放那些已逝的当下(by-gone presents)开启未来,不断为人们
带来惊喜。作者回顾了自己的学术生涯,如何从媒介史研究一步步转向媒介考古
学和系谱学、转向对深层时间(deep time)的挖掘,并达致近来对变体学
(variantology)的思考。相比异质之物(the heterogeneous)的概念,变体(the
variant)更加轻盈、富有动态。变体学将那些迥然不同甚至彼此排斥的现象暂时
聚合,却并不生成一个标准化的封闭系统,而是可以根据需要再度散逸。它关注
变化、偏离和差异,却不意味着排斥和歧视。它跨越东西方界限,将一系列可能
的系谱学聚合为一个想象中的整体。最后,作者尝试将二战以来媒介思想的研究
者依照学术理路的不同划分为七代。

Siegfried Zielinski

Generators of Surprise:Diverse Media Thinking

1.

Past centuries have provided us with plenty of those who prophesy and plenty of
those who warn against the conquest of the last refuges of the anthropos by
instruments and technical systems. Catholic mathematician Johann Zahn (1641-1707)
believed the artificial eye (oculus artificialis) wielded such enormous power that the
optical apparatus—a robust telescope with a projection chamber attached—could even
extract impure spots from the supposedly pure and divine sun, that it could, in other
words, outwit astrophysical reality. Hegelian Ernst Kapp (1808-1896) was urging as
early as 1877 that culture itself must be reconceived from a technological perspective
as organ projection and that the structure of language was so intimately bound up with
the nature of the state that the development of electronic communications networks and
the kinematic concept of disciplinary full-closure represented the becoming-apparatus
of the actual late 19th century state. Friedrich Nietzsche (1844-1900) insisted toward
the end of his life that, the more his own psycho-physical powers of hand-writing
deteriorated, the more his typewriter would become co-author of his texts. His pencil
was smarter than he was, Albert Einstein (1879-1955) is purported to have joked.
Bertolt Brecht (1898-1956) knew already in the 1920s that art without technology was
sheer absurdity. Walter Benjamin (1892-1940) assumed—if somewhat more
serenely—that the typewriter would alienate the pen-holding hand of the litterateur only
“once the precision of typographical forms were immediately assimilated into the
conception of his books […] and the innervations of the commanding fingers had
replaced the familiar hand.” 1 Catholic iconoclast Marshall McLuhan (1911-1980)
wanted us to seek the agent of our sensibility and our understanding in the medium and
nowhere else, though this first pop star in the global market of media thinking largely
left open what he actually meant by this mysterious portent, the medium. Friedrich
Knilli (*1930), who was raised among the cutting and sewing machines of his uncle’s
garment factory in the Austrian city of Graz and later studied mechanical engineering,

1 Walter Benjamin, “Lehrmittel. Prinzipien der Wälzer oder die Kunst, dicke Bücher zu machen”
(Teaching Aids: The Principles of Tomes, or the Art of Making Thick Books) in Gesammelte
Werke, vol. 4, 1 (Frankfurt/Main: Suhrkamp), p. 105. My translation.
came to understand the powerful materiality of the medial through the vibrating
membranes of loudspeakers in Austria and Germany’s early radio studios and then
developed from this his own psycho-physical and aesthetic concept of the total sound
spectacle [totales Schallspiel]. That was about the same time that Jacques Lacan (1901-
1981) began insisting emphatically that even the unconscious was structured like
language. It was also at about this time that structure itself began to win the upper hand
over the Subject in disciplines ranging from ethnology to history and literature and even
early theories of cinema. No longer were we Subjects, but projects that in the ideal
circumstance projected worlds of our own—as Vilém Flusser (1920-1991) consistently
emphasized in his own unique way. In an equally eschatological gesture, Friedrich
Kittler (1943-2011) claimed, on the basis of his “technical a priori,” that all that was
expressed and all that our eyes and ears received as symbolic material was first and
foremost technology and that it always would remain technology, even at the vanishing
point of its development.

These diverse concerns, from prophetic and cautionary voices alike, have each
gained acceptance in various ways. What we refer to as our world is no longer thinkable
without the medial. Mathematicians and physicists, medievalists, philologists of all
kinds, theologians, philosophers, biologists and art critics all know that they must deal
with media—or at least with materials that are contingent on media—when they trawl
through the containers, archives and contemporaneous utterances that have been
produced in their respective fields, in their endeavor to understand and to impart. All
of them equally must learn to read, interpret and calculate medial surfaces and
materialities, as well as the metaphysical messages intimately linked with these—
messages that articulate and transport symbolic bodies and their networks.

The urgent question is: do these practices of expanded hermeneutics which


technical objects and medial circumstances require of us demand in turn a separate
academic discipline of their own? Must the hetergeneous, prismatic, aggregate
phenomena of techno-aesthetic media be accomodated under a distinct and specialized
system with delimited epistemic objectives and specific methods? Can we actually
neatly organize the constant transgression of limits that the thinking of medial
circumstances requires? Or must such an endeavor necessarily break down into paradox,
as did for instance in Pier Paolo Pasolini’s (1922-1975) world of poetry and politics?
Or again—with reference to a field adjacent to media thinking—has it ultimately
proved to be good for the arts and the infinite diversity of our ways of seeing that we in
the German-speaking humanities have established a unique compartment for them, one
that now threatens to turn into the massive and highly-ordered curio cabinet of a
hegemonic regime of visual studies, in which the whole of our knowledge of what can
be seen in objective form is filed, sorted and archived?

Perhaps it is too soon to answer such questions with any certainty. It is however
not too soon to pose them resolutely. For the tendency toward the establishment of
media theory (in Germany, this has even been immoderately propagated as media
science) as its own discipline with its own laws, hierarchies, canons, power structures,
conceptualities and clearly defined origins is quite strong. The disastrous consequence
of such a circumscription, for instruction and increasingly also for research, is that the
loops of self-reflection embarked upon by both apprenticed and established media
experts assume ever more audacious forms and contents. Students now earnestly
believe that the only legitimate content of media can be nothing but other media, and
they write and act accordingly. Ever since critical thought was cast out of the
humanities, the medial has been confirmed and celebrated – and decelerated – as the
communicative potential: for control and correction but also for culture. The difference
engine has become an engine of management and design, even an engine of careers. To
posit that nothing anymore can exist and thrive independently of mediation-machines
tends to inflate those very machines into all-powerful, self-sufficient centrifuges
positioned right at the center of what journalists still blithely refer to as society, where
they whirl away, organizing their own academic circles around themselves.

Interdiscursivity implemented in experimental practice requires prismatic ways of


seeing, judicious but always moveable viewpoints, artful variants as well as the
development of elegant, multiperspectival narratives. Freestyle thinking—as I
attempted to formulate that of Vilém Flusser2—without banisters to offer provisional
support can become, in the long run, the movement of a prescriptive regulating-machine.

Elaborated media thinking needs in its immediate vicinity the depths and gravity
of other modes of thought that are not oriented toward medial phenomena, with which
it may periodically connect, by which it may be stimulated, urged on, occasionally
reined in and reminded of its place. The study of medial sensations and structures is not
an end in itself, or else it would devolve into the very paradoxy in the void which
Baudrillard never tired of criticizing. Ultimately, technical means of communication
only serve to make encounter impossible. It is the imaginary that saves us in the ongoing
acid test between the real and the symbolic; yet it punishes us at the same time, given
its semblant character. It was Jacques Lacan, borrowing an exceptional media concept
from Lucretius,3 who so admirably formulated this in a number of variants.

Let’s dwell for a moment, then, with this ancient thinker who has been so
tremendously important in my own intellectual passages through medial phenomena.
The clinamen is “the smallest deviation possible” that may take place “we know not

2 See FLUSSERIANA—An Intellectual Toolbox, eds. Siegfried Zielinski, Peter Weibel and Daniel Irrgang (Minnesota 2015), p. 17. The book is a tri-lingual publication (English, German, Portuguese).
3 Nam si abest quod ames, praesto simulacra tamen sunt / For if what you love is distant, its images are present. (Titus Lucretius Carus, De rerum natura / On the Nature of Things)
when, we know not where,” as Louis Althusser puts it, citing De rerum natura, that
incomparable natural historical poem written by Lucretius in the last century prior to
the Common Era. The clinamen causes an atom to “swerve” from its vertical plunge
into the void, where “there occurs an encounter between one atom and another, and this
event becomes advent on condition of the parallelism of the atoms, for it is this
parallelism which, violated on just one occasion, induces the gigantic pile-up and
collision-interlocking of an infinite number of atoms, from which a world is born”—a
world, in other words, as an aggregate of atoms that is created through a chain reaction
set off by the first swerve and the first encounter.4 Althusser, along with ancient Greek
natural philosopher Epicurus, was convinced that the origin of any world, thus any
reality and any meaning, is due to a deviation; that deviation and not reason is the cause
of the origin of the world.5

Considered from the perspective of deep time, my own media research has at its
core been powerfully shaped by those thinkers, poets and naturalists known to the
histories of science and mind as Atomists. Before Socrates and of course beyond the
great dividers, Plato and Aristotle, the Atomists conceive of the world fundamentally
as turmoil, a ceaselessly streaming exchange of the smallest particles, energies and
signals, a world that does not yet require such severings as that between subject and
object, active and passive, matter and mind, between the receiver on one side and the
sender on the other. Anaxagoras, Anaximander, Democritus, Empedocles, Epicurus,
Lucretius and others thought the world as perpetually colliding objects, as the billiard-
reality of interobjectivity, two and a half thousand years before this concept again
acquired effective power under the banner of things becoming independent; they
thought chaos, its complex regularities and its incalculabilities; they thought the world
as porous objects that articulate themselves and thus reveal themselves to our
perception, just as much as we in turn are realized for them, become ecstatic for them
and step out of ourselves. It was Martin Heidegger who rediscovered this world in the
20th century and ontologically fundamentalized it with unnecessary severity. And for
French philosophers of becoming and of energetic dialogue, too—from Gilles Deleuze
to Félix Guattari and, in a different form, from Alain Badiou via Jean-Luc Nancy to
Jacques Rancière, with whom I had the pleasure of teaching on the same faculty for a
number of years6—this world full of motion and events is the only thinkable one, or
better: the only attractive one with respect to a basic idea: that the world which is known
to us has only a single raison d’être, which consists in the fact that it is changeable and
that it is constantly changing.

4 Louis Althusser, “The Underground Current of the Materialism of the Encounter” in Philosophy of the Encounter: Later Writings, 1978-87, eds. François Matheron and Oliver Corpet, trans. G.M. Goshgarian (London, New York: Verso, 2006).
5 Ibid.
6 I am referring here to the European Graduate School (EGS) in Saas-Fee, Switzerland.
2.

Michel Foucault was a master of the kind of writing that makes us operatively
conscious of where what we call our civilization comes from, why and how we have
evolved into powerful beings; and he managed to pose these questions in such a way
that we are able to critically examine what we call history even as we write it. Deriving
it from an anti-historical concept developed by Nietzsche, Foucault designates this
process as genealogy. It enables us to understand developments as labyrinthine, as
movements associated with digressions and impasses, and it assumes a many-eyed
seeing and a many-tongued writing.

After Nietzsche, Foucault expended enormous energy in an attempt to uncover


how a scattered world is assembled to produce a specific world. In so doing, he
increasingly substituted genealogical tactics for the meta-methods of archaeology, with
their utopian and teleological promises. Genealogy “does not seek out and describe the
‘things’ that phenomenology holds to be the world, but rather delineates the manner in
which the ‘things’ are ‘made’ into ‘facts’”7—a variant of early ecological thinking that
seeks to represent relation simultaneous with substance. Nietzsche had himself already
proposed a methodological principle by means of which just such a genealogical
representation could be realized. “Main proposition: no regressive hypotheses! (…)
And as many individual observations as possible.” “Task: to see things as they are!
Means: to look on them from a hundred eyes, from many persons.”8

As media researchers who think materiologically, we opt—in the event that we


must choose—for the particulars we can experience and not the all-binding general
which can only be thought. This I learned first and most profoundly from Friedrich
Knilli, the machine builder. This choice we make is deeply connected with respect for
the artifact, for the technical, biological and cultural other, with respect for that which
is not identical with us.

We do not need a new ontology, neither subject- nor object-oriented, in order to


play together, critically and productively, with the things, facts and circumstances, the
words and concepts that have to do with media or that are constituted and produced
through media. Just as Nietzsche wished to provoke the dull, entrenched and encrusted
mentality of parochial philosophers and historians at the end of the 19th century, and
like the clique surrounding Foucault also attempted to do in the last half of the 20th,

7 Tracy B. Strong, Friedrich Nietzsche and the Politics of Transfiguration (Urbana, Chicago: University of Illinois Press, 2000), p. 54.
8 Friedrich Nietzsche, Kritische Studienausgabe (Berlin: de Gruyter, 1980), p. 170. My translation.
now at the start of the 21st century we need another move into the open.9 This is the
title of Berlin-based philosopher Dietmar Kamper’s invitation to unabashed exchange
among architects, artists, musicians, philosophers and media specialists—an invitation
to a debate that need not lead to resolutions but to an intellectual adventure, that may
not necessarily rule out academic chairmanships but that ultimately may not need them.

I have in my arsenal of language no better phrase for describing what this project
toward a genealogy of media thinking is really about. The avant-garde is nothing but
the reinterpretation of by-gone presents, and genealogy proves above all to be an
operation with a lofty aim: namely, that of re-opening the windows and doors onto that
nervous, heterotopic place of possibilities that the thinking of media once occupied, and
organizing passages through the boredom that has taken root there, and recalling the
gardens10 which poachers from the most disparate disciplines and schools of thought
have in passing laid out there and cultivated.

In his book Experimentalsysteme und die epistemischen Dinge (Experimental


Systems and Epistemic Things, 2001), Hans-Jörg Rheinberger, a biologist, philosopher
and historian of science, uses the term Überraschungsgenerator, or generators of
surprise to describe the most important function of the experiment in scientific
laboratory activity. The term originates in the work of molecular biologist Mahlon
Hoagland (1980) and characterizes the epistemic goal of a cultura experimentalis as I
would like to see it. It is a great privilege to be able to write and publish. Use of this
privilege is justified only if the works that we bring into the open, to the public sphere,
are at least approximately capable of calling forth or recalling the peculiar quality
contained in the moment of surprise. To consider texts as generators of surprise is likely
a reliable formulation for the astonishment that one must never unlearn, neither in the
sciences nor in media thinking.

3.

As early as 1965, Kurt W. Marek, using the anagrammatic pseudonym Ceram,


published almost simultaneously in both Germany and the US his explicit Archaeology
of the Cinema. Around this same time, a number of art-historical and culture-historical
texts were also operating with a distinctly archaeological gesture, like for instance Jurgis

9 Umzug ins Offene is the title of an edited volume: Umzug ins Offene. Vier Versuche über den Raum (Move into the Open: Four Experiments About Space), eds. Tom Fecht and Dietmar Kamper (Berlin: Springer, 2000).
10 “…They see an entanglement of spaces that emerges as it would were we somewhere like a cinema auditorium. But the oldest example of a heterotopia may well be the garden…” Michel Foucault, Les hétérotopies/Le corps utopique (Heterotopias/The Utopian Body), Two Radio Lectures, France Culture, December 7 and 21, 1966 (INA, Paris 2004).
Baltrušaitis’s fantastical writings on anamorphic art and the mirror11 or Gustav René
Hocke’s superb 1957 work on mannerism in European art, Die Welt als Labyrinth (The
World As Labyrinth). 12 But archaeology first emerges as a notable thematic and
methodological paradigm in the humanities as a discourse effect of the work of historian,
sociologist and philosopher Michel Foucault. The Birth of the Clinic: An archaeology
of medical perception (Paris: PUF, 1963), The Order of Things: An archaeology of
human sciences (Paris: Gallimard, 1966) and The Archaeology of Knowledge (Paris:
Gallimard, 1969) led a variety of disciplines, some with notable hesitation, to conduct
analyses of historical phenomena which sought to interweave aspects of the political,
the cultural, the technical and the social—to conduct, in other words, interdiscursive
analyses. At the Technical University of Berlin, where I studied, new research projects
were being articulated on topics as diverse as the history of female labor (as in Karin
Hausen’s social history of the sewing machine), the intellectual and social history of
mathematics and the history of computing machines (as in Herbert Mertens and
Hartmut Petzold’s early studies). The periodical Wechselwirkung (Interaction), founded
in 1979 in Berlin, provided a unique platform for this particularly active
interdiscursivity between the sciences of nature and the sciences of mind. Such diverse
archaeologies and genealogies evolved as academic attempts to intervene on the often
encrusted systems of knowledge and organization in the established disciplines and to
aggravate and alter these by means of critical, transdisciplinary reflection.

My first media critical publications emerged from just such a milieu, as did my
early writings on the history of medial attractions like the Arbeiter-Radio-Bewegung
(Workers’ Radio Movement) of the interwar period between 1919 and 1933. In today’s
terms, historicized in relation to hegemonic media apparatuses, one might deem this the
first hacker movement, vested in an aura similar to that which surrounded the self-styled
Guerrilla Television of the electronic avant-garde of the late 1960s and early 1970s.13
The epic gesture of intervening action that we had learned to extrapolate above all from
Bertolt Brecht and his radio heuristic, but also from the hopeful potential in the writings
of Walter Benjamin, was just as important to us as was work on the utopian possibilities
we saw in a collectivity in which, as a rule, there would be no exclusions and no
hegemonic hierarchies in the exchanges among its members. Jürgen Habermas was as

11 See Baltrušaitis, Anamorphoses, ou magie artificielle des effets merveilleux (Paris: Olivier Perrin, 1955), English translation: Anamorphic Art, trans. W.J. Strachan (Harry N. Abrams, 1977); and Le miroir: Essai sur une légende scientifique: révélations, science-fiction et fallacies (The Mirror: Essay on a Scientific Legend: Revelations, Science Fiction and Fallacies) (Paris: Éditions du Seuil, 1978).
12 Hocke, Die Welt als Labyrinth. Manier und Manie in der europäischen Kunst. Beiträge zur Ikonographie und Formgeschichte der europäischen Kunst von 1520 bis 1650 und der Gegenwart (The World as Labyrinth: Manner and mania in European art. Contributions to the iconography and formal history of European art from 1520 to 1650 and the present) (Reinbek: Rowohlt, 1957).
13 See the part history, part instruction manual by Michael Shamberg and Raindance Corporation (New York: Holt, Rinehart and Winston, 1971).
much on our minds as were the protagonists of critical theory, returned from their exile
in the United States.

The opposite of this—the concentration of all power in relations of communication


through ideology, stupefaction, hate and envy—we knew too well from fascism. We
were also able to observe, again and at close range, how the language of propaganda
functioned in other manifestations in East Berlin. Critical engagement with one
particular television event—a US import called “Holocaust,” as it appears here in one
essay—was a central component of a comprehensive teaching and research project on
the media of the Nazi machinery of murder and stupefaction which kept us busy for
more than five years (1978-1982) and from which three books resulted.

In my own development, I underwent an interim phase in this meta-


methodological shift away from the study of media history toward the (an)archaeology
and genealogy of media. As is so often recounted in the biographies of professional
intellectuals, it was in the course of working on my dissertation that a whole new range
of diverse research possibilities opened up for me. While still very much shaped by the
philological tradition of the humanities faculty at my alma mater, and driven by a
powerful curiosity about the world of apparatuses, physics, electrotechnology and
machine construction, I devoted myself, in a manuscript of over 600 pages, to a single
artifact which I attempted to read and comprehend at the intersection of media-
materiological, technical, temporal-philosophical, economic and cultural perspectives.
Today, I would describe a methodological endeavor like this as expanded hermeneutics,
coupled with a particular philology: an almost exact philology of almost precise
things.14 The object of my investigation was an apparatus that could record audio and
visual signals on an electromagnetically laminated tape so as to immediately reproduce
them: the video recorder. The artifact and the technical system in which it was
integrated fascinated me as an early audiovisual time machine. With this apparatus, it
was not only possible to submit filmic and televisual programs to an analytical reading
just like one would read a book, but one could also intervene in their temporal structures
and change them. Deeply influenced by the materialist variants of the Birmingham
School of Cultural Studies on the one hand and by German systems theorists and
historians of technology like Günther Ropohl on the other, in the closing chapter of my
history of the video recorder (Berlin: Wissenschaftsverlag Volker Spiess, 1985) I
termed this ensemble of machine and potential for action Kulturtechnik—or, cultural
technique.

My own explicit (an)archaeologies of media began in close proximity with


experimental practice and expanded hermeneutics. In the early 1990s, the University of
Salzburg in Austria provided me with the wonderful opportunity to teach and research

14 There is a chapter dedicated to this in my book […After the Media]: News from the Slow-Fading Twentieth Century (Minnesota: Univocal, 2013), pp. 173ff.
as a professor of audiovisions, which was the original title of a book I had published in 1989.
This context resulted in 1991 in the project we called “One Hundred—20 Short Films
on the Archaeology of Audiovision,” which was our contribution to a celebration of the
first one hundred years of cinema history. In tandem with this, I prepared essays in the
form of theses for the Austrian magazine Eikon, one of which is presented here for the
first time in American English.

As these early media-archaeological miniatures began to appear, with the usual


interval between completion and publication, the new, large-format technological
project of establishing comprehensive telematic communications via the internet and
the world wide web was well underway. Our writing was racing with the machines, as
they rotated faster and faster. During the time I spent as founder and director of the
Kunsthochschule for Media in Cologne (1993-2000), I began to publish more and more
essays on the arts and artists that were engaging with these new circumstances, flanked
by short manifestos or proclamations, provocations.

In the extreme rush of networked bustle, which also incorporated critique into it, I
began to discover, in an indispensable counter-movement, ever more of that dimension
of the medial that I would go on to tamper with intensively for a good 20 years to come,
much to my great intellectual pleasure: the deep time of the nexus of art, science and
technology. In variantology I came up with a new thinking and playing field, one in
which I was able to investigate this exhilarating context as a unique poetics of relations.

The concept of variantology is a neologism that is ill-suited to the purposes of


standardization. There is clearly a paradox contained in it, one we are familiar with
from other semantic iterations like Georges Bataille’s “heterology” or the “heterotopias”
of Michel Foucault. These too are indebted to a logic of diversity, of multiplicity.
Contrary, divergent, mutually conflicting or even mutually repellent phenomena that as
a rule evade unification are gathered under a provisional roof in such a way that they
may nevertheless drift apart again as needed. Variantology has to do with compounds
or mixtures of a kind whose unmixing always remains within the realm of imagination.
The invocation of the logos in the concept serves less to produce a closed systematic
relationship than it does to perpetually irritate the concept’s inventor and those who
engage themselves with it. The international conferences on variantology that took
place in Cologne, Berlin and Naples between 2004 and 2008 were simply an invitation
to a kind of collaboration in which there were no contracts to sign or programs to
subscribe to; they represented an offer of hospitality that obliged the guests to nothing
but an increased presence of mind in their actual physical presence.15

15 This resulted in the five volumes of Variantology which I was able to publish through Walther
König in Cologne between 2005 and 2011, in collaboration with a rotating pool of editors and on
the basis of a worldwide economy of friendship with the contributing authors.
In contrast to the heterogeneous, with its heavy inflections of ontology and biology,
the variant is more interesting, in methodological and epistemological respects, as a
mode of lightness and movement. As such, the variant is equally at home in
experimental science as it is in diverse artistic practices,16 most forcefully in music.
Variation, versioning, digression—in playing and interpretation—are an obvious part
of the vocabulary as well as the everyday practice of composers and interpreters alike.
In a narrower sense, the variant designates a modulation, say from minor to major tonal
series, brought about by a change in the interval.

The semantic field that I am trying to open by means of this concept has a primarily
positive connotation. To be different, to diverge, to shift, to alternate are themselves
alternative translations for the Latin verb variare. Its connotation topples over into the
negative only when used by the speaking subject as a means of exclusion and
discrimination—which the word itself does not actually abide. To vary something that
is present is an alternative to its destruction, an alternative that played a remarkably
sustaining role in the diverse avant-gardes of the 20th century, in politics as well as art.
And, of course, an attractive medial format also inheres in the concept, a format one
relates to as one would to a sensation. Long before the cinema, the variety show was
experimenting with combining diverse stage practices into a colorful whole that would
come together only in the time of a given performance.

The heterogeneity of variantological research among various concepts of


modernity, between the occident and the orient and among the multiplicity of forms of
European culture, expresses itself in this volume in a series of individual genealogies:
of seeing and of visual perception, of sound and of musical mood, of the electric
theologians and of “Allah’s automata.” These are the earliest texts from this collection.
Research into variantology is ongoing.

4.

In a flat temporal dimension17—which is by now rather alien to the activities of


those who think deep time—the evolution of concepts of media thinking has been
underway for hardly more than a century. It has only been since the end of the second
world war, in other words about seven decades, that scientific, theoretical, philosophical,
semiological and philological engagements with and through media have been
articulated and processed as a distinct discursive field of their own—albeit ever more
unmistakably and increasingly loudly.

16 For a powerful contemporary example in the visual arts, see Allen Ruppersberg, One of Many—Origins and Variants (Cologne: Buchhandlung Walther König, 2005).
17 We have elsewhere dealt extensively with the dimensions of deep time. See, for instance, Deep Time of the Media (Boston: MIT Press, 2006); the German original was published in 2002.
I have attempted a thought experiment in operationally grouping past and present
media researchers and protagonists by generation—not least in order to temporally
locate my own position in the context of this still fledgling genealogy of our field of
intellectual energies. I have started from the presumption that we are presently well into
the seventh generation of explicit media thinkers.18 Given the accelerated development
of the interdiscursive field in the second half of the 20th century, I decided to scale the
shift in generations following decade markers. The generational groupings are not
determined by the age of the thinker but rather on the basis of important differences
each one has individually contributed to this heterogeneous field of knowledge. I have
paid special attention to intelligible discourse effects that have been observed in Europe
and that have also had an intelligible impact in Germany, for instance.

Early thinkers through the end of WWII: Theodor W. Adorno, Rudolf Arnheim,
W. Ross Ashby, André Bazin, Walter Benjamin, Henri Bergson, Bertolt Brecht, Karl
Bühler, Claude Cahun, Ernst Cassirer, Germaine Dulac, Sergei Eisenstein, Gisèle
Freund, René Fülöp-Miller, Aleksei Gastev, Siegfried Giedion, Fritz Heider, Max
Horkheimer, Harold Innis, Ernst Kapp, Siegfried Kracauer, Lev Kuleshov, Harold
Lasswell, Kazimir Malevich, Filippo T. Marinetti, Solomon Nikritin, John von
Neumann, Charles S. Peirce, Luigi Russolo, Ferdinand de Saussure, Hermann
Scherchen, Claude Shannon, Wilbur Schramm, Alan Turing, Dziga Vertov, Paul
Watzlawick, Hermann Weyl, Fritz Winckel…

First mid- and post-war generation (explicitly active since the 1940s/50s): Günther
Anders, Peter Bächlin, Roland Barthes, Max Bense, John Berger, Maya Deren, Jean-
Luc Godard, Richard Hoggart, Danièle Huillet, E. Katz/J.G. Blumler, Harry Kramer,
Marshall McLuhan, Werner Meyer-Eppler, Abraham Moles, Raymond Queneau,
Gilbert Simondon, Hans Heinz Stuckenschmidt, Wolf Vostell, Roman Wajdowicz, The
Whitney Brothers, Norbert Wiener…

Second generation (explicitly active since the 1960s): Dieter Baacke, Nanni
Balestrini, Gianfranco Baruchello, Konrad Bayer, Gilbert Cohen-Séat, Guy Debord,
Umberto Eco, Vilém Flusser (in Brazil), Otto F. Gmelin, Jürgen Habermas, Helmut
Heißenbüttel, Walter Höllerer, Friedrich Knilli, Ferdinand Kriwet, Gerhard Maletzke,
Denis McQuail, Christian Metz, Franz Mon, Frieder Nake, Georg Nees, Ted Nelson,
Nam June Paik, Pier Paolo Pasolini, Wolfgang Ramsbott, Jasia Reichardt, Gerhard
Rühm, Marc Vernet, Paul Virilio, Peter Weibel, Oswald Wiener, Raymond Williams…

Third generation (explicitly active since the 1970s): Jean-Louis Baudry, Hans
Belting, René Berger, Gábor Bódy, Jean-Louis Comolli, Gilles Deleuze, Mary Ann
Doane, Franz Dröge, Hermann Klaus Ehmer, Thomas Elsaesser, Hans Magnus

18 For the distinction between explicit and implicit media thinkers, see Zielinski, […After the Media]: News from the Slow-Fading Twentieth Century (Minnesota: Univocal, 2013), esp. ch. 3, p. 173ff. The implicit media thinkers are not contained in the list.
Enzensberger, VALIE EXPORT, Friede Grafe, Félix Guattari, Hans Ulrich Gumbrecht,
Stuart Hall, Stephen Heath, Knut Hickethier, Horst Holzer, Stuart Hood, Eberhard
Knödler-Bunte, Gerhard Lischka, Laura Mulvey, Friederike Pezold, Marcelin Pleynet,
Hans Posner, Erwin Reiss, Michel Serres, Kristin Thompson, Sven Windahl, Peter
Wollen…

Fourth generation (explicitly active since the 1980s): Anne-Marie Duguet, Peter
Bexte, Friedrich Kittler, Teresa de Lauretis, Vilém Flusser (in Europe), Florian Rötzer,
Dietmar Kamper, Avital Ronell, Jean Baudrillard, Sybille Krämer, Arthur and
Marilouise Kroker, Werner Künzel, Miklós Peternák, Jean-François Lyotard, Pierre
Lévy, Hartmut Petzold, Hans-Ulrich Reck, Irit Rogoff, Gerburg Treusch-Dieter, Georg
Christoph Tholen, Michael Wetzel, Hartmut Winkler, Christina von Braun, Joachim
Paech, Siegfried Zielinski…

Fifth generation (explicitly active since the 1990s): Marie-Luise Angerer, Peter
Berz, Manuel Castells, Régis Debray, Manuel DeLanda, Bernhard Dotzler, Timothy
Druckrey, Lorenz Engell, Wolfgang Ernst, Matthew Fuller, Ulrike Gabriel, Miriam
Hansen, Donna Haraway, N. Katherine Hayles, Hans-Christian von Herrmann, Erkki
Huhtamo, Brenda Laurel, Thomas Y. Levin, Geert Lovink, Lev Manovich, Dieter
Mersch, Brian Massumi, Alla Mitrofanova, Claus Pias, Nils Röller, Henning
Schmidgen, Bernhard Siegert, Andrey Smirnov…

Sixth generation (explicitly active in the 2000s and beyond): Arianna Borrelli,
Knut Ebeling, Alexander Galloway, Erich Hörl, Ute Holl, Yuk Hui, David Link, Mara
Mills, Jussi Parikka, Matteo Pasquinelli, Patricia Pisters, Raqs Media Collective, Gao
Shiming, Hito Steyerl, Frederik Stjernfelt, Eugene Thacker, Tiqqun, Joanna Zylinska,
et al.

***

心灵怎样参与(人工)交流?

——(以)机器思考的替代路径
汉 森(Mark Hansen)

我的论文将首先讨论目前军事无人机升级项目中与日俱增的决策自动化趋
势,细审此一趋势可以看出,它并未能够为自动决策提供技术支撑,反将此类项
目推向僵局,凸显出计算机器及其运算过程缺乏应变性。简言之,应变性才是仿
真过程的关节点。我将此种僵局归因于人工智能研究乃至于机器学习算法研究中
的个人主义,进而试图借助法国哲学家吉尔伯特·西蒙东的相关理论,寻求一种
替代性的趋近于(以)机器思考的路径,藉由研究人机联合过程中适度的科技个
体性生成之可能性,补充完善西蒙东的思考。实现此种个性生成的关键在于“关
系化的自主性”这一概念,关系自主异于实质自主,指机器凭借自身的可操作性
而获得自主性,就当前的算法系统(超乎所有具体网络之上)而言,关系自主指
通过处理社会与情感数据活动获得自主性。为探明这一发展的潜力,助益目前愈
发广泛的人机集合协同现象之理论化,更贴切地预测未来真正意义上的机器智能,
我主张机器的关系自主来自于其对人类应变性数据的处理,其实质是人类出借给
机器的虚拟应变性,我将借助对 HBO 科幻连续剧《西部世界》中一些场景的分
析具体论证此一观点。

Mark Hansen

How Can the Mind Participate in (Artificial)


Communication?: An Alternate Path Toward
Thinking (with) Machines

Abstract: My paper will begin with the escalation of the drive to autonomize
decision-making in contemporary military drone development programs. By submitting
this drive to critical inspection, I will suggest that, far from forming a technical fix for
the problem of automating decision, this line of development isolates the impasse to
any such project: the fact that computational machines and processes lack contingency.
Put simply, contingency constitutes a cog in the process of simulation. I shall link this
impasse to the individualist ontology of artificial intelligence research, up to and
including contemporary work on machine learning algorithms, and with the help of
French philosopher, Gilbert Simondon, seek to suggest an alternative path toward
thinking (with) machines, one that expands on Simondon’s work by investing in the
possibility of a joint human-machinic process of properly technological individuation.

The key to such a process is, I shall argue, the concept of relational autonomy; in
contrast to all notions of substance autonomy, relational autonomy stipulates that
machines acquire autonomy through their operationality, and that in the case of
contemporary computational systems (above all the web), this means they acquire
autonomy by processing social and affective data. To explore the potential of this
development, both for contemporary theorization of the co-functioning of humans and
machines in larger, technically-distributed assemblages, and for future speculation
regarding truly machinic intelligence, I will argue that relational autonomy of machines
is obtained by their processing of human contingency – a virtual contingency that we
humans lend to them. I will try to make these issues concrete by analyzing some scenes
from the recent HBO science fiction series, Westworld.

其他的物:人工智能、机器人和社会

冈克尔(David J. Gunkel)

我们正处于被机器人入侵的时代。现在,机器无处不在,几乎可以做任何事
情。我们在网络上和他们聊天,和他们一起玩数码游戏,并且依赖他们日益见长
的本领来组织和管理我们日常生活的方方面面。 面对机器入侵,最关键的问题
是我们如何理解并应对由此带来的全新的社会机遇和挑战。本研究将分成三个步
骤或行动。第一步将重新评估我们定义和理解事物的典型方式。这将以工具理论
为目标,并且重新审视这一理论,因为工具理论将事物,特别是技术产品,仅仅
视为服务于人类利益和目标的工具而已。第二步将探讨人工智能、学习算法和社
交机器人的最新进展给这种标准的默认理解带来怎样的机遇和挑战。最后,作为
结语,第三部分将说明后果,阐述这一发展对我们意味着什么,还有哪些我们可
以与之交流和互动的实体,以及哪些新的社会现状和环境开始规定 21 世纪的生
活方式。

David J. Gunkel

Other Things: AI, Robots and Society

We are, it seems, in the midst of a robot apocalypse. The invasion, however, does
not look like what we have been programmed to expect from decades of science fiction
literature and film. It occurs not as a spectacular catastrophe involving a marauding
army of alien machines descending from the heavens with weapons of immeasurable
power. Instead, it takes place, and is already taking place, in ways that look more like
the fall of Rome than Battlestar Galactica, with machines of various configurations and
capabilities slowly but surely coming to take up increasingly important and influential
positions in everyday social reality. “The idea that we humans would one day share the
Earth with a rival intelligence,” Philip Hingston (2014) writes, “is as old as science
fiction. That day is speeding toward us. Our rivals (or will they be our companions?)
will not come from another galaxy, but out of our own strivings and imaginings. The
bots are coming: chatbots, robots, gamebots.”

And the robots are not just coming. They are already here. In fact, our
communication and information networks are overrun, if not already run, by machines.
It is now estimated that over 50% of online traffic is machine generated and consumed
(Zeifman 2017). This will only increase with the Internet of things (IoT), which is
expected to support over 26 billion interactive and connected devices by 2020 (by way
of comparison, the current human population of planet earth is estimated to be 7.4
billion) (Gartner 2013). We have therefore already achieved and live in that future
Norbert Wiener (1950) had predicted at the beginning of The Human Use of Human
Beings: Cybernetics and Society: “It is the thesis of this book that society can only be
understood through a study of the messages and the communication facilities which
belong to it; and that in the future development of these messages and communication
facilities, messages between man and machines, between machines and man, and
between machine and machine, are destined to play an ever-increasing part” (p. 16).

What matters most in the face of this machine incursion is not resistance—insofar
as resistance is already futile—but how we decide to make sense of and respond to the
new social opportunities or challenges that these things make available to us. The
investigation of this matter will proceed through three steps or movements. The first
part will critically reevaluate the way we typically situate and make sense of things. It
will therefore target and reconsider the instrumental theory, which characterizes things, and technological artifacts in particular, as nothing more than tools serving human
interests and objectives. The second will investigate the opportunities and challenges
that recent developments with artificial intelligence, learning algorithms, and social
robots pose to this standard default understanding. These other kinds of things challenge
and exceed the conceptual boundaries of the instrumental theory and ask us to reassess
who or what is (or can be) a legitimate social subject. Finally, and by way of conclusion,
the third part will draw out the consequences of this material, explicating what this
development means for us, the other entities with which we communicate and interact,
and the new social situations and circumstances that are beginning to define life in the
21st century.

1. Standard Operating Presumptions

There is, it seems, nothing particularly interesting or extraordinary about things. We all know what things are; we deal with them every day. But as Martin Heidegger
(1962) pointed out, this immediacy and proximity is precisely the problem. Marshall
McLuhan and Quentin Fiore (2001) cleverly explained it this way: “one thing about
which fish know exactly nothing is water” (p. 175). Like fish that cannot perceive the
water in which they live and operate, we are, Heidegger argues, often unable to see the
things that are closest to us and comprise the very milieu of our everyday existence. In
response to this, Heidegger commits considerable effort to investigating what things
are and why things are more difficult than they initially appear. In fact, “the question of things” is one of the principal concerns and an organizing principle of
Heidegger’s ontological project (Benso, 2000, p. 59), and this concern with things
begins right at the beginning of his 1927 magnum opus, Being and Time: “The Greeks
had an appropriate term for ‘Things’: πράγματα [pragmata]—that is to say, that which
one has to do with in one's concernful dealings (πραξις). But ontologically, the specific
‘pragmatic’ character of the πράγματα is just what the Greeks left in obscurity; they
thought of these ‘proximally’ as ‘mere Things’. We shall call those entities which we
encounter in concern 'equipment' [Zeug]” (Heidegger, 1962, p. 96-97).

According to Heidegger’s analysis, things are not, at least not initially, experienced as mere entities out there in the world. They are always pragmatically
situated and characterized in terms of our involvements and interactions with the world
in which we live. For this reason, things are first and foremost revealed as “equipment,”
which are useful for our endeavors and objectives. “The ontological status or the kind
of being that belongs to such equipment,” Heidegger (1962) explains, “is primarily
exhibited as 'ready-to-hand' or Zuhandenheit, meaning that some-thing becomes what
it is or acquires its properly 'thingly character' when we use it for some particular
purpose” (p. 98). According to Heidegger, then, the fundamental ontological status, or
mode of being, that belongs to things is primarily exhibited as “ready-to-hand,” meaning that something becomes what it is or acquires its properly “thingly character”
in coming to be put to use for some particular purpose. A hammer, one of Heidegger's
principal examples, is for building a house to shelter us from the elements; a pen is for
writing an essay like this; a shoe is designed to support the activity of walking.
Everything is what it is in having a “for which” or a destination to which it is always
and already referred. Everything therefore is primarily revealed as being a tool or an
instrument that is useful for our purposes, needs, and projects.1

This mode of existence—what Graham Harman (2002) calls “tool-being”—applies not just to human artifacts, like hammers, pens, and shoes. It also describes the
basic ontological condition of natural objects, which are, as Heidegger (1962) explains,
discovered in the process of being put to use: “The wood is a forest of timber, the
mountain a quarry of rock, the river is water-power, the wind is wind ‘in the sails’” (p.
100). Everything therefore exists and becomes what it is insofar as it is useful for some
humanly defined purpose. Things are not just out there in a kind of raw and naked state
but come to be what they are in terms of how they are already put to work and used as
equipment for living. And this is what makes things difficult to see or perceive.
Whatever is ready-to-hand is essentially transparent, unremarkable, and even invisible.
“The peculiarity,” Heidegger (1962) writes, “of what is proximally ready-to-hand is
that, in its readiness-to-hand, it must as it were, withdraw in order to be ready-to-hand
quite authentically. That with which our everyday dealings proximally dwell is not the
tools themselves. On the contrary, that with which we concern ourselves primarily is
the work” (p. 99). Or as Michael Zimmerman (1990) explains by way of Heidegger's
hammer, “In hammering away at the sole of a shoe, the cobbler does not notice the
hammer. Instead, the tool is in effect transparent as an extension of his hand…For tools
to work right, they must be ‘invisible,’ in the sense that they disappear in favor of the
work being done” (p. 139).

This understanding of things can be correlated with the “instrumental theory of technology,” which Heidegger subsequently addresses in The Question Concerning Technology (1977). As Andrew Feenberg (1991) has summarized it, “the
instrumentalist theory offers the most widely accepted view of technology. It is based
on the common sense idea that technologies are 'tools' standing ready to serve the
purposes of users” (p. 5). And because a tool or an instrument “is deemed 'neutral,'
without valuative content of its own,” a technological thing is evaluated not in and of
itself, but on the basis of the particular employments that have been operationalized by
its human designer, manufacturer, or user. Following from this, technical devices, no
matter how sophisticated or autonomous they appear or are designed to be, are typically
not considered the responsible agent of actions that are performed with or through them.
"Morality, "as J. Storrs Hall (2001) points out, "rests on human shoulders, and if
machines changed the ease with which things were done, they did not change
responsibility for doing them. People have always been the only 'moral agents'" (p. 2).

- 24 -
To put it in colloquial terms (which nevertheless draw on and point back to Heidegger’s
example of the hammer): “It is a poor carpenter who blames his tools.”

This way of thinking not only sounds level-headed and reasonable; it is also one of the standard assumptions deployed in the field of technology and computer ethics.
According to Deborah Johnson’s (1985) formulation, "computer ethics turns out to be
the study of human beings and society—our goals and values, our norms of behavior,
the way we organize ourselves and assign rights and responsibilities, and so on" (p. 6).
Computers, she recognizes, often "instrumentalize" these human values and behaviors
in innovative and challenging ways, but the bottom-line is and remains the way human
agents design and use (or misuse) such technology. Understood in this way, computer
systems, no matter how automatic, independent, or seemingly intelligent they may
become, "are not and can never be (autonomous, independent) moral agents" (Johnson,
2006, p. 203). They will, like all other things, always be instruments of human value,
decision making, and action.

2. Other Kinds of Things

This instrumentalist way of thinking not only sounds reasonable; it is obviously useful. It is, one might say, instrumental for parsing and responding to questions
concerning proper conduct and social responsibility in the age of increasingly complex
technological devices and systems. And it has a distinct advantage in that it locates
accountability in a widely-accepted and seemingly intuitive subject position, in human
decision making and action. At the same time, however, this particular formulation also
has significant theoretical and practical limitations, especially as it applies (or not) to
recent innovations. Let’s consider three examples that not only complicate the operative
assumptions and consequences of the instrumental theory but require new ways of
perceiving and theorizing the social challenges and opportunities of things.

2.1 Things that Talk

From the beginning, it is communication—and specifically, a tightly constrained form of conversational interpersonal dialogue—that provides the field of artificial
intelligence (AI) with its definitive characterization and test case. This is immediately
evident in the agenda-setting paper that is credited with defining machine intelligence,
Alan Turing's "Computing Machinery and Intelligence," which was first published in
the journal Mind in 1950. Although the term "artificial intelligence" is a product of the
Dartmouth Conference of 1956, it is Turing's seminal paper and the "game of imitation"
that it describes—what is now routinely called "the Turing Test"—that defines and
characterizes the field. “The idea of the test,” Turing (2004) explained in a BBC
interview from 1952, “is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretense is reasonably
convincing. A considerable proportion of a jury, who should not be experts about
machines, must be taken in by the pretense. They aren’t allowed to see the machine
itself—that would make it too easy. So the machine is kept in a faraway room and the
jury are allowed to ask it questions, which are transmitted through to it” (p. 495).
According to Turing's stipulations, if a machine is capable of successfully simulating a
human being in communicative interactions to such an extent that human interlocutors
(or “a jury” as Turing calls them in the 1952 interview) cannot tell whether they are
talking with a machine or another human being, then that device would need to be
considered intelligent (Gunkel 2012b).

At the time that Turing published the paper proposing this test-case, he estimated
that the tipping point—the point at which a machine would be able to successfully play
the game of imitation—was at least half-a-century in the future. "I believe that in about
fifty years’ time it will be possible to programme computers, with a storage capacity of
about 10⁹, to make them play the imitation game so well that an average interrogator
will not have more than 70 per cent chance of making the right identification after five
minutes of questioning" (Turing, 1999, p. 44). It did not take that long. Already in 1966
Joseph Weizenbaum demonstrated a simple natural language processing (NLP)
application that was able to converse with human interrogators in such a way as to
appear to be another person. ELIZA, as the application was called, was what we now
recognize as a “chatterbot.” This proto-chatterbot3 was actually a rather simple piece of
programming, “consisting mainly of general methods for analyzing sentences and
sentence fragments, locating so-called key words in texts, assembling sentence from
fragments, and so on. It had, in other words, no built-in contextual framework of
universe of discourse. This was supplied to it by a 'script.' In a sense ELIZA was an
actress who commanded a set of techniques but who had nothing of her own to say"
(Weizenbaum, 1976, p. 188). Despite this rather simple architecture, Weizenbaum's
program demonstrated what Turing had initially predicted:

ELIZA created the most remarkable illusion of having understood in the minds of many people who conversed with it. People who knew
very well that they were conversing with a machine soon forgot that fact,
just as theatergoers, in the grip of suspended disbelief, soon forget that
the action they are witnessing is not “real.” This illusion was especially
strong and most tenaciously clung to among people who knew little or nothing
nothing about computers. They would often demand to be permitted to
converse with the system in private, and would, after conversing with it
for a time, insist, in spite of my explanations, that the machine really
understood them (Weizenbaum, 1976, p. 189).

Since the debut of ELIZA, there have been numerous advancements in chatterbot
design, and these devices now populate many of the online social spaces in which we
live, work, and play. As a result of this proliferation, it is not uncommon for users to
assume they are talking to another (human) person, when in fact they are just chatting
up a chatterbot. This was the case for Robert Epstein, a Harvard University PhD and
former editor of Psychology Today, who fell in love with and had a four-month online “affair” with a chatterbot (Epstein, 2007). This was possible not because the bot, which
went by the name “Ivana,” was somehow intelligent, but because the bot’s
conversational behavior was, in the words of Byron Reeves and Clifford Nass (1996),
“close enough to human to encourage social responses” (p. 22). And this approximation
is not necessarily “a feature of the sophistication of bot design, but of the low bandwidth
communication of the online social space,” where it is much easier to convincingly
simulate a human agent (Mowbray, 2002, p. 2).

Despite this knowledge—despite educated, well-informed experts like Epstein (2007), who has openly admitted that “I know about such things and I should have
certainly known better” (p. 17)—these software implementations can have adverse
effects on both the user and the online communities in which they operate. To make
matters worse (or perhaps more interesting) the problem is not something that is unique
to amorous interpersonal relationships. “The rise of social bots,” as Andrea Peterson
(2013) accurately points out, “isn't just bad for love lives—it could have broader
implications for our ability to trust the authenticity of nearly every interaction we have
online” (p. 1). Case in point—national politics and democratic governance. In a study
conducted during the 2016 US Presidential campaign, Alessandro Bessi and Emilio
Ferrara (2016) found that “the presence of social media bots can indeed negatively
affect democratic political discussion rather than improving it, which in turn can
potentially alter public opinion and endanger the integrity of the Presidential election”
(p. 1).

But who or what is culpable in these circumstances? The instrumental theory typically leads such questions back to the designer of the application, and this is
precisely how Epstein (2007) made sense of his own experiences, blaming (or crediting)
“a very smug, very anonymous computer programmer” who he assumes was located
somewhere in Russia (p. 17). But things are already more complicated. Epstein is, at
least, partially responsible for “using” the bot and deciding to converse with it, and the
online community in which Epstein met Ivana is arguably responsible for permitting
(perhaps even encouraging) such “deceptions” in the first place. For this reason, the
assignment of culpability is not as simple as it might first appear to be. As Mowbray
(2002) argues, interactions like this "show that a bot may cause harm to other users or
to the community as a whole by the will of its programmers or other users, but that it
also may cause harm through nobody's fault because of the combination of circumstances involving some combination of its programming, the actions and mental
or emotional states of human users who interact with it, behavior of other bots and of
the environment, and the social economy of the community" (p. 4). Unlike artificial
general intelligence (AGI), which would presumably occupy a subject position
reasonably close to that of another human agent, these ostensibly mindless but very
social things simply muddy the water (which is probably worse) by complicating and
leaving undecided questions regarding agency and instrumentality.

2.2 Things that Think for Themselves

Standard chatterbot architecture, like many computer applications, depends on programmers coding explicit step-by-step instructions—ostensibly a set of nested
conditional statements that are designed to respond to various kinds of input and
machine states. In order to have ELIZA, or any other chatterbot, “talk” to a human user,
human programmers need to anticipate everything that might be said to the bot and then
code instructions to generate an appropriate response. If, for example, the user types
“Hi, how are you,” the application can be designed to identify this pattern of words
and to respond with a pre-designated result, what Weizenbaum called a “script.”
Machine learning, however, provides an alternative approach to application design and
development. “With machine learning,” as Wired magazine explains, “programmers do
not encode computers with instructions. They train them” (Tanz, 2016, p. 77). Although
this alternative is nothing new—it was originally proposed and demonstrated by Arthur
Samuel as early as 1956—it has recently gained popularity by way of some highly
publicized events involving Google DeepMind’s AlphaGo, which beat one of the most
celebrated players of the notoriously difficult board game Go, and Microsoft’s
Twitterbot Tay.ai, which learned to become a hate-spewing neo-Nazi racist after
interacting with users on the Internet.
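
To make this scripted architecture concrete, consider a minimal illustrative sketch, in Python, of the kind of keyword-and-script matching just described. The rule table (RULES), the entry point (respond), and the canned phrasings are hypothetical inventions for the purpose of illustration, not ELIZA’s actual script:

    # A minimal, ELIZA-style keyword-and-script sketch (illustrative only).
    # Every input pattern must be anticipated in advance by the programmer;
    # anything that matches no rule falls through to a canned deflection.
    RULES = [
        ("how are you", "I am fine. How are you today?"),
        ("i feel", "Why do you feel that way?"),
        ("mother", "Tell me more about your family."),
    ]

    def respond(user_input):
        text = user_input.lower()
        for keyword, scripted_reply in RULES:
            if keyword in text:        # locate a "key word" in the text
                return scripted_reply  # return the pre-designated response
        return "Please go on."         # default when nothing matches

    print(respond("Hi, how are you."))  # -> I am fine. How are you today?
    print(respond("I feel anxious."))   # -> Why do you feel that way?

The essential point, already captured in Weizenbaum’s theatrical metaphor, is that everything such a program can say has been written into it in advance; the machine contributes nothing of its own.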

Both AlphaGo and Tay are AI systems using connectionist architecture. AlphaGo,
as Google DeepMind (2016) explains, “combines Monte-Carlo tree search with deep
neural networks that have been trained by supervised learning, from human expert
games, and by reinforcement learning from games of self-play.” In other words,
AlphaGo does not play the game of Go by following a set of cleverly designed moves
described and defined in code by human programmers. The application is designed to
formulate its own instructions from discovering patterns in existing data that has been
assembled from games of expert human players (“supervised learning”) and from the
trial-and-error experience of playing the game against itself (“reinforcement learning”).
Although less is known about the exact inner workings of Tay, Microsoft explains that
the system “has been built by mining relevant public data,” i.e. training its neural
networks on anonymized data obtained from social media, and was designed to evolve
its behavior from interacting with users on social networks like Twitter, Kik, and GroupMe (Microsoft 2016a). What both systems have in common is that the engineers
who designed and built them have no idea what these things will eventually do once
they are in operation. As Thore Graepel, one of the creators of AlphaGo, has explained:
“Although we have programmed this machine to play, we have no idea what moves it
will come up with. Its moves are an emergent phenomenon from the training. We just
create the data sets and the training algorithms. But the moves it then comes up with
are out of our hands” (Metz, 2016, p. 1). Consequently, machine learning systems, like
AlphaGo, are intentionally designed to do things that their programmers cannot
anticipate or completely control. In other words, we now have autonomous (or at least
semi-autonomous) things that in one way or another have “a mind of their own.” And
this is where things get interesting, especially when it comes to questions of social
responsibility and behavior.
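
By way of contrast, the learned approach can be sketched with a deliberately toy example in the same vein. The training data, labels, and function names below are invented placeholders (this is neither Tay’s corpus nor AlphaGo’s architecture); the sketch only illustrates the general principle that the mapping from input to output is induced from data rather than written out by hand:

    # Schematic contrast with the scripted bot above: the behavior is not
    # coded by hand but induced from example data (illustrative placeholders).
    from collections import Counter, defaultdict

    training_data = [
        ("hello there", "greeting"),
        ("hi friend", "greeting"),
        ("you are wrong", "disagreement"),
        ("that is wrong and bad", "disagreement"),
    ]

    # "Training": count how often each word co-occurs with each label.
    word_label_counts = defaultdict(Counter)
    for sentence, label in training_data:
        for word in sentence.split():
            word_label_counts[word][label] += 1

    def classify(sentence):
        # Score each label by the words the input shares with the data.
        scores = Counter()
        for word in sentence.lower().split():
            scores.update(word_label_counts.get(word, Counter()))
        return scores.most_common(1)[0][0] if scores else "unknown"

    print(classify("hi there"))     # -> greeting
    print(classify("you are bad"))  # -> disagreement

Change the training examples and the identical code yields a different bot, which is why designers like Graepel can say that the moves such a system comes up with are “out of our hands.”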

AlphaGo was designed to play Go, and it proved its ability by beating an expert
human player. So who won? Who gets the accolade? Who actually beat the Go
champion Lee Sedol? Following the dictates of the instrumental theory, actions
undertaken with the computer would be attributed to the human programmers who
initially designed the system and are capable of answering for what it does or does not
do. But this explanation does not necessarily hold for an application like AlphaGo,
which was deliberately created to do things that exceed the knowledge and control of
its human designers. In fact, in most of the reporting on this landmark event, it is not
Google or the engineers at DeepMind who are credited with the victory. It is AlphaGo.
In published rankings, for instance, it is AlphaGo that is named as the number two
player in the world (Go Ratings, 2016). Things get even more complicated with Tay,
Microsoft’s foul-mouthed teenage AI, when one asks the question: Who is responsible
for Tay’s bigoted comments on Twitter? According to the standard instrumentalist way
of thinking, we would need to blame the programmers at Microsoft, who designed the
application to be able to do these things. But the programmers obviously did not set out
to create a racist Twitterbot. Tay developed this reprehensible behavior by learning
from interactions with human users on the Internet. So how did Microsoft answer for
this? How did they explain things?

Initially a company spokesperson—in damage-control mode—sent out an email to Wired, The Washington Post, and other news organizations that sought to blame the
victim. “The AI chatbot Tay,” the spokesperson explained, “is a machine learning
project, designed for human engagement. It is as much a social and cultural experiment,
as it is technical. Unfortunately, within the first 24 hours of coming online, we became
aware of a coordinated effort by some users to abuse Tay’s commenting skills to have
Tay respond in inappropriate ways. As a result, we have taken Tay offline and are
making adjustments” (Risely, 2016). According to Microsoft, it is not the programmers
or the corporation who are responsible for the hate speech. It is the fault of the users (or
some users) who interacted with Tay and taught her to be a bigot. Tay’s racism, in other words, is our fault. Later, on 25 March 2016, Peter Lee, VP of Microsoft Research,
posted the following apology on the Official Microsoft Blog: “As many of you know
by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the
unintended offensive and hurtful tweets from Tay, which do not represent who we are
or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to
bring Tay back only when we are confident we can better anticipate malicious intent
that conflicts with our principles and values” (Microsoft, 2016b). But this apology is
also frustratingly unsatisfying or interesting (it all depends on how you look at it).
According to Lee’s carefully worded explanation, Microsoft is only responsible for not
anticipating the bad outcome; it does not take responsibility for the offensive tweets.
For Lee, it is Tay who (or “that,” and words matter here) is named and recognized as
the source of the “wildly inappropriate and reprehensible words and images” (Microsoft,
2016b). And since Tay is a kind of “minor” (a teenage AI) under the protection of her
parent corporation, Microsoft needed to step in, apologize for their “daughter’s” bad
behavior, and put Tay in a time out.

Although the extent to which one might assign "agency" and "responsibility" to
these mechanisms remains a contested issue, what is not debated is the fact that the
rules of the game have changed significantly. As Andreas Matthias (2004) points out,
summarizing his survey of learning automata:

Presently there are machines in development or already in use which are able to decide on a course of action and to act without human
intervention. The rules by which they act are not fixed during the
production process, but can be changed during the operation of the
machine, by the machine itself. This is what we call machine learning.
Traditionally we hold either the operator/manufacturer of the machine responsible for the consequences of its operation or "nobody" (in cases
where no personal fault can be identified). Now it can be shown that
there is an increasing class of machine actions, where the traditional
ways of responsibility ascription are not compatible with our sense of
justice and the moral framework of society because nobody has enough
control over the machine's actions to be able to assume responsibility for
them (p. 177).

In other words, the instrumental theory of things, which had effectively tethered
machine action to human agency, no longer adequately applies to mechanisms that have
been deliberately designed to operate and exhibit some form, no matter how
rudimentary, of independent action or autonomous decision making. Contrary to the
usual instrumentalist way of thinking, we now have things that are deliberately designed to exceed our control and our ability to respond or to answer for them.

2.3 Things that are More than Things

In July of 2014 the world got its first look at Jibo. Who or what is Jibo? That is an
interesting and important question. In a promotional video that was designed to raise
capital investment through pre-orders, social robotics pioneer Cynthia Breazeal
introduced Jibo with the following explanation: “This is your car. This is your house.
This is your toothbrush. These are your things. But these [and the camera zooms into a
family photograph] are the things that matter. And somewhere in between is this guy.
Introducing Jibo, the world’s first family robot” (Jibo 2014). Whether explicitly
recognized as such or not, this promotional video leverages a crucial ontological
distinction that Jacques Derrida (2005) calls the difference between “who” and “what”
(p. 80). On the side of “what” we have those things that are mere instruments—our car,
our house, and our toothbrush. According to the usual way of thinking, these things are
mere instruments or tools that do not have any independent status whatsoever. We
might worry about the impact that the car’s emissions has on the environment (or
perhaps stated more precisely, on the health and well-being of the other human beings
who share this planet with us), but the car itself is not a socially significant subject. On
the other side there are, as the video describes it “those things that matter.” These things
are not things, strictly speaking, but are the other persons who count as socially and
morally significant Others. Unlike the car, the house, or the toothbrush, these Others
have independent status and can be benefitted or harmed by our decisions and actions.

Jibo, we are told, occupies a place that is situated somewhere in between what are
mere things and those Others who really matter. Consequently Jibo is not just another
instrument, like the automobile or toothbrush. But he/she/it (and the choice of pronoun
is not unimportant) is also not quite another member of the family pictured in the
photograph. Jibo inhabits a place in between these two ontological categories. It is a
kind of “quasi-other” (Ihde, 1990, p. 107). This is, it should be noted, not unprecedented.
We are already familiar with other entities that occupy a similar ambivalent social
position, like the family dog. In fact animals, which since the time of René Descartes
have been the other of the machine (Gunkel, 2012a, p. 60), provide a good precedent
for understanding the changing nature of things in the face of social robots, like Jibo.
“Looking at state of the art technology,” Kate Darling (2012) writes, “our robots are
nowhere close to the intelligence and complexity of humans or animals, nor will they
reach this stage in the near future. And yet, while it seems far-fetched for a robot’s legal
status to differ from that of a toaster, there is already a notable difference in how we
interact with certain types of robotic objects” (p. 1). This occurs, Darling continues,
because of our tendencies to anthropomorphize things by projecting into them cognitive
capabilities, emotions, and motivations that do not necessarily exist in the mechanism
per se. But it is this emotional reaction that necessitates new forms of obligation in the face of things. “Given that many people already feel strongly about state-of-the-art
social robot ‘abuse,’ it may soon become more widely perceived as out of line with our
social values to treat robotic companions in a way that we would not treat our pets”
(Darling, 2012, p. 1).

Jibo, and other social robots like it, are not science fiction. They are already or will
soon be in our lives and in our homes. As Breazeal (2002) describes it, “a sociable robot
is able to communicate and interact with us, understand and even relate to us, in a
personal way. It should be able to understand us and itself in social terms. We, in turn,
should be able to understand it in the same social terms—to be able to relate to it and
to empathize with it…In short, a sociable robot is socially intelligent in a human-like
way, and interacting with it is like interacting with another person” (p. 1). In the face
of these socially situated and interactive entities we are going to have to decide whether
they are mere things like our car, our house, and our toothbrush; someone who matters
like another member of the family; or something altogether different that is situated in
between the one and the other. In whatever way this comes to be decided, however,
these things will undoubtedly challenge the way we typically distinguish between who
is to be considered another social subject and what remains a mere instrument or tool.

3. Between a Bot and a Hard Place

Although things are initially experienced and revealed in the mode of being
Heidegger calls Zuhandenheit (e.g. instruments that are useful or handy for our
purposes and endeavors), things do not necessarily end here. They can also, as
Heidegger (1962) explains, be subsequently disclosed as present-at-hand, or
Vorhandenheit, revealing themselves to us as objects that are or become, for one reason
or another, un-ready-to-hand (p. 103). This occurs when things, which had been
virtually invisible instruments, fail to function as they should or as they were designed to, and so get in the way of their own instrumentality. “The equipmental character of things,” Silvia
Benso (2000) writes, “is explicitly apprehended via negativa when a thing reveals its
unusability, or is missing, or ‘stands in the way’” (p. 82). And this is what happens with
things like chatterbots, machine learning applications, and social robots insofar as they
interrupt or challenge the smooth functioning of their instrumentality. In fact, what we
see in the face of these things is not just the failure of a particular piece of equipment—
e.g. the failure of a bot like “Ivana” to successfully pass as another person in
conversational interactions or the unanticipated and surprising effect of a Twitterbot
like Tay that learned to be a neo-Nazi racist—but the limit of the standard
instrumentalist way of thinking itself. In other words, what we see in the face of chatterbots, machine learning algorithms, and social robots are things that intentionally challenge and undermine the standard way of thinking about and making sense of things. Responding to this challenge (or opportunity) leads in two apparently different and opposite directions.

3.1 Instrumentalism Redux

We can try to respond to these things as we typically have, treating these increasingly social and interactive mechanisms as mere instruments or tools.
"Computer systems," as Johnson (2006) explains, "are produced, distributed, and used
by people engaged in social practices and meaningful pursuits. This is as true of current
computer systems as it will be of future computer systems. No matter how
independently, automatic, and interactive computer systems of the future behave, they
will be the products (direct or indirect) of human behavior, human social institutions,
and human decision" (p. 197). This argument is persuasive, precisely because it draws
on and is underwritten by the usual understanding of things. Things—no matter how
sophisticated, intelligent, and social they are, appear to be, or may become—are and
will continue to be tools of human action, nothing more. If something goes wrong (or
goes right) because of the actions or inactions of a bot or some other thing, there is
always someone who is ultimately responsible for what happens with it. Finding that
person (or persons) may require sorting through layer upon layer of technological
mediation, but there is always someone—specifically some human someone—who is
presumed to be responsible and accountable for it. According to this way of thinking,
all things, no matter how sophisticated or interactive they appear to be, are actually
“Wizard of Oz technology.”4 There is always “a man behind the curtain,” pulling the
strings and responsible for what happens. And this line of reasoning is entirely
consistent with current legal practices. “As a tool for use by human beings,” Matthew
Gladden (2016) argues, “questions of legal responsibility…revolve around well-
established questions of product liability for design defects (Calverley 2008, 533;
Datteri 2013) on the part of its producer, professional malpractice on the part of its
human operator, and, at a more generalized level, political responsibility for those
legislative and licensing bodies that allowed such devices to be created and used” (p.
184).

But this strict re-application of instrumentalist thinking, for all its usefulness and
apparent simplicity, neglects the social presence of these things and the effects they
have within the networks of contemporary culture. We are, no doubt, the ones who
design, develop, and deploy these technologies, but what happens with them once they
are “released into the wild” is not necessarily predictable or completely under our
control. In fact, in situations where something has gone wrong, like the Tay incident,
or gone right, as was the case with AlphaGo, identifying the responsible party or parties
behind these things is at least as difficult as ascertaining the “true identity” of the “real
person” behind the avatar. Consequently things like mindless chatterbots, as Mowbray (2002) points out, do not necessarily need human-level intelligence, consciousness,
sentience, etc. to complicate questions regarding responsibility and social standing.
Likewise, as Reeves and Nass (1996) already demonstrated over two decades ago with
things that were significantly less sophisticated than these recent technological
innovations, we like things. And we like things even when we know they are just things.
“Computers, in the way that they communicate, instruct, and take turns interacting, are
close enough to human that they encourage social responses. The encouragement
necessary for such a reaction need not be much. As long as there are some behaviors
that suggest a social presence, people will respond accordingly… Consequently, any
medium that is close enough will get human treatment, even though people know it’s
foolish and even though they likely will deny it afterwards” (p. 22). For this reason,
reminding users that they are just interacting with “mindless things” might be the “correct information,” but doing so is often as ineffectual as telling movie-goers that
the action they see on the screen is not real. We know this, but that does not necessarily
change things. So what we have is a situation where our theory concerning things—a
theory that has considerable history behind it and that has been determined to be as
applicable to simple devices like hand tools as it is to complex technological systems—
seems to be out of sync with the actual experiences we have with things in a variety of
situations and circumstances. In other words, the instrumentalist way of thinking may
be ontologically correct, but it is socially inept and out of touch.

3.2 Thinking Otherwise or the Relational Turn

As an alternative, we can think things otherwise. This other way of thinking effectively flips the script on the standard way of dealing with things whereby, as
Luciano Floridi (2013) describes it, what something is determines how it is treated
(p. 116). Thinking otherwise deliberately inverts and distorts this procedure by making
the “what” dependent on and derived from the “how.” The advantage to this way of
thinking is that it not only provides an entirely different method for responding to the
social opportunities and challenges of all kind of things—like chatterbots, learning
algorithms, and social robots—but also formulates an entirely different way of thinking
about things in the face of others, and other forms of otherness. Following the contours
of this alternative way of thinking, something’s status—its social, moral and even
ontological situation—is decided and conferred not on the basis of some pre-determined
criteria or capability (or lack thereof) but in the face of actual social relationships and
interactions. “Moral consideration,” as Mark Coeckelbergh (2010) describes it, “is no
longer seen as being ‘intrinsic’ to the entity: instead it is seen as something that is
‘extrinsic’: it is attributed to entities within social relations and within a social context”
(p. 214). In other words, as we encounter and interact with others—whether they be
other human persons, other kinds of living beings like animals or plants, the natural
environment, or a socially interactive bot—this other entity is first and foremost situated in relationship to us. Consequently, the question of something’s status does not
necessarily depend on what it is in its essence but on how she/he/it (and the pronoun
that comes to be deployed in this circumstance is not immaterial) supervenes before us
and how we decide to respond (or not) “in the face of the other,” to use terminology
borrowed from Emmanuel Levinas (1969). In this transaction, “relations are prior to
the things related” (Callicott, 1989, p. 110), instituting what Anne Gerdes (2015),
following Coeckelbergh (2012, p. 49) and myself (Gunkel, 2012a), has called “the
relational turn.”

This shift in perspective, it is important to point out, is not just a theoretical game; it has been confirmed in numerous experimental trials and practical experiences with
things. The Computers Are Social Actors (CASA) studies undertaken by Reeves and Nass
(1996), for example, demonstrated that human users will accord computers social
standing similar to that of another human person and this occurs as a product of the
extrinsic social interaction, irrespective of the actual composition (or “being” as
Heidegger would say) of the thing in question. These results, which were obtained in
numerous empirical studies with human subjects, have been independently verified in
two recent experiments with robots, one reported in the International Journal of Social
Robotics (Rosenthal-von der Pütten et al, 2013), where researchers found that human
subjects respond emotionally to robots and express empathic concern for machines
irrespective of knowledge concerning the actual ontological status of the mechanism,
and another that used physiological evidence, documented by electroencephalography,
of the ability of humans to empathize with what appears to be “robot pain” (Suzuki et
al, 2015). And it appears that this happens not just with seemingly intelligent artifacts
in the laboratory setting but with just about any old thing that has some social presence,
like the very industrial-looking Packbots that are being utilized on the battlefield. As P.
W. Singer (2009, p. 338) has reported, soldiers form surprisingly close personal bonds
with their units’ Packbot, giving them names, awarding them battlefield promotions,
risking their own lives to protect that of the machine, and even mourning their “death.”
This happens, Singer explains, as a product of the way the mechanism is situated within
the unit and the social role that it plays in field operations. And it happens in direct
opposition to what otherwise sounds like good common sense: They are just things—
instruments or tools that feel nothing.

Once again, this decision sounds reasonable and justified. It extends consideration
to these other socially aware and interactive things and recognizes, following the
predictions of Wiener (1950, p. 16), that the social situations of the future will involve
not just human-to-human interactions but relationships between humans and machines
and machines and machines. But this shift in perspective also has significant costs. For
all its opportunities, this approach is inevitably and unavoidably exposed to the charge
of relativism—“the claim that no universally valid beliefs or values exist” (Ess, 1996,
p. 204). To put it rather bluntly, if the social status of things is relational and open to social negotiation, are we not at risk of affirming a kind of social constructivism or
moral relativism? One should perhaps answer this indictment not by seeking some
definitive and universally accepted response (which would obviously reply to the
charge of relativism by taking refuge in and validating its opposite), but by following
Slavoj Žižek’s (2000) strategy of “fully endorsing what one is accused of” (p. 3). So
yes, relativism, but an extreme and carefully articulated version of it. That is, a
relativism (or, if you prefer, a “relationalism”) that can no longer be comprehended by
that kind of understanding of the term which makes it the mere negative and opposite
of an already privileged universalism. Relativism, therefore, does not necessarily need
to be construed negatively and decried, as Žižek (2006) himself has often done, as the
epitome of postmodern multiculturalism run amok (p. 281). It can be understood
otherwise. “Relativism,” as Robert Scott (1976) argues, “supposedly, means a
standardless society, or at least a maze of differing standards…Rather than a
standardless society, which is the same as saying no society at all, relativism indicates
circumstances in which standards have to be established cooperatively and renewed
repeatedly” (p. 264). In fully endorsing this form of relativism and following through
on it to the end, what one gets is not necessarily what might have been expected, namely
a situation where anything goes and “everything is permitted.” Instead, what is obtained
is a kind of socially attentive thinking that turns out to be much more responsive and
responsible in the face of other things.

These two options anchor opposing ends of a spectrum that can be called the
machine question (Gunkel 2012a). How we decide to respond to the opportunities and
challenges of this question will have a profound effect on the way we conceptualize our
place in the world, who we decide to include in the community of socially significant
subjects, and what things we exclude from such consideration and why. But no matter
how it is decided, it is a decision—quite literally a cut that institutes difference and
makes a difference. We are, therefore, responsible both for deciding who counts as another subject and what does not and, in the process, for determining the way we perceive
the current state and future possibility of social relations.

Notes
1. A consequence of this way of thinking about things is that all things are initially revealed and characterized as media or something through which human users act. For more on this subject, see Heidegger and the Media (Gunkel and Taylor, 2014).

2. Identification of these two alternatives has also been advanced in the phenomenology of technology developed by Don Ihde. In Technology and the Lifeworld, Ihde (1990) distinguishes between “those technologies that I can take into my experience that through their semi-transparency they allow the world to be made immediate” and “alterity relations in which the technology becomes quasi-other, or technology ‘as’ other to which I relate” (p. 107).

3. Although the term “chatterbot” was not utilized by Weizenbaum, it has been applied retroactively as a result of the efforts of Michael Mauldin, founder and chief scientist of Lycos, who introduced the neologism in 1994 in order to identify a similar NLP application that he eventually called Julia.

4. “Wizard of Oz” is a term that is utilized in Human Computer Interaction (HCI) studies to describe experimental procedures where test subjects interact with a computer system or robot that is assumed to be autonomous but is actually controlled by an experimenter who remains hidden from view. The term was initially introduced by John F. Kelly in the early 1980s.

References

Benso, S. (2000). The Face of Things: A Different Side of Ethics. Albany, NY: State University of New York Press.

Bessi, A. and E. Ferrara (2016). Social Bots Distort the 2016 U.S. Presidential Election Online Discussion. First Monday 21(11). http://firstmonday.org/ojs/index.php/fm/article/view/7090/5653

Breazeal, C. L. (2002). Designing Sociable Robots. Cambridge, MA: MIT Press.

Callicott, J. B. (1989). In Defense of the Land Ethic: Essays in Environmental Philosophy. Albany, NY: State University of New York Press.

Calverley, D. J. (2008). Imagining a Non-Biological Machine as a Legal Person. AI & Society 22(4): 523-537.

Coeckelbergh, M. (2010). Robot Rights? Towards a Social-Relational Justification of Moral Consideration. Ethics and Information Technology 12: 209-221.

Coeckelbergh, M. (2012). Growing Moral Relations: A Critique of Moral Status Ascription. New York: Palgrave Macmillan.

Darling, K. (2012). Extending Legal Protection to Social Robots. IEEE Spectrum, 10 September 2012. http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/extending-legal-protection-to-social-robots

Datteri, E. (2013). Predicting the Long-Term Effects of Human-Robot Interaction: A Reflection on Responsibility in Medical Robotics. Science and Engineering Ethics 19(1): 139-160.

Derrida, J. (2005). Paper Machine. Trans. by R. Bowlby. Stanford, CA: Stanford University Press.

Epstein, R. (2007). From Russia, With Love: How I Got Fooled (And Somewhat Humiliated) by a Computer. Scientific American Mind Oct/Nov: 16-17.

Ess, C. (1996). The Political Computer: Democracy, CMC, and Habermas. In C. Ess (ed.), Philosophical Perspectives on Computer-Mediated Communication (pp. 197-232). Albany, NY: SUNY Press.

Feenberg, A. (1991). Critical Theory of Technology. New York: Oxford University Press.

Floridi, L. (2013). The Ethics of Information. Oxford: Oxford University Press.

Gartner (2013). Press Release. http://www.gartner.com/newsroom/id/2636073

Gerdes, A. (2015). The Issue of Moral Consideration in Robot Ethics. ACM SIGCAS Computers & Society 45(3): 274-280.

Gladden, M. E. (2016). The Diffuse Intelligent Other: An Ontology of Nonlocalizable Robots as Moral and Legal Actors. In M. Nørskov (ed.), Social Robots: Boundaries, Potential, Challenges (pp. 177-198). Burlington, VT: Ashgate.

Go Ratings (2016). https://www.goratings.org/ (accessed 20 November 2016).

Google DeepMind (2016). AlphaGo. https://deepmind.com/alpha-go.html

Gunkel, D. J. (2012a). The Machine Question: Critical Perspectives on AI, Robots and Ethics. Cambridge, MA: MIT Press.

Gunkel, D. J. (2012b). Communication and Artificial Intelligence: Opportunities and Challenges for the 21st Century. Communication +1 1(1): 1-25. http://scholarworks.umass.edu/cpo/vol1/iss1/1/

Gunkel, D. J. and P. A. Taylor (2014). Heidegger and the Media. Cambridge: Polity.

Hall, J. S. (2001). Ethics for Machines. KurzweilAI.net. http://www.kurzweilai.net/ethics-for-machines

Harman, G. (2002). Tool-Being: Heidegger and the Metaphysics of Objects. Peru, IL: Open Court Publishing.

Heidegger, M. (1962). Being and Time. Trans. by J. Macquarrie and E. Robinson. New York: Harper and Row.

Heidegger, M. (1977). The Question Concerning Technology and Other Essays. Trans. by W. Lovitt. New York: Harper & Row.

Hingston, P. (2014). Believable Bots: Can Computers Play Like People? New York: Springer.

Ihde, D. (1990). Technology and the Lifeworld: From Garden to Earth. Bloomington, IN: Indiana University Press.

Jibo (2014). https://www.jibo.com

Johnson, D. G. (1985). Computer Ethics. Upper Saddle River, NJ: Prentice Hall.

Johnson, D. G. (2006). Computer Systems: Moral Entities but not Moral Agents. Ethics and Information Technology 8: 195-204.

Levinas, E. (1969). Totality and Infinity: An Essay on Exteriority. Trans. by A. Lingis. Pittsburgh, PA: Duquesne University Press.

Matthias, A. (2004). The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata. Ethics and Information Technology 6: 175-183.

McLuhan, M. and Q. Fiore (2001). War and Peace in the Global Village. Berkeley, CA: Gingko Press.

Metz, C. (2016). Google’s AI Wins a Pivotal Second Game in Match with Go Grandmaster. Wired. http://www.wired.com/2016/03/googles-ai-wins-pivotal-game-two-match-go-grandmaster/

Microsoft (2016a). Meet Tay—Microsoft A.I. Chatbot with Zero Chill. https://www.tay.ai/

Microsoft (2016b). Learning from Tay’s Introduction. Official Microsoft Blog. https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/

Mowbray, M. (2002). Ethics for Bots. Paper presented at the 14th International Conference on System Research, Informatics and Cybernetics, Baden-Baden, Germany, 29 July-3 August. http://www.hpl.hp.com/techreports/2002/HPL-2002-48R1.pdf

Peterson, A. (2013). On the Internet, No One Knows You’re a Bot. And That’s a Problem. The Washington Post, 13 August 2013. https://www.washingtonpost.com/news/the-switch/wp/2013/08/13/on-the-internet-no-one-knows-youre-a-bot-and-thats-a-problem/

Reeves, B. and C. Nass (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge: Cambridge University Press.

Risely, J. (2016). Microsoft’s Millennial Chatbot Tay.ai Pulled Offline After Internet Teaches Her Racism. GeekWire. http://www.geekwire.com/2016/even-robot-teens-impressionable-microsofts-tay-ai-pulled-internet-teaches-racism/

Rosenthal-von der Pütten, A. M., N. C. Krämer, L. Hoffmann, S. Sobieraj and S. C. Eimler (2013). An Experimental Study on Emotional Reactions Towards a Robot. International Journal of Social Robotics 5: 17-34.

Scott, R. L. (1976). On Viewing Rhetoric as Epistemic: Ten Years Later. Central States Speech Journal 27(4): 258-266.

Singer, P. W. (2009). Wired for War: The Robotics Revolution and Conflict in the Twenty-First Century. New York: Penguin Books.

Suzuki, Y., L. Galli, A. Ikeda, S. Itakura and M. Kitazaki (2015). Measuring Empathy for Human and Robot Hand Pain Using Electroencephalography. Scientific Reports 5: 15924. http://www.nature.com/articles/srep15924

Tanz, J. (2016). The End of Code. Wired 24(6): 75-79. http://www.wired.com/2016/05/the-end-of-code/

Turing, A. (1999). Computing Machinery and Intelligence. In P. A. Mayer (ed.), Computer Media and Communication: A Reader (pp. 37-58). Oxford: Oxford University Press.

Turing, A. (2004). Can Automatic Calculating Machines Be Said to Think? In B. J. Copeland (ed.), The Essential Turing (pp. 487-505). Oxford: Oxford University Press.

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. San Francisco, CA: W. H. Freeman and Company.

Wiener, N. (1950). The Human Use of Human Beings: Cybernetics and Society. Boston: Da Capo Press.

Zeifman, I. (2017). Bot Traffic Report 2016. Incapsula. https://www.incapsula.com/blog/bot-traffic-report-2016.html

Zimmerman, M. E. (1990). Heidegger's Confrontation with Modernity: Technology, Politics, Art. Bloomington, IN: Indiana University Press.

Žižek, S. (2000). The Fragile Absolute or, Why Is the Christian Legacy Worth Fighting For? New York: Verso.

Žižek, S. (2006). The Parallax View. Cambridge, MA: MIT Press.

情绪与认知功能的认知与神经基础

罗跃嘉

情绪是复杂的心理生理学现象,反映了心智状态与个体内在与外部环境影
响的相互作用。认知功能是人类认识和获取知识的智能加工过程,涉及学习、
记忆、语言、思维、精神、情感等一系列随意、心理和社会行为。本报告将回
顾课题组近年来在情绪与注意、工作记忆、冲突、抑制、决策等方面的系列研
究工作。情绪与认知是重点讲述的内容,二者是相互依存、相互作用的,例
如,表情对于人类的社会行为具有重要意义,研究结果提出了不同表情的三阶
段加工假说;情绪具有注意的负偏向,发生在早期知觉阶段、晚期刺激评价阶
段以及动作准备阶段;情绪对执行功能的影响体现在冲突和抑制等方面;消极
情绪选择性影响空间工作记忆的皮层区;FRN 与 P3 的改变表明了焦虑情绪对
于决策的影响过程。上述研究揭示了情绪与执行功能的相互作用及其潜在的神
经基础,以力图对现有理论进行补充、修改,或提出新观点,将有助于加深对
于情绪与认知脑机制的进一步认识。

Luo Yuejia

Neural Mechanism for Emotion and Cognitive Function

Abstract: The neural correlates of the reciprocal interaction between emotion and
cognitive function engage psychology, neurobiology, neurology, biomedical
engineering, and sociology, as well as other cutting-edge interdisciplinary research on
major scientific issues of national mental health. The presentation summarizes our
group's neuroimaging studies of recent years and explores the neural correlates of the
reciprocal interaction between emotion and cognitive function. The neuroimaging
results show that: (1) the emotional negativity bias can occur in several temporal stages,
distinguished as attention, evaluation, and reaction-readiness periods, and the fronto-
central scalp distribution in the neonatal brain can discriminate fearful voices from
angry voices. (2) Negative emotion has differential effects on spatial and verbal
working memory (WM), and these effects mainly take place during information
maintenance. (3) FRN responses differed significantly between high-anxious and low-
anxious participants in the ambiguous-outcome condition as well as in the negative-
outcome condition; moreover, the high-trait-anxiety (HTA) group's FRN responses
were larger under the ambiguous-outcome condition than under the negative-outcome
condition. Our work seeks to clarify the relationships between emotion and attention,
memory, and decision-making, as well as the neural mechanisms of emotion generation,
in order to provide a theoretical basis and technological means for the clinical diagnosis
and treatment of emotional disorders.
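
For readers outside EEG research: the FRN (feedback-related negativity) compared across groups above is typically quantified as the mean amplitude of the feedback-locked average waveform in a fronto-central time window. The following is a minimal sketch of that computation in Python, not the authors' actual pipeline; the sampling rate, the 250-350 ms window, and the synthetic data are illustrative assumptions.

    import numpy as np

    # Illustrative only: synthetic feedback-locked epochs (trials x samples)
    # at one fronto-central electrode, 500 Hz, -200 ms to +800 ms.
    fs = 500
    times = np.arange(-0.2, 0.8, 1 / fs)
    rng = np.random.default_rng(0)
    epochs_ambiguous = rng.normal(0.0, 5.0, (120, times.size))  # microvolts
    epochs_negative = rng.normal(0.0, 5.0, (120, times.size))

    def frn_mean_amplitude(epochs, times, window=(0.25, 0.35)):
        """Average epochs into an ERP, then take the mean amplitude in the
        FRN window (250-350 ms here; the exact window is an assumption)."""
        erp = epochs.mean(axis=0)                        # average over trials
        mask = (times >= window[0]) & (times < window[1])
        return float(erp[mask].mean())

    # A more negative value indicates a larger FRN; conditions or groups
    # (e.g., high- vs. low-anxious participants) are compared on this value.
    print(frn_mean_amplitude(epochs_ambiguous, times),
          frn_mean_amplitude(epochs_negative, times))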

The Concept of Immersion: Mediated Space and the Location of the Subject

Mary Ann Doane

The term "immersion" is used ever more widely to describe new technologies of image and sound, and the relation of these technologies to subjectivity, spatiality, and location. The use of the term moves the subject into the film and also expresses a bleeding of the film from the screen into the theater, so that the relation between space and location becomes nebulous. This paper examines the discourse of immersion through two cinematic technologies, IMAX and digital surround sound. IMAX has consistently been aligned with the aesthetic category of the sublime and, in Kantian terms, with the conceptualization of infinity. This alignment suggests that the concept of infinity is no longer confined to depth and recession within a humanist field of vision, but has become bound up with scale, extension, and imperceptible networks. The logic of the IMAX sublime, perhaps the technological sublime par excellence, operates within the discourse of immersion. It produces the illusion that the subject simultaneously enjoys depth and an immediate perception of its own body, and it conceals a violent displacement and dislocation of the subject, who seems to face a world extended to infinity. The digital surround sound examined here is related to the cinematic trope of the "turn": the turn in space must be delegated to a character or to the camera in order to avoid making the spectator turn, which would sacrifice the view of the world/space of the film. Once sound enters the theater, a tension arises between the localization of sound and the direction of viewing. Yet these technologies summon metaphors of absorption and envelopment, enlarging the reach of the film (and its narrative) beyond the screen and into the space of the audience. By locating the spectator anywhere in an other space, immersion reduces the compelling nature of location, which derives from the limits of the subject's body or, as Henri Lefebvre argues, from space as demarcated and analyzed, not to mention produced, in a particular historical and social context.

Mary Ann Doane

The Concept of Immersion:

Mediated Space and the Location of the Subject

One of the most remarkable achievements of Classical Hollywood narrative was


undoubtedly its construction of a plausible space that the spectator could understand as
inhabitable, navigable, homogeneous and continuous. This was a full space, within
which events could take place—any events, a plethora of stories. But despite this
cinema’s extensive deployment of offscreen space and the suggestion of infinite depths,
the inescapable limit was the fact of the screen, a clearly localizable screen housed in a
theater that incarnated the destiny of the moviegoer. Roland Barthes attempted to come
to grips with the pleasures of cinema by proposing an oscillation between two spaces—
that provided by the film and that associated with all the marginalia of the theatrical
setting: “letting oneself be fascinated twice over, by the image and by its surrounding—
as if I had two bodies at the same time: a narcissistic body which gazes, lost, into the
engulfing mirror, and a perverse body, ready to fetishize not the image but precisely
what exceeds it: the texture of the sound, the hall, the darkness, the obscure mass of the
other bodies, the rays of light, entering the theater, leaving the hall…”. But what he
does not acknowledge explicitly is that one cannot inhabit both spaces at the same time.
To the extent that one is “engulfed” in the film’s diegesis one cannot be aware of the
space of the theater, no matter how ornamental or alluring. And to the extent that one
is attentive to and delights in the supplemental detail of the theatrical setting, one blinds
oneself to/loses sight of the diegesis. Barthes would like to make this a dialectical
movement but, instead, it is an either/or (The Rustle of Language, p. 239).

Today, one feels a whiff of nostalgia reading Barthes’s description of his relation
and nonrelation to the screen in the movie theater, at the idea of a designated place for
the viewing of moving images that might constitute in itself a distraction. In contrast,
screen culture now has become strikingly heterogeneous and pervasive. Screen sizes
now range from the miniature touch-screen of the iPhone and iPad to the immense scale
of IMAX. Images are mobile and transportable, savable and recyclable, called up at

will and often ephemeral. They can be viewed virtually anywhere. And to that extent,
where they are viewed becomes less and less significant, even in the case of IMAX,
which although it requires a specialized theater, projection and screen, heavily weights
Barthes’s first form of fascination with the image—that of engulfment. In IMAX, the
bloated image exceeds the screen, swells, distends and infiltrates the space of the
spectator. If Hollywood’s promise was that of taking the viewer to another place by
denying his or her own location in the theater, IMAX holds out the allure of annihilating
that location, hence the pervasive and persistent discourse of “immersion.” The concept
of immersion suggests a transport of the subject into the image but also a bleeding of
the image beyond the screen into the auditorium so that the very question of place or
location becomes nebulous. References to immersion are ubiquitous in advertising for
IMAX, 3-D, digital sound surround and virtual reality. The concept is symptomatic of
larger questions concerning subjectivity, spatiality and mediation. I will focus here on
two technologies that have been consistently and emphatically allied with the discourse
of immersion: IMAX and digital sound surround systems.

IMAX and the Sublime

With IMAX, size is the central and defining characteristic, so much so that the
films themselves must entail subjects of a certain grandeur and ungraspability, self-
reflexively conjuring up narratives of magnitude. IMAX seems to have fulfilled the
early cinematic aspirations associated with the phrase, “Bigger Than Life.” The
emergent history of IMAX was hence dominated by nature and exploration films,
seemingly transcending the comparatively minute human scale of characters and plots.
The earliest IMAX films were significantly shorter than traditional feature length films,
ranging from 17 minutes to half an hour, at least partially determining the avoidance of
fiction and the classical narrative, whose norms at that point in cinematic history
required a certain duration. IMAX emerged from and found a home in world fairs and
expositions as a performance of the capabilities of image technologies—the films were
less about subjects than the very fact of the technology. Migrating to specialized venues
associated with museums and science centers, the films were presented as an
educational experience, often touristic (and imperialistic).2
2 See Charles Acland, "IMAX Technology and the Tourist Gaze," Cultural Studies 12, no. 3 (1998).

The advertising rhetoric for IMAX reiterates and refashions that for widescreen in
the 1950s and focuses on the concept of “immersion.” “You,” i.e. the spectator, are not
observing the space revealed on the screen—you are inside of it. For John Belton, the
“illusion of limitless horizontal vision” in Cinerama and Cinemascope intensified the

spectator’s sense of immersion or absorption in the space of the film (much of the
advertising for these processes emphasized the spatial relocation of the spectator from
his/her seat to the world provided by the cinema) (Belton, 1992, p. 197). [Figure 1] IMAX ads as well insist
that the spaces of film and spectator are confused and entangled. Objects or persons in
the film reach out of the screen into the space of the audience or the spectator is sucked
into the world of the film, erasing all borders between representational space and the
space of the viewer. [Figures 2, 3] In this scenario, there is no “offscreen space.” All of
the world has become media and as a consequence, there is no mediation.

The paradox of IMAX is that its development and expansion in theaters coincided
with the accelerating minimization of screen size—on computers, laptops, notepads and
culminating in handheld mobile devices such as the iPhone. Films are now viewable on
the smallest of screens as well as the largest. Although David Lynch, in defense of the
large screen, has categorically insisted (using various expletives, on YouTube, paid for
by Apple) that if you view a film on an iPhone, you simply haven't seen the film, the
mobility of images is a pervasive
cultural phenomenon that must be confronted. Perhaps it is not so much a question of
whether it is the “same image,” but how technologies with such extreme differences of
scale can inhabit the same media network. What is the work of “scale” in contemporary
media and how does it configure or reconfigure space, location and subjectivity?

At first glance, the iPhone, unlike IMAX, would not seem to provide an immersive
experience. Immersion connotes a transport of the subject into the image and the iPhone
appears to give its user an unprecedented control over the screen. But if immersion,
with its alliance with water, fluids, liquidity, indexes an absorption in a substance that
is overwhelming and all-encompassing, there is a sense in which the user of the iPhone
could be described as immersed. In fact, this has been the social anxiety concerning
iPhones—young people, absorbed in their iPhones, are lost to the world. They no longer
have face-to-face conversations; they are no longer where they are. They have fled the
real. This fear of the danger of iPhones is reminiscent of historical diatribes against the
movies for their irresistible influence on young and malleable minds, particularly in
relation to images of sex and violence. In the case of the iPhone, what is feared is a
form of temporal and spatial immersion, absenting oneself from a specific time and
location. The geography of the iPhone is that of “elsewhere,” the elsewhere of an
unmappable, uncognizable network.

Yet, immersion is a very vague, imprecise analytical concept and we should be


suspicious of its easy transfer between advertising and journalistic discourses and
critical theoretical discourses on the media. While a number of scholars have noted that
the concept of immersion is deployed in advertising for widescreen and IMAX, there
also seems to be an infiltration of its symbolic penumbra, a contagion of its dream

within their own critical language. Haidee Wasson describes the experience of IMAX
in these terms: “With IMAX you find yourself moving into and out of great heights and
depths, traveling downward to the bottom of the sea or upward to the stars” or “IMAX
engulfs its spectators, stretching the limits of human vision through its expansive screen
and immersive aesthetic.”5 For Charles Acland, “IMAX films soar. Especially through
the simulation of motion, they encourage a momentary joy in being placed in a space
shuttle, on a scuba dive, or on the wing of a fighter jet.” Immersion is used not only to
describe the experience of IMAX, but of new technologies such as virtual imaging. It
is the lure, the desire, the alleged fascination of the industry itself. But what does it
mean to be immersed? And why is it the focus of a contemporary desire? Obviously
figural, the tropology denies the physical location of the spectator. I propose to read the
concept of immersion as symptomatic, as a claim that points to a work of spatial
restructuring in a screen-saturated social economy.
5 Janine Marchessault and Susan Lord, Fluid Screens, Expanded Cinema (Digital Futures), University of Toronto Press, 2011, Kindle locations 1661-1662.

IMAX is about excess—one of its movie theater intros deploys the traditional
movie countdown from 10 to 1 (which gradually enlarges the numbers until they
become gigantic) and inserts the words “See more, hear more, feel more,” ending with
the IMAX slogan, “Think Big.” The largest IMAX screen is in Sydney, Australia and
is approximately eight stories high. IMAX screens can be ten times the size of a
traditional cinema screen. The clarity and resolution of the image is made possible by
a frame size that dwarfs that of conventional 70mm film (three times larger). With the
perforations placed horizontally rather than vertically, the film must run through the
projector at extremely elevated speeds. The very high resolution of the image allows
spectators to be positioned closer to the screen. In a typical IMAX theater, the seats are
set at a significantly steeper angle and all rows are within one screen height whereas,
in a conventional movie theater, all rows can be within eight to twelve screen heights.
As Allan Stegeman points out in an article claiming that IMAX and other large screen
formats can compete effectively with high definition television, “An Imax image
occupies 60° to 120° of the audience’s lateral field of vision and 40° to 80° of the
vertical field of view, and an Omnimax image occupies approximately 180° in the
audience’s horizontal field of vision, and 125° vertically—the large-screen format
effectively destroys the viewer’s awareness of the film’s actual frame line.”
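
Stegeman's figures can be checked with elementary geometry: a flat screen of width w viewed from distance d subtends a horizontal angle of 2·arctan(w/2d). The short Python sketch below uses invented screen widths and viewing distances (not measurements from this essay) simply to show how drastically the subtended angle grows when the spectator sits within roughly one screen height.

    import math

    def visual_angle_deg(width_m: float, distance_m: float) -> float:
        """Horizontal visual angle (in degrees) subtended by a flat screen
        of the given width seen from the given distance."""
        return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

    # Invented, illustrative figures: a conventional screen seen from a few
    # screen-widths away versus an IMAX-scale screen seen from close up.
    print(f"conventional: {visual_angle_deg(10, 30):.0f} degrees")  # about 19
    print(f"IMAX-scale:   {visual_angle_deg(30, 20):.0f} degrees")  # about 74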

It is this annihilation of the frame line that I would like to focus on here. While
Cinemascope claimed to compete with the spectator’s peripheral vision, IMAX and
other large formats exceed the eye in all dimensions so that the image appears to be
uncontained. The frame in cinema is not only a technical necessity adjudicating the
relation to temporality (24 frames per second) and the production of an illusion of

motion, but also a link between cinema and the history of Western painting, particularly
in its inscription of perspective as a rule of space. The frame demarcates the space of
the representation as a special place, one which obeys different dictates for legibility.
Or, as Jacques Derrida has pointed out, the frame, as parergon, is neither part of the
work nor outside the work but gives rise to the work (23). The frame is the condition
of possibility of representation. In the history of cinema, the frame lends to the film’s
composition a containment and a limit that rivaled the limit of the two-dimensional
surface of the screen. Both could be contested but the frame and the screen were
themselves activated to produce the concepts of off-screen space and depth of field as
domains of the imaginary.

If the frame constitutes a limit—a fully visible limit—in the experience of the
spectator in conventional cinema, what does it mean to remove that limit by using
technology to exceed the physiological limits of the spectator’s vision? IMAX clearly
has limits, but they are not of a visible order in the spectator’s experience. It strives
against limits, as seen in this ad from the IMAX corporation:[Figure 4] “People say our
screen curves down at the edges. It doesn’t. That’s the earth.” The limit of the IMAX
screen merges with that of the earth, which is to say that it has no artificial or cultural
limit. What is the lure of this idea of boundlessness?

In the history of aesthetic theory, this concept has been most frequently associated
with that of the sublime in its philosophical formulation. In Edmund Burke’s analysis,
in which “sublime objects are vast in their dimensions,” (113) the eye is given a
privileged position, standing in metonymically for the entire body (“as in this discourse
we chiefly attach ourselves to the sublime, as it affects the eye”). 6 For Burke, the
sublime is associated with passion, awe, and terror and with a pain that proves to be
pleasurable. And this abstraction of pain from pleasure is in many instances a bodily
phenomenon—both terror and pain “produce a tension, contraction or violent emotion
of the nerves.” (120) This is the sublime, as long as any possibility of actual danger is
removed.
6 This is true even though, for Burke, language retains its superiority over figurative painting, which is restricted by its mimeticism. Edmund Burke, A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful (Oxford World's Classics), p. 128.

Hence the sublime, in one of its earliest formulations, is conceptualized as an


assault on the eye. Paul Virilio has referred to IMAX as “cataract surgery,” designed to
rescue the cinema from the proliferation of small screens by, in effect, welding the eye
to the technology. From another point of view, the visual field of the IMAX film,
overwhelming that of the spectator, is an assault on the eye, exceeding its capacities in
a sheer demonstration of imagistic power. But why should pain and even terror produce
the particular pleasure associated with the sublime? For Kant, it is a pleasure that can

only be produced through a detour and it is the detour that causes pain preparatory to
the pleasure of discovering the power and extension of reason.

Pain is produced as a result of a striking consciousness of human inadequacy,


finitude. Infinity, in the mathematical sense, is not sublime because it is dependent upon
a notion of endless progression, each moment of which annihilates the preceding ones
so that the mathematical infinite is abstracted from any true intuition of totality.
Nature—the ocean, a vast mountainous landscape, a tremendous thunderstorm—may
be the occasion for the sense of the sublime, according to Kant, but none of these can
be designated as a “sublime object” because the sublime is an attribute of subjectivity.
And it is, ultimately, a correlative of the realization of the simultaneous possibility and
impossibility of a finite representation of the infinite. Apprehension falls short and
while the subject cannot comprehend the notion of the infinite—imagination is
inevitably inadequate—it grasps its own sensuous and imaginative inadequacy as a
failure that it nullified by reason—the ability, that is, to form a concept of infinity as
totality. It is the faculty of the subject that is unlimited, so that infinity resides not in
the world—which would be threatening and incomprehensible—but instead as a power
within the subject. This is entirely consistent, as I will try to demonstrate later, with the
representation of the subject’s relation to infinity within the system of Quattrocento
perspective. Hence, the sublime is produced under the pressure to hold the infinite in
thought, to conceptualize it as a totality. The fact that this is possible is for Kant a
validation of the superiority of reason, of its movement beyond the sensuous—it is
“supersensual.” This, in turn, is a validation of the human, of the ability of human
reason to exceed the boundaries or limitations of its spatio-temporal localization.
Infinity, in a sense, resides within the subject. But the “unlimited faculty” (i.e. reason)
is based upon lack/inadequacy.

Hence, the concept of the sublime grapples with the notion of infinity and its
representability, although this is not the term Kant would have used. Yet, there is
another way of thinking and representing infinity that is not usually articulated with the
sublime. Renaissance perspective, inherited by the cinema, constitutes infinity as a
point—a perpetually receding point—the vanishing point—which mirrors the position
of the subject contemplating the painting. Like Kant’s reason in at least one respect, it
acts as an imprimatur of a mastery that takes form by going beyond, even annihilating,
the subject’s sensory and spatio-temporal localization, all the
singularities/particularities of embodiment in a finite body limited by the reach of its
senses. At least this reading of perspective is that of apparatus theory in film studies,
the legacy of Jean-Louis Baudry, Jean-Louis Comolli, and others in the 1970s. And it
is that of Erwin Panofsky as well. Panofsky analyzed Renaissance perspective as the
symptom and instantiation of a new concept—that of infinity, embodied in the vanishing

point.7 Yet, this was a representational infinity that confirmed and reassured the human
subject, replacing a theocracy with an individualizing humanism. In a way, it could be
seen as a secularization of the sublime.
7 See Erwin Panofsky, Perspective as Symbolic Form, trans. Christopher S. Wood (New York: Zone Books, 1997).

Perspective produces an illusion of depth in the image—potentially endless depth


guaranteed by the vanishing point marking the “place” of infinity. It allows for the
simulation of the 3-dimensional on a 2-dimensional surface. However, both modernity
and postmodernity have been characterized as a regime of the surface, a decimation of
depth. As Fredric Jameson has famously written: “A new kind of flatness or
depthlessness, a new kind of superficiality in the most literal sense [is] perhaps the
supreme formal feature of all the postmodernisms…” (qtd. in Joselit, p. 293). How, if at all,
is the infinite thought or represented in such a context? Where is the sublime? In a
provocative essay entitled
“Notes on Surface: Toward a Genealogy of Flatness,” David Joselit has argued that, in
the case of painting, illusionistic recession has been transposed into lateral extension.
He cites Clement Greenberg who claims that the abstract expressionists utilized huge
canvases in order to compensate for the spatial loss of illusionistic depth. And, indeed,
this lateral extension can be seen in the movement toward larger and larger screens
culminating in IMAX, but also in the embedding of smaller screens such as the iPhone
in complex and extensive networks whose scope and scale are challenges to individual
comprehension. The intricacy of these networks contributes to what Jameson has
labeled the problem of cognitive mapping. This suggests that infinity is no longer
conceptualizable in relation to depth and recession, as in a humanist perspectival system,
but instead in relation to questions of scale, extension, and uncognizable networks. For
a network, in theory, has no closure. This does not herald a break with the
disembodiment or delocalization of perspectival illusionism, but a shifting or
displacement of the subject’s relation to space, scale, and location that shares with
Kant’s sublime a lack in relation to knowledge and imagination.
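
Because the argument turns on how the vanishing point stages infinity, a toy model may help readers unfamiliar with the construction. The sketch below (my own illustration, with invented coordinates; it is not drawn from Panofsky or from this essay) implements the elementary pinhole projection by similar triangles: points receding along a line in depth project to images that crowd toward a single vanishing point on the picture plane.

    def project(x: float, y: float, z: float, f: float = 1.0) -> tuple:
        """Pinhole/perspective projection: a scene point (x, y, z), with
        z > 0 the depth, maps onto the picture plane at distance f by
        similar triangles."""
        return (f * x / z, f * y / z)

    # Points along a receding line: as depth z grows, the projected images
    # converge toward the vanishing point (1, 0), the image of the line's
    # direction "at infinity."
    for z in (1, 2, 4, 8, 1000):
        print(z, project(x=z + 1.0, y=0.5, z=z))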

The Oxford English Dictionary defines the sublime as “Set or raised aloft, high up”
and traces its etymology to the Latin sublimis, a combination of sub (up to) and limen
(lintel, literally the top piece of a door). The sublime is consistently defined by
philosophers in relation to concepts of largeness, height, greatness, magnitude. For
Burke, visual objects of “great dimension” are sublime. Kant claims, “Sublime is the
name given to what is absolutely great” and “the infinite is absolutely (not merely
comparatively) great.” The sublime is associated with formlessness, boundlessness, and
excess beyond limit. It is not surprising in this context that IMAX has been analyzed
by invoking the concept of the sublime (Haidee Wasson and Alison Griffiths refer to
Burke’s sublime in particular), especially insofar as the terror associated with the
sublime, for both Burke and Kant, must be experienced from a position of safety. The

sublime is an aesthetic category and it is inevitably chained to affect—whether awe,
terror, pleasure, or fear—and most frequently a combination of these. The advertising
and the analysis of IMAX are obsessed with its involvement of the subject in a gripping
experience—hence, the discourse of immersion. IMAX is described as above all a
visceral experience, requiring a form of bodily participation. Unlike the disembodiment
of the classical perspectival system, the body seems to be what is above all at stake in
discourses on IMAX. The IMAX sublime, if there is such a thing, here deviates from
Kant’s, for whom the sublime was sublime only on condition that it exceed the sensuous,
proclaim the irrelevance of the subject’s spatio-temporal presence in favor of the
infinite grasp of reason. The discourse of immersion would seem to rescue the body
from its nullification by both Renaissance perspective and the Kantian sublime, making
us once again present to ourselves.

But I would like to argue that immersion as a category is symptomatic and one has
to ask what this body is. The body here is a bundle of senses—primarily vision, hearing
and touch. But this appeal to the body as sensory experience, as the satiation of all the
claims for its pleasure, does not revive an access to spatio-temporal presence or
localization. Instead, it radically delocalizes the subject once again, grasping for more
to see, more to hear, more to feel in an ever expanding elsewhere. IMAX emerged from
the world fairs and expos that constituted exhibitionistic displays of the ever expanding
powers of technology (what David Nye has called the “technological sublime”). It is
telling that one of the works of this early tendency toward magnification of the scale of
the image and proliferation of screens was the Eameses’ iconic Powers of Ten. This
pedagogical film illustrates a movement from a couple having a picnic in Chicago to
the edge of the universe and back to the interior of the body by exponentially increasing
the “camera’s” distance from the couple, reversing the trajectory, and decreasing that
distance to the point of inhabiting the body itself. (Figures 5 and 6—clips) The human
body would seem to be central to this demonstration, primarily as a marker of scale and
as the threshold of a trajectory from the gigantic to the infinitesimally small. Yet, the
film is instead an allegory of the nullification of the body and its location, acting only
as a nostalgic citation of a time when the human body was the ground and reference for
measurement, replacing it with a mechanical mathematical formula for the progressions
of scale. The limits of the “camera’s” trip in both directions are, of course, the limits of
human knowledge—at the moment. But the film suggests that this movement is
infinitely extendable and it is not accidental that technologies of knowledge and
technologies of the image are inseparable here. Human vision, with the aid of imaging
technologies, is infinitely extendable and knowledge is embedded in that vision. But I
have spoken only of the represented body, not of the spectatorial body. The spectatorial
eye is fully aligned with the technological eye—not with the vision of the represented
“characters”—the man and the woman, and its travels are limited only by the current
state of technologies of imaging/knowledge. Yet, there is not only a sense that it is
disembodied or delocalized but that it is potentially everywhere. The logic of the IMAX
sublime—perhaps the technological sublime par excellence—operates under the
umbrella of the discourse of immersion, producing an illusion that depth and ready
access to the body are still with us, and concealing its radical delocalization and
dislocation of a subject seemingly empowered in the face of a world defined as infinite
extension.

The Trope of the Turn and the Production of Sound Space

While both classical and contemporary film theory have productively dissected
the relation between the visible and the invisible in cinema through a concentration on
off-screen space as the preeminent “blind space,” much less attention has been paid to
that other dimension of invisibility—that which is behind, the “other side” of bodies
and of things. Because the film image is two-dimensional, the activation of perspective
and overlapping figures are clearly involved in the production of the effect of three-
dimensionality but this is true of a painting or a photograph as well. The cinema has an
added advantage—movement, which aids in carving out the space of the diegesis. The
“turn”—both of characters’ bodies and the body of the camera—is a crucial trope in
this respect. The “turn” in classical Hollywood films is often activated in the service of
scenes of misrecognition, where it reveals a mistaken identity. For the turn makes
visible that which was concealed--the “other side”--an other side that does not
materially exist in the two-dimensional realm of cinema but is continually evoked,
imagined, assumed. The turn is a constant reiteration of otherness and the limits of
knowability, a denial of the sufficiency of the screen as surface. Knowledge resides
somewhere else--behind, on the other side. But the turn also confirms that there is
another side, in what could be labeled a “virtual dimension.” Nevertheless, given the
physical immobility of the spectator, the necessity of facing forward to see the screen,
that turn must be delegated to someone or something else—character, figure, camera.
Navigable space is on the side of the screen. What are the effects of this delegation to
figure or camera of a bodily gesture that is critical to the subject’s relation to space, of
a body’s fundamental capability, as Henri Lefebvre has pointed out, “of indicating
direction by a gesture, of defining rotation by turning round, of demarcating and
orienting space”? (170) In Lefebvre’s analysis, space cannot be conceived of as an
empty container, ready and able to accept any content. Space is, above all, occupied:
“there is an immediate relationship between the body and its space, between the body’s
deployment in space and its occupation of space… each living body is space and has
its space: it produces itself in space and also produces that space” (Lefebvre, 1991, pp. 170-171). Yet, in the context
of the cinema, the spectator’s body is incapacitated, rendered useless, deprived of its
role of demarcating space through gesture and movement. As has so often been pointed

out, the spectator must become immobilized, bodiless, his or her senses reduced to those
characterized by distance—vision and hearing. Space is not lived—at least in the sense
of the ordinary or everyday experience of space in its relation to the body—but
abstracted, alienated. The turn that helps to demarcate and define space is, in the cinema,
a represented turn, and the space is a represented space. But there is another turn at
issue here, one which must be prohibited. One thing the spectator must not do is turn
around to look at the back of the auditorium. The turn that demarcates and orients space
must be relocated on the side of the screen.

Increasingly today we are confronted with the delocalizing effects of


contemporary media networks. The subject’s relation to space, the sense of “where”
one is, has been corroded by the proliferation of virtual spaces and the displacement of
the question “where” to that of “who” one is (e.g. the Facebook phenomenon)
(Sloterdijk). But delocalization has been an aspect of many if not all modern
technologies of representation and communication—the telephone, the railroad, the
telegraph, radio, cinema, television. Modern media have systematically worked to
disengage “place” from a specific site and make it transportable, exchangeable,
commodifiable. Do recent digital technologies—the iPod, IMAX, digital surround
sound—simply intensify this general tendency or can they be seen as different,
historically discontinuous? Their promise is that of the expansion of space to envelop
the spectator, to surround him or her in the production of a vicariously lived space. This
would seem to be a commodification without object—the commodification of
environment.

The turn in classical cinema—of the character, of the camera—can be seen as


compensatory. It works to reduce the sense of film as a two-dimensional medium and
buttress the spectatorial experience of volume, of depth, of a full space. It was often
markedly absent in the earliest silent cinema, where characters were positioned
theatrically, facing the camera/audience. With the advent of sustained narrative, this
position morphed into its opposite—the taboo against looking at the camera, the
insistence upon an autonomous space of the narrative, completely disengaged from the
space of the viewer. Since the viewer was no longer “there,” this opened up the
possibility of the character’s turning away from the camera, seemingly inconceivable
in much early silent film. However, even the earliest films played with this frontality
and its relation to the “back” space—sometimes explicitly. In In My Lady’s Boudoir
(1903), the female character’s back is to the camera but this receives compensation
through the fact that her face is reflected in the mirror for the benefit of the spectator.
(Figures 7,8,9) Here there is a concerted attempt to grapple with the inevitable two-
dimensionality of film, its flatness and material limits as a surface.

But again, this conceptualization of front and back and the turn concerns the space
of the diegesis and not that of the spectator. Both the turning character and the turning
camera mark out the space of the diegesis and delineate its volume. Yet, the spectator’s
space is defined differently. Vision is directional and the spectator who turns around
and no longer faces the film will miss a part of it, making that particular turn taboo,
prohibited. Nevertheless, the space behind the spectator has not been entirely neglected.
Often it has been activated by theorists in intriguing ways. Baudry, deploying Plato’s
allegory of the cave in which the prisoners are chained since infancy, allowed only to
look ahead at the screen of shadows, cites Plato’s imaginary scenario of turning around:
“Suppose one of them were set free and forced suddenly to stand up, turn his head, and
walk with eyes lifted to the light; all these movements would be painful, and he would
be too dazzled to make out objects…”. In Baudry’s analogy, it is the turn toward the
projector that breaks the illusion of the apparatus but it also connotes a certain violence,
a dazzlement of vision. And Christian Metz’s transcendental identification with the
camera and with the pure act of perception becomes in the screening an identification
with that other part of the apparatus—the projector, “an apparatus the spectator has
behind him, at the back of his head, that is, precisely where fantasy locates the ‘focus’
of all vision.” (253) In a discussion of the way in which Renaissance perspective, from
the outset, was linked to the concept of infinity, Hubert Damisch refers to infinity as
“an idea of what’s behind one’s head.” (121) (Figure 10) Hence, the non-place of this
“behind” in the theater is not empty, but instead replete with the subject’s relations to
illusion, the real, fantasy and infinity as well as answerable to a certain taboo against
the gaze in support of representation. The separation between “front” and “back” spaces
in relation to media has also been conceptualized as a structure of the social availability
of knowledge and ignorance by Anthony Giddens. Giddens claims that the “front”
space of society constitutes an open, accessible space for the general public, a place of
transparency and visibility. But the “back” space is “the locus of social information that
is hidden.” (qtd. in Sterne, 151) According to Jonathan Sterne, “Giddens and John
Thompson both argue that the rise of the mass media has coincided with the growth of
forms of communication that entail very small front spaces (relatively little available
information) in relation to relatively large back spaces (lots of unknown factors).” (p.
151) All of the arguments of 1970s film theory about concealing the apparatus and
hiding the work of the production of a film would seem to confirm this assertion. It is
arguable that the “back spaces” of digital media are larger still. The spatial categories
of front and back are aligned with a form of social engineering of the availability of
information. The back spaces are those which are withheld, secret, deliberately opaque.

But 1970s film theory was primarily interested in cinema as a visual medium, with
only occasional references to sound. Although one cannot see what is behind one’s head,
one can hear it. And this three-dimensionality of sound is increasingly referenced by
film theorists. For instance, in a consideration of cinema and the ear, Thomas Elsaesser
and Malte Hagener claim that “hearing is always a three-dimensional, spatial perception,
i.e. it creates an acoustic space, because we hear in all directions” and quote Mirjam
Schaub, “the main ‘anthropological’ task of hearing […] [is] to stabilize our body in
space, hold it up, facilitate a three-dimensional orientation and, above all, ensure an all-
round security that includes even those spaces, objects and events that we cannot see,
especially what goes on behind our backs. Whereas the eye searches and plunders, the
ear listens in on what is plundering us. The ear is the organ of fear.” (131) The ear is
associated with a sense of balance and with contributing strongly to the apprehension
of the body’s location in space. Cinematic space is molded as much by sound as by the
dialectic of onscreen and offscreen space. Sound, as the material displacement or
vibration of airwaves, affects the entire body and not just the ears. Michel Chion
similarly stresses the fact that hearing “is omnidirectional. We cannot see what is
behind us, but we can hear all around.” (Acousmetre, 17) Although all of these
considerations allude to a phenomenological conceptualization of hearing and are part
of what Jonathan Sterne terms the “audio-visual litany,” that is, the string of
characteristics that are supposed to be natural to sound and hence dehistoricized, it is
significant that these specific traits are becoming more fundamental in recent years to
our understanding of cinematic sound. This is partially a function of the increasing
mobility of sound—it accompanies us everywhere and, in the theater, it has begun to
invade the space previously erased or at least reduced by classical cinema, the space of
the auditorium. But what does it do there?

One of the major debates in 1930s attempts to grapple with sound circulated
around the question of sound perspective. Sound perspective refers to the spectator’s
sense of a sound’s location in space and is determined by a number of factors including
volume, frequency, the balance with other sounds and the amount of reverberation. It
can be an effect of microphone placement or of post-production manipulations. In
conflict in the debate were the values of spatial realism (the localizability of an event,
the matching of image and sound) and the intelligibility of dialogue (which would be
lost at a certain distance if strict sound perspective were maintained). As Rick Altman
has shown, intelligibility of dialogue generally won out (except in very special cases),
undermining the perceived necessity of spatial fidelity of sound to image. What was
lost were all the qualities, including reverberation, that might be used to spatialize a
sound. The debate was settled, according to James Lastra, by “close-miking and a certain
‘frontality.’” (82) As Emily Thompson has pointed out, radio and other modern
deployments of sound, including soundproofing and the use of a directional flow of
sound in theaters, were a crucial reference point: “ …this kind of sound was everywhere.
In its commodified nature, in its direct and nonreverberant quality, in its emphasis on
the signal and freedom from noise, and in its ability to transcend traditional constraints
of time and space, the sound of the sound track was just another constituent of the
modern soundscape.” (284) The technical possibility of producing reverberation in the
studio, independently of the space of the original recording, freed sound from “any
architectural location in which a sound might be created: it was nothing but an effect, a
quality that could be meted out at will and added in any quantity to any electrical signal.”
(283) In a sense, sound was both everywhere and nowhere. What was at stake in these
debates were the limits of acceptability of the spacelessness of sound. A spaceless
sound is one that can be more easily disengaged from its specific geographical,
historical and political location and subjected to circulation as a commodity.
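
For readers curious how the perspective cues named above behave in practice, here is a crude illustrative sketch, not a description of any studio's actual technique; the inverse-distance law, the single 50 ms reflection standing in for a reverb tail, and the mix values are all assumptions made for the example.

    import numpy as np

    def apply_distance_cues(signal, fs, distance_m, reverb_mix):
        """Two classic sound-perspective cues, crudely simulated: level
        falls off roughly as 1/distance, and more distant sounds carry
        proportionally more reverberant energy (one 50 ms echo here)."""
        direct = signal / max(distance_m, 1.0)   # inverse-distance attenuation
        echo = np.zeros_like(direct)
        delay = int(0.05 * fs)                   # a single 50 ms reflection
        echo[delay:] = direct[:-delay] * 0.6
        return (1 - reverb_mix) * direct + reverb_mix * echo

    # The same source "close-miked" versus "across the room."
    fs = 16_000
    voice = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)  # 1 s placeholder tone
    near = apply_distance_cues(voice, fs, distance_m=1.0, reverb_mix=0.1)
    far = apply_distance_cues(voice, fs, distance_m=8.0, reverb_mix=0.5)
    print(abs(near).max(), abs(far).max())  # the distant version is quieter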

The sound perspective debates of the 1930s have somewhat uncannily re-emerged
with the production of new multi-channel systems, sound surround, digital sound and
the consequent proliferation of speakers throughout the auditorium. With respect to
questions of sound space, there are at least two ramifications of these changes. One
would be the accelerated annihilation of the sense of the specific space of the
auditorium in which a film is projected. Michel Chion claims that the choice of
architecture and building plans for new movie theaters has “mercilessly vanquished”
reverberation—“the result is that the sound feels very present and very neutral, but
suddenly one no longer has the feeling of the real dimensions of the room, no matter
how big it is.” (100) It is arguable that, perhaps with the exception of ostentatious
picture palaces that called attention to themselves, movie theaters have always been
designed to reduce a sense of their own specific spatial properties in order to “host” any
number of diegetic spaces proposed by a stream of ever-changing films. In order to
allow audiences to “go elsewhere,” theaters must become nonspaces or “nonplaces” to
adopt Marc Augé’s term to describe airports, shopping malls and any institutional space
that is eminently recognizable in a generic sense that has nothing to do with its specific
location. But for Chion, this process has intensified—theatrical sound has become so
“pure” and neutral that it has reduced any distinction between cinema sound and a good
home stereo system. Collective sound has been displaced by personal sound. This
pursuit of spatial anonymity characterizes the space of cinematic exhibition. But the
second ramification of the proliferation of multi-channel systems and sound-surround
concerns the space produced by the film, its diegetic space. For the multiplication of
potential sound sources exacerbates the issue of the localizability of sound. It appears
to demand a greater precision in matching sound and space and hence, in a sense, to
respatialize sound. Chion defines as the “superfield” the space produced in multi-
channel films by ambient sounds that surround the visual space and that “can issue from
loudspeakers outside the physical boundaries of the screen.” (150) According to Chion,
the fact that these sounds are more precisely located spatially releases contemporary
narrative film from the classical obligation of providing an establishing shot (typically
used to orient the spectator in relation to the use of close-ups and medium shots that
fragment that space). This results in a contemporary filmic style of fast editing and more
insistent use of close-ups because the “superfield provides a continuous and constant
consciousness of all the space surrounding the dramatic action.” (151) Modern
soundtracks endow the image track with a greater recognizability. Yet, echoing the
sound perspective debates of the 1930s, many sound technicians have been reticent
about “too much” sound realism (spatialization), about overuse of the speakers spread
over the auditorium, due to the potential distraction of the spectator’s attention away
from the screen. If sound has traditionally been used to tell us where to look, what is
visually important, its leakage into the auditorium presents a potential difficulty. Again,
harking back to the 1930s debates, this is particularly true in the case of dialogue, which
must be both intelligible and “present,” intimately bound to the image of the person,
whether visible or invisible, i.e. just over there, on the other side of the frame, in what
has traditionally been specified as the most significant form of off-screen space in
narrative film. Current sound practice tends to locate dialogue in the speakers behind
the screen, just as classical practice dictated. Ambient noise—leaves rustling, train
whistles in the distance, birds, rain, etc.—and music, forms of sound that can be more
easily dissociable and independent of the image, are those more likely to be channeled
to the speakers in the auditorium.

While many contemporary films restrict the use of surround speakers in


conventional ways, there are others using digital surround systems that reject classical
norms, leading Mark Kerins to suggest that Chion’s concept of the “superfield” is
already outdated. The superfield is continuous and autonomous in relation to the image
and its stability allows a heightened discontinuity in the image track. Kerins claims,
however, that there are many recent films in which the soundtrack is discontinuous,
precisely matching the image track by changing sound perspective with each newly
spatialized image (the first scene in Saving Private Ryan is a privileged example).
Kerins labels this new 360 degree space the “ultrafield.” The discontinuity in classical
terms of this space is so extreme that it even encourages the violation of the 180 degree
rule. The 180 degree rule, in Kerins’ account, assumes that the space behind the camera
(and by extension, the space behind the spectator) is irrelevant to the narrative and can
be suppressed. This places the viewer “outside of the world” and “not in it.” (138) By
violating the 180 degree rule, these films imply that “the camera cannot capture all the
action without turning around [my emphasis] to shoot ‘behind’ the audience.” (138)
There are a number of issues here, perhaps first and foremost the idea that the camera
might be able to “capture all the action.” Since the camera is the construction of the
possibility of “seeing” the “action,” this assertion implies that we are witnessing a
documentary event, of which we can see more or less. But because what we see—in
both documentary and fiction—is a function of the camera’s vision, there is no “all,” a
portion of which the camera can see. Granted, “behind” is in quotation marks here;
nevertheless, the language of immersion and being “in the middle of the action” recurs
frequently in Kerins’ discourse, which echoes that of the advertising for new film
technologies—not only DSS but IMAX, 3D, etc.—as discussed earlier.

While even the classical film attempted to absorb its audience, bring the spectator
into the diegesis, this rhetoric seems to have become more insistent with each “new”
technology. According to Kerins, in films using immersive sound, “the audience is
literally placed in the dramatic space of the movie, shifting the conception of cinema
from something ‘to be watched from the outside’—with audience members taking in a
scene in front of them—to something ‘to be personally experienced’—with audience
members literally placed in the middle of the diegetic environment and action.” (130)

The problem is that, unlike the characters, the spectators continue to face forward. The
true blind space is still behind them. The taboo nature of this space is indicated very
clearly by the fact that sound designers continue to be wary of over-localizing or over-
spatializing sounds to the extent that the spectator is distracted and pulled away from
the image/screen. This is evidenced most tellingly in what they refer to as the “exit door
effect” or “the exit sign effect,” in which, hypothetically, the spectator would try to
localize a sound and turn away from the screen in order to identify its source. Kerins
suggests that the exit door effect is no longer as pressing a concern after more than two
decades of multichannel sound and the “training” or “recalibration” of audiences, but
he does so in the context of a discussion about why, despite the potential of surround
sound, directors continue to be extremely conservative in their use of it. Outside of a
few instances, the rear speakers are generally used for ambient effects that do not call
out for a specific localization.

Dolby’s website introduces Dolby Atmos (short for atmosphere), its most recent
technical development in sound, with the promise to the movie-going public that it will
“Feel Every Dimension”—not just hear every dimension but feel its bodily impact.
Dolby Atmos is based on Audio Objects governed by metadata rather than on channels,
more precisely locating and scaling sounds and purportedly capable of working with
any theater’s configuration of speakers. [Figures 11 and 12, clips] The examples of
sounds are those taken, tellingly, from a sublime nature—birds, a waterfall, a
thunderstorm, etc. and the sublimity of the cinematic image corresponds to that of the
sound. Sound is “seen” as its source is pinpointed in the movement from speaker to
speaker in the auditorium, tracing the path of a helicopter seed. In the scene from Life
of Pi, the directionality of the sounds of fish flutters is reversed from left-right to right-
left on the cut from the tiger’s POV to Pi’s, violating the classical sound editing rule of
staggering sound cuts and image cuts to conceal the fact of the cut. “You,” according
to Dolby are the subject of a constant movement—“you” are “propelled into the story”
and “you” are “transported into a powerfully moving cinema experience”—a reiteration
of the discourse of immersion characterizing the advertising of IMAX and 3-D. In fact,
immersion now has a technical definition in relation to sound: immersive sound is “the
term used to describe sound that emanates from sources beyond the horizontal plane by
means of enhanced spatial properties such as additional height and overhead speakers
and localized apparent sound sources within the auditorium.” (This is from an article
entitled “The Spectrum of Immersive Sound” appearing in Film Journal International
in 2014 [Bill Cribbs and Larry McCrigier]). While this definition strikes one as dry and
technical, without the affective valence of the usual discussions of immersion, the
article begins with the description of an immersive sound experience: “Imagine
stepping from life and being totally immersed in the story during your next cinema
experience. Hearing everything as if you were actually there in the scene. Close your
eyes. You're at a cafe in Paris, around you dishes are clanking and patrons are engaged
in conversations. A woman is shouting from a third-floor window and birds are chirping
in the trees. High overhead, a jet cruises by, and you subconsciously note that it's
departing to the east. You hear the familiar footsteps of your date approaching behind
you. You hear all these details exactly where they belong. This is the goal of Immersive
Sound, the next big advance in cinema technology.” The fact that “you” are asked to
close your eyes is symptomatic of the continuing tensions between 3D sound and 2D
image localization. To “hear all these details exactly where they belong” requires
denying the visual space that does not support (or supports only figuratively) the sound
space. The Dolby Atmos website situates the difference of this technology in a more
powerful bass and overhead sound that “heightens the realism of your cinematic
experience.” And, finally, your own location is made irrelevant: “no matter where you
sit in the theatre…”, you will have access to this moving experience.
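
Since Dolby's actual renderer is proprietary, the channel-to-object shift described above can only be sketched schematically here: an "audio object" is a sound plus positional metadata, and a renderer converts that position into per-speaker gains for whatever layout a particular theater reports. Everything below (the layout, the speaker names, the gain law) is invented for the illustration.

    import math
    from dataclasses import dataclass

    @dataclass
    class AudioObject:
        name: str
        x: float  # metadata: -1 (house left) to 1 (house right)
        y: float  # metadata: -1 (back of auditorium) to 1 (behind the screen)

    # One possible layout; a real installation would report its own.
    SPEAKERS = {"screen-left": (-1, 1), "screen-right": (1, 1),
                "surround-left": (-1, -1), "surround-right": (1, -1)}

    def render_gains(obj):
        """Toy panner: each speaker's gain falls off with its distance from
        the object's position, then the gains are normalized. Because the
        mix stores (x, y) per sound, the same mix adapts to any layout."""
        raw = {s: 1.0 / (0.3 + math.dist((obj.x, obj.y), pos))
               for s, pos in SPEAKERS.items()}
        total = sum(raw.values())
        return {s: round(g / total, 2) for s, g in raw.items()}

    # A sound object drifting from behind the screen into the auditorium.
    for y in (1.0, 0.0, -1.0):
        print(y, render_gains(AudioObject("helicopter seed", x=0.5, y=y)))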

Why this insistent rhetoric refining and insisting upon the immersion of the
spectator in the diegesis? Why does it, beyond the promises of classical cinema,
produce a contract that pledges the film will enter the space of the auditorium and
envelop the spectator? Why the insistence upon “enlarging” the diegesis (the space of
fantasy), as if it were not large enough already? Why deny the crucial (and necessary)
incommensurability of the space of the spectator and that of the diegesis? It would be a
mistake to try to understand surround sound as separate from IMAX and 3-D, other
attempts to expand the space of the diegesis in as many directions as possible. Surround
sound and multi-channel systems, by moving sound into the space of the auditorium,
assist in this annihilation of the frame line.

By locating us everywhere, in an other space, the rhetoric of immersion reduces


the compelling nature of location, of the body’s ability, as Lefebvre points out, of
demarcating and analyzing, not to mention producing, space, in a particular historical
and social context. 70s film theory’s emphasis upon disembodiment was perhaps,
always, more about delocalization, about the erasure of the spectator’s space in favor
of another space. For the body has been increasingly in play in terms of the number of
senses the cinema seeks to activate (perhaps impacting a recent trend in film theory
toward haptic cinema?). The spectator is a body, but a body defined entirely by its senses
and the heightening of those senses. Jonathan Crary has delineated a historical process
producing the separability and isolation of the senses. Modern media purport to reunite
them, but only at the expense of their derealization and dislocation/relocation. This
delocalization is, of course, not specific to cinema, which can’t be understood outside
a network of new media configurations. Elsaesser emphasizes mobility—the movement
from speaker to speaker in digital surround sound as a reflection of the increasing
mobility of sound systems since the Sony Walkman of the 1970s. And this is, assuredly,
correct. But the Sony Walkman (and the iPod, smart phone, etc.) not only move us but
they extract us and abstract us from the space we are navigating. The space we enter
with these devices is more commodifiable, less apt to make us question where we are—
not necessarily only geographically, but historically, socially, politically.

Perhaps the most striking delocalization masquerading as localization is the map
posted in urban space that specifies “You are here.” [Fig. 13] This is a rather large point
representing “you,” but it is still a point, a point that, in mathematics, is without
extension and takes up no space. And the same is true, despite its mobility, of the point that
represents “you” in Google Maps as you navigate an unknown territory. This is not only
surveillance—“they” know where you are, but the reduction of your spatiality, the fact
that the body is itself a space, to a point. This only seems “natural” because we are
accustomed to thinking of ourselves as points within a network. Social media
ameliorate this by purportedly giving “you” an identity—but where are you when you
post on social media?

So, why have I emphasized the “turn” and its function both in classical cinema and
the cinema of today? The turn in classical cinema had a quite precise effect—that of
indicating the lost dimension of the image—within the diegesis. There was no question
of the spectator herself turning, looking away from the screen. That turn invokes the
possibility of another space, the missing space, behind the spectator. Sound surround,
in its most current uses, hopes to make this space palpable, to conquer the otherwise
and formerly taboo space of the rear of the theatre, the back of the spectator. Perhaps
the “last” territory. But it must do so very carefully, with restraint. For the spectator,
turning and looking behind is not just a refusal of the screen but an acknowledgement
of the existence of an exit.

How a Gaze Can Become Violence:

Representations of the North Korean Sports Team at the Pyeongchang Winter Olympics

Myung-koo Kang

This paper seeks to explain how the mainstream media in South Korea views North Korea within a framework of violence. One can choose the frame through which to see things, but South Korea's mainstream media has chosen the perspective of exclusion and hatred toward North Korea, a way of seeing that produces an argument from authority in the real world and forces the South Korean people into silence. The paper therefore reviews the news coverage of the North Korean sports team, delegation, and cheering squad that visited the South to take part in the Pyeongchang Winter Olympics, in order to show how the mainstream South Korean media displays and reproduces hatred of them.

First, the anticipation of war is an anxiety about a future that does not actualize; far from being a rational inference from facts and evidence, it instead deepens anxiety. Is this anticipation of war real or fantasy? It can be said to be real in that it exerts a real, regulatory power that exceeds our ways of feeling and representation; it can also be called a fantasy, since it does not exist in reality. Yet it is real in that the anticipation of war produces an inner fear, exceeding the psychological boundaries of individuals in a collective setting and lodging a fictional situation within an existing "danger."

Second, the paper examines a series of everyday statements and newspaper reports and the different ways of seeing used to incite hatred. The appearance of the North Korean delegation, especially the looks of its female leaders, was objectified by the camera: reports dwelt, for example, on Special Envoy Kim Yo Jong's pregnancy and freckles, on rumors of a romance between Hyun Song Wol and Kim Jong Un, and on the captive bodies of the North Korean cheering squad.

Moreover, the Olympic cheering squad was composed of people already selected by the regime for purposes of display; they knew they would become objects of viewing, and it may be assumed they were constantly aware of that fact. Under the gaze of the North Korean regime, they were also ceaselessly watched by the cameras, whether during concerts or on an outing to the beach. The delegation and the cheering squad were subjected to this voyeuristic gaze; although the squad knew they would be exposed to South Korean cameras, they remained helpless, able only to appear before the lens as captured bodies.

As argued above, the violence of the gaze is grounded in the viewer's anxiety and in a sense of crisis about one's own conditions of existence. It is therefore necessary to transform the present antagonistic coexistence into a coexistence of peace and solidarity; only then can a secure life for the people of the Korean Peninsula be guaranteed. The reason the cheering squad had to exist in the realm of the abnormal and the illegitimate is precisely that they were taken as North Korea's propaganda strategy; their natural behavior and demeanor, their casual laughter and conversation, were a "normality" the mainstream media found hard to accept, as if the media were gazing at them through a hidden camera, discovering behavior that was abnormal, deviant, awkward, and strange.

Myung-koo Kang

How a Gaze Can Become Violence:


Representations of the North Korean Sports Team to
Pyeongchang Olympic

Introduction
This essay aims to explain how the mainstream media in South Korea views North
Korea within the framework of the gaze as violence. One can look through eyes of love
and consideration for others, or with a gaze of exclusion and disgust. To begin with the
conclusion: the mainstream media in South Korea views North Korea through a lens of
exclusion and hate. Such a way of seeing others results in an argument from authority
in the real world, which amounts to collective punishment because it mobilizes the
South Korean people into tamed agreement and forces silence.
A way of seeing is not merely a way of looking at things and people other than
oneself; it is also the result of how the individual and society internalize and act on the
object seen. The way one sees things reveals not just an individual's desires, but also
those constructed by society as a group. This paper attempts to reveal how
mainstream South Korean media produces and reproduces hatred of North Korea by
reviewing the various journalistic reports made about the North Korean sports team,
representative delegation, and cheer squad during their visit to South Korea to join the
Pyeongchang Winter Olympics as a unified Korea team.

1. Anticipation of War: Anxiety of a Future that Does Not Actualize


Anticipation is set into motion by an approaching future that does not occur.
Instead of allowing us to make a rational deduction based on truth and evidence, it
stimulates further anxiety. A North Korean nuclear attack (be it towards the US or
South Korea), a preemptive strike by the U.S., a war on the Korean Peninsula: these are
situations many people believe will not actually occur, but there remains the affect of
a faint, continued anxiety. Anticipation
of war can cause feelings that “there really is no way it will actually happen,” or “this
is just a phase,” or even the anxiety that “war may break out tomorrow” in those living
in the Korean Peninsula today. In addition to traditional forms of media such as TV and
newspapers, social media platforms such as Twitter and Facebook also warn about the
dangers of war and proclaim war “is imminent” as if “the breakout of war is desired.”

Is this anticipation of war real or fantasy? It is possible to say that this anticipation
exerts a real, regulatory power over how we feel and behave. However, we can also say
it is a fantasy, because it does not exist in reality, only in the words of the media and of
people. Yet it is real because the anticipation of war creates an internal fear, extending
its reach beyond the psychological states of individuals to those of group settings.
Mainstream South Korean media's perspective on North Korea's team of athletes at the
Pyeongchang Olympics was made possible through the fantasy of an anticipation of
war, and it demonstrated an undeniably real effect.
It is hardly news that the continued state of division between South Korea and
North Korea has been based on an antagonistic coexistence. The peninsula is the most
heavily militarized place in the world, and it is a fact that now more than ever, the
possibility of military altercations has increased because of North Korea's nuclear
missile testing. Furthermore, it is also true that a geopolitical structure was formed
between maritime powers (U.S., Japan, South Korea) and continental powers (China,
Russia, North Korea) after Obama's declared policy shift, the "pivot to Asia." The
antagonistic coexistence of the South and the North thus goes beyond a state of
inter-Korean confrontation onto the post-Cold War scene.
Such an antagonistic view wields the effect of truth when it is expanded to a global
level of antagonistic coexistence between sea and land powers. After North Korea’s
nuclear testing, it has become common to hear claims that an American preemptive
strike is imminent or that it is necessary. These claims are developed on the rationale
that “the responsibility lies in North Korea’s belligerent behavior, and the South has
continually pointed this out and sent warnings.”
The deployment of THAAD in South Korea has seen a continued stream of TV
news and mainstream newspaper articles warning of the imminent threat of war. After
the Pyeongchang Olympics commenced, and troupe leader Hyun Song Wol and Special
Envoy Kim Yo Jong visited, this type of reporting became more frequent and
widespread.

[Table 1] Yonhap News TV "News Focus" (February 1): List of Statements Regarding
"Preemptive Strikes Against North Korea"

Kim Jeong-bong:
"When we claim one as an enemy, we may say 'die, you.' However, when America considers one an enemy, it is a target that must be beaten, broken and destroyed. So it is a very scary thing when the U.S. determines one to be an enemy."
"There is no other way to see this than U.S. efforts to accumulate justification to attack North Korea."

Shin In-kyun:
"It has become highly likely that the U.S. will make decisions unilaterally, be it military or diplomatic, regardless of the South Korean government's intentions."
"I think we should have the view that war could happen at any time regardless of the intentions of our government."
"Doesn't it really feel like the U.S. will attack North Korea right now?"
"The U.S. is at the phase of strategically gathering justification to attack North Korea at any time."
"There is an increasing potential for the realization of the US military option."
"The U.S. Air Force has removed the reflectors off of F-22s so that they will not show up on radar. This suggests this is a real battle."

Park Sang-ryul (Panel Host):
"Is it possible to suggest that preparation is being made for the next step, with the real intention of immediate action if necessary?"
"We can interpret that the possibility of a preemptive strike, if really need be, is not a falsehood."

Park Ga-young (Panel Host):
"They have prepared as many military options as we expected, and perhaps more."

Source: Citizens' Coalition for Democratic Media Weekly Monitoring Report, February 2018

On its February 1 broadcast of "News Focus", Yonhap News TV, an affiliate of
Yonhap News Agency, the government-invested and sole national news agency,
reported on North Korea's military parade and claimed the possibility of war through
multiple news points (refer to Table 1). All expert panelists were currently or formerly
employed at defense-related organizations. Kim Jeong-bong, former Department Head
of the National Intelligence Service, and Director Shin In-kyun of Korea Defense
Network are both permanent panelists for Yonhap News TV. Kim Jeong-bong
repeatedly made unsubstantiated claims such as "there is anticipation that North
Korea's military parade will display weaponry that will aggravate the US and the
international community" and "who can win a war against the U.S.? North Korea also
knows that it would face annihilation". "News Focus" is a real-time live broadcast.
A live broadcast of news about war, or about the oncoming threat of war,
actualizes it in the sphere of time as an actual situation that will happen in the future.
It thereby turns an imaginary situation that has yet to occur into an existing "danger".

If we transition to a “peace regime” as they (North Korea) desire,
it is unknown how much longer the North Korean people will have to
suffer through a hell of human rights abuses. If the U.S.-R.O.K alliance
is broken and a North Korea-led unification occurs, just like Kim Jong
Un seems to want given his 12-time use of the word “unification” during
his New Year’s Address, our daughters may face the same fate and
sacrifice of the majority of North Korean defector women who
experienced trafficking and prostitution in China. Yet in this country,
the leftist camp that cries out for human rights generally remains silent
when it comes to North Korean human rights. It is a waste of breath to
state that this is hypocrisy.
(Dong-A Ilbo, Kim Soon-deok Column, February 12, 2018, emphasis
added by author)

We call the warning of, and the danger posed by, an event that has not occurred
"premediation". Grusin has conceptualized premediated terrorism as terrorism
witnessed through news that reports on the warnings and dangers of terror.1
In a way, this kind of warning has become habitual in South Korean society, and even
the average person living on the Korean Peninsula has become accustomed to threats
of war. Events that symbolically "do not occur" go unreported in the media. But war,
which should not happen in the real world, is understood as something that could
actualize at any moment. This real possibility has been a constant staple of news
reporting since the Korean War. Reporting on North Korean nuclear development is
based on the hypothetical situation of North Korea launching a nuclear attack on the
U.S. mainland (very similar to the assumption that convicted Hussein's regime of
possessing WMD and led to the Iraq War), and action plans, countermeasures, and
strategies are constantly discussed with this as a premise. It is noteworthy that a
situation that has not been realized (though anything is possible at any time) has
produced an affect of warning and anxiety now rooted in the Korean people, and it is
upon this affect that disgust of North Korea exerts its influence.

2. The North Korean Olympic Team that "Directed" a Pyongyang Olympics: Belittling and Careless Treatment

Daily statements and reports by the New Korea Party, the Chosun Ilbo, and multiple
news channels mocked the Pyeongchang Olympics as the "Pyongyang Olympics." A
selection of these will be examined to review the various ways of seeing, and how they
instigate hatred.

1 In Premediation: Affect and Mediality After 9/11 (Palgrave, 2010), Grusin uses the concept
of "premediation" to critically explain how predictions and warnings about terror or danger lead to
actual situations, and how viewers become accustomed to these predictions, resulting in an
international phenomenon of helplessness and lethargy.

1) Special Envoy Kim Yo Jong

[Figure 1] Special Envoy Kim Yo Jong, screen capture from TV Chosun

Figure 1 is a screenshot of the February 9 TV Chosun news, which displayed the
subtitle "Freckles and Moles Visible on Skin" and aired under the subheading
"All-Black Fashion and Color Makeup," seen in the top left corner. Additionally, a
video clip of Vice Department Director Kim shaking hands with Hyun Song Wol was
played repeatedly while close commentary was made on Kim's clothing and body. The
reporter's voice read the following: "Many experts have noticed that Kim's stomach is
slightly enlarged, as seen under her coat. The Korean National Intelligence Service has
previously revealed that Kim Yo Jong gave birth in 2015, and she wore a similar coat
during that time. Judging by her posture of leaning her back out and waist in, it appears
that she is five months pregnant. An obstetrician specialist confirmed this. However,
the specialist could not be certain without a direct examination."2

2 At the time of the reporting, Kim Yo Jong's pregnancy had not been confirmed. It was
confirmed by the spokesperson of the Blue House a few days later. However, it was clearly beyond
rational reporting guidelines to report on such a private issue as the pregnancy of the North
Korean special envoy when she herself had not mentioned anything about it.
2) Reporting on Hyun Song Wol

[Figure 2] Troupe leader Hyun Song Wol at the inter-Korean talks on January 15

Hyun Song Wol, head of the Moranbong Band, enters the meeting
location with a light smile on her lips. She wears a two-piece business
suit in navy with black high heels. It is a different look from the
military uniform she wore when she canceled the performance in Beijing.
"She dressed up her hair with a flower pin."
Despite rumors that had circulated about romantic involvement
with Kim Jong Un, today she wore a ring on her left ring finger. The
green leather purse seen when she took out her notebook is a product of
the famed European luxury brand "H," and is estimated at 20 million
Korean won if authentic.
Today's meeting, a "working-level contact of art troupes," took
place as the North's counter-proposal to our "senior working-level talks".
(TV Chosun, January 15, 2018, "Hyun Song Wol Appears with Luxury
Bag and Wedding Ring")

As seen in the quote above, the focus on the rumor of her romantic relationship
with Kim Jong Un, the ring on her left hand, the hair pin, and the luxury alligator bag
placed Hyun Song Wol solely in the context of being a woman rather than a delegate
to the art troupe working-level talks. The statements belittled her and treated her lightly,
and the reporting on the ring seemed to express a certain disappointment at the falsity
of the rumor involving her with Kim Jong Un.

3) Captured Bodies of the North Korean Cheer Squad

(North Korean cheer squad waving the Korean Unification Flag, Yonhap News, January 18)

(Citizens' Coalition for Democratic Media broadcast monitoring report)

The two figures above show images that became common after the start of the
Pyeongchang Olympics. The four screenshots in the lower figure are the opening shots
that major broadcasters, including KBS, MBC, JTBC, and Yonhap News, used to report
the first day the North Korean cheer squad arrived in the South. Every broadcaster
began its report with a first shot of the cheer squad's legs or calves. It goes without
saying that this is a view that objectifies women's bodies. The women were first
commodified by North Korea, which selected only beautiful women to serve as objects
of exhibition in the South, and they were commodified a second time by the South
Korean media. To the media these images seemed not strange at all; they were
reproductions of images repeated many times before.
What must be questioned in the reappearance of the above three images is how it
was possible for such news broadcasters to report so carelessly on the North Korean
delegation, despite ongoing efforts to decrease war threats via the Pyeongchang
Olympics. It is possible to criticize and place blame on commercialized journalism or
the ways of seeing that objectify women, but these are difficult to accept as sufficient
explanations for such journalistic practices. There is a need to clarify and shed light on
the different devices ingrained in the ways the mainstream media sees things.

3. Arrangement of Normality and Abnormality
As is evident in the three examples above, the North Korean delegation's
appearance, and especially the physical appearance of its women leaders, was
objectified through a disciplinary lens. Sophisticated fashion and appearance, beauty,
and smiling faces are the normal. Most media in general tend to hold views that
objectify and display women. However, the way the South Korean media cameras
looked at the North Korean delegation this time surpassed the usual level of
objectification, and did not hide expressions of confusion or an inability to accept what
was being seen. Firstly, Kim Yo Jong and Hyun Song Wol's manners, facial
expressions, and fashion were rather normal relative to the expectations of the South
Korean media. The media searched for deviation and anomalies in their smiling faces
and in their sophisticated, calm movements and answers, but found them to be very
normal.
Kim Yo Jong and Hyun Song Wol’s ways of behavior and speech should have
been accessories of the abusive North Korean regime symbolized by missiles and
military assemblies, like robots ready to shout “Great Leader” at any time. Instead they
gave normal handshakes and greetings, laughing and conversing. When those who are
expected to be abnormal act normally, one must look even closer to find abnormalities.
This is the reason for mainstream media’s focus on Kim Yo Jong’s pregnancy and
freckles, Hyun Song Wol’s romance rumors, luxury bag, and scarf, among others. In
fact, Yonhap News even aired a recording of cheer squad members waiting in line
inside the women’s restroom.3 When something breaks expectation and is abnormally
normal, one tends to rationalize it by pushing it past the borders of normalcy. Luxury
goods, sophisticated fashion, but juxtaposed with freckles and a pregnant female body
(not as a productive body but rather as a body having conceived a dangerous child of
the Baekdu bloodline) positions the information as abnormal. And with this, an
irrational report is justified.
The normal behaviors and manners of the North Korean delegation, the Blue
House and ruling party treating them as normal diplomatic partners, and the welcoming
South Korean citizens were all abnormal actions destabilizing South and North Korea's
symbiotic antagonism. This can only be received as a threat by South Korea's
conservative powers, whose existence is one of the axes resulting from this antagonistic
relationship. Symbiotic antagonism is maintained not only through military and
political power, but also through the mentality of a divided system (anti-communism).

4. Voyeuristic Gaze: Subjects Thrown Into Defenseless Circumstances


The North Korean cheer squad was composed of those who had already been
selected by the regime for display purposes. They knew that they would become objects
to be watched in the South, and it is assumed that they were constantly aware of this
fact. They were under the gaze of the North Korean regime, but also under the constant
gaze of the camera, be it during the concert or even when they visited the beach for a
break. One's actions and words cannot be very natural or free when one is conscious of
being under someone else's gaze.4 They cannot laugh freely, nor can they not laugh. It
is difficult to speak, but it is also not possible not to speak. The media maintained a
voyeuristic gaze at this group while they were in defenseless circumstances.

3 The reporter's action of following the cheer squad into the women's restroom with a hidden
camera to take pictures is itself strange and abnormal. What sort of abnormality did the camera
hope to capture?

Why is the North Korean cheer squad watching South Korean TV in their
accommodations something strange and newsworthy? TV Chosun narrated the
following while broadcasting this point.

“At night, they watch our TV shows. This act doesn’t seem to be
secretive, as two people sit side by side watching TV.” “(When those
people turn the TV on, they see our channels, right?) When I checked
the rooms the day before yesterday, all the (South Korean) channels
worked. They work fine but who knows what happened.”

The report treats it as strange that North Koreans watch South Korean TV when it
should be prohibited to them. The report itself reveals that the footage was caught by
zooming in and peeping through an open window with a telephoto lens. The
significance of the voyeuristic gaze needs to be discussed at this point.

4 One wonders what would have happened if the cheer squad had prohibited cameras and refused
interviews to protect their privacy (one should have the right to lie on a beach without the watchful
eyes of others) while on break at Gyeongpodae Beach.
Ways of looking at a counterpart who is difficult to face directly include sneaking
peeks and stealing glances. The most representative is the gaze of misogyny.
Misogynist men select physically weak women as the target of their hate and disgust
in order to disguise and hide their own vulnerabilities. These men generally cannot face
women as equal agents. They threaten and attack others in order to hide their
weaknesses and deficiencies. Consideration of others and helping others are not
possible for them; it is unimaginable to help and be considerate of others when one is
oneself empty and vulnerable. They hide their own vulnerabilities and ignore those of
others. And because they must hide their deficiencies and position, they sneak peeks at
others. Those who sneak peeks believe their targets are abnormal and impure.
The North Korean delegation and cheer squad were subjected to this kind of
voyeuristic gaze, and though the cheer squad knew they would be exposed to South
Korean cameras, they were left in a defenseless condition. They had no option but to
be exposed in front of the camera as captured bodies that had to laugh, but not
mindlessly; could not get caught watching TV; made sure not to be seen possessing a
luxury bag; and had to put on makeup without the choice of not putting it on. This is
how the South Korean mainstream media's gaze at the North Korean delegation and
cheer squad became abusive and violent.

5. Concluding remarks
As examined above, the violence of gazes is based on the viewer's anxiety and
feelings of danger about one's conditions of existence. It is necessary to change the
current antagonistic coexistence into a coexistence of peace and togetherness, because
only then will a safe and secure life be guaranteed for those living on the Korean
Peninsula. North Korea's nuclear threat has been a favorable condition, providing
political soil for the vested interests in North Korea's governing system and for South
Korea's conservative powers, both of which have depended on antagonistic coexistence
to maintain their power. The Abe administration in Japan also safely overcame political
crises thanks to North Korea's missile launches.
The joint entrance under the Korean Unification Flag, the unified ice hockey team,
and the cheer squad at the Pyeongchang Olympics had to dwell in the spheres of
abnormality and illegality because they were taken as North Korea's propaganda
strategy. Their natural actions and behaviors, casual laughter and conversations, were
a "normal" that the mainstream media found difficult to accept. It was as if the media
gazed at them through the lens of a hidden camera in order to find abnormal, deviant,
awkward, and strange behavior.

Outsourcing the Intellect

Christina Vagt

In the view of French philosopher Bruno Latour, postmodern theory is a self-diminishing movement whose danger is rooted in the Western tradition of distrusting immutable facts. This paper argues that the problem of contemporary critical philosophy, postmodern theory included, lies not there but in criticism's distrust of human rationality. As early as the nineteenth century, Schopenhauer was the first to articulate a distrust of human reason from the perspective of the will to live. In Schopenhauer's philosophy, will and intellect stand in a complex antagonistic relationship, and the intellect itself is suspect: it is nothing more than an unreliable function of the organism, constantly disturbed by the will. The mistrust of the human intellect thus precedes the mistrust of facts. In the 1950s, the development of artificial intelligence provided new grounds for this mistrust. Herbert Simon's pioneering work in artificial intelligence and behavioral economics, which won him the 1978 Nobel Prize in Economics, also raised to a new level the use of electronic computers to replace humans in making optimized decisions; human intelligence faced a severe challenge from the machine. For behavioral economists such as Simon, there is no need for the so-called "homo economicus," since human beings themselves lack rationality; only systematic decision theory can yield the most "satisficing" decisions. Today, artificial intelligence has grown from a secret research project of Herbert Simon's era into a massive commercial enterprise. What is worrying, however, is that the misery of critical philosophy lies not, as Latour claims, in a postmodern lack of empiricism, but in artificial intelligence systems being harnessed by a politics that makes political decisions entirely according to the economic model of cost and benefit.

Christina Vagt

Outsourcing the Intellect

Die Welt ist meine Vorstellung. (The world is my presentation.)1

1. Mistrust of the Intellect
In times of climate change denial and other conspiracy theories circulating in
nationalist politics and media, French philosopher Bruno Latour cautions against a
(postmodern) criticism that devours its own offspring. Postmodern theory appears in the
eyes of Latour, himself one of its key protagonists, as a self-diminishing movement,
and its danger lies in the old (Western) tradition of distrusting immutable facts by
presenting them as ideologically biased.2 Is it criticism itself that produces these effects?3
Latour's answer to this stated misery of criticism is to turn towards a new
empiricism, an empiricism located somewhere between Martin Heidegger's
Thing romanticism and Alfred North Whitehead's process ontology; an empiricism
that promises to return to things and quasi-objects the agency they have lost through
criticism and politics.
Latour’s text is still relevant today, even 15 years after being first published,
maybe today even more than when he wrote it. But in my opinion, the actual problem
of criticism today is less rooted in a mistrust of immutable facts, but has something to
do with a much more profound mistrust of human rationality, and derived from that, a
mistrust can be traced back to a structural much more profound mistrust of the
rationality of human action and derived from that a deep mistrust of the
comprehensibility and governability of a human-made world.
Over the course of the 19th Century, the world that predates all asserted facticity
transforms into a whimsical hybrid of organisms and symbols, of will and intellect.
Arthur Schopenhauer is one of the first to articulate his mistrust of human reason from
the perspective of the will to live. The will belongs to life, to the organism, it sits in
every network of roots as much as in the seeds that it drives towards the surface of the
earth. But the will is “unanschaulich”, nondescript, timeless, and discreet, without any
representation. But even though it only knows affirmation and negation, it founds all

1
Schopenhauer, Die Welt als Wille und Vorstellung ()World as Will and Presentation, book1, §1)
2
Cf. The marketing text of Diaphanes Press for the German translation of Bruno Latour’s text...
3 Latour, “Why has critique run out of steam?” (2004)

74
community, because the will governs all cooperation. Whereas the intellect is self-
centered, oriented only towards the individual which has to form it. The intellect is as
much modifiable as it is limited: Unaware of itself, calculating, it insists on taming the
will, but at the same time it is limited to the sphere of visibility and enlightenment. The
dilemma of the intellect is already present in the philosophy of Schopenhauer, before
the third narcissistic blow in form of Freudian psychoanalysis hit the human subject.
Already in Schopenhauer, it entertains a distorted relationship with the will, just like
the will is being hindered in its drive by the intellect. The brain is simply a parasite of
the living organism, and only a genius can almost suppress the will entirely. The
mistrust of the intellect therefore precedes the mistrust of the facts. Since at least
Schopenhauer, the intellect is under the suspicion, at least within a certain European
tradition of thought, to be nothing more than an erratic function of the organic, and
constantly interrupted by the ‘true’ continuum of life.

2. Technological Enhancement of Intelligence

Since the 1950s, this mistrust of the intellect has had a new cause, one that gives
rise to both hope and critique. When Herbert A. Simon, founding figure of both artificial
intelligence and behavioral economics, states in 1969 that the world we live in is an
artificial one rather than one of natural causes, he could be quoting Schopenhauer's The
World as Will and Presentation: "The world we live in today is much more a man-made,
or artificial, world than it is a natural world." 4
In the 1930’s and 1940’s, Simon studied at the University of Chicago mathematics,
economics, and political sciences, and with Rudolf Carnap logic and philosophy of
sciences, before starting to work for the RAND corporation in 1952.5 There, Simon
found ideal conditions to advance his research. The think tank dedicated to advising the
US military offered vast economic and human resources and owned the JOHNNIAC,
one of the few computers that in principle was big and fast enough to learn chess. 6
Together with Allen Newell, Simon developed the first computer program that solved
non-numerical problems through selective search. It ran on the architecture of the
JOHNNIAC and is recognized today as the computer-technological beginning of
artificial intelligence. To Simon the world with all its artifacts appears to be rather an
artificial than a natural one, in which there appears to be no significant difference
between artifacts and natural beings in the first place, “for those things we call artifacts
are not apart from nature. They have no dispensation to ignore or violate natural law.
At the same time, they are adapted to human goals and purposes. They are what they
are in order to satisfy our desire to fly or to eat well. As our aims change, so too do our
artifacts and vice versa, as well.” 7 Simon as well as Schopenhauer understand the

4 Herbert A. Simon: The Sciences of the Artificial, Boston 1969, p. 3.
5 Carnap was the first philosopher to present the philosophy of mind as a computational program
(cf. Glymour, Ford, and Hayes, "The Prehistory of Android Epistemology," in: Android
Epistemology, MIT Press 1995, pp. 3-23, here p. 18).
6 Herbert A. Simon: Models of My Life, Boston 1991, p. 202.
7 Simon: The Sciences of the Artificial (as in n. 4), p. 3.

intellect as characterized through education and artificiality, rather than through the
living beings that bring it forth, and they share the insight into its limitations. For
Schopenhauer, it is the task of philosophy to present the concrete world in abstract terms,
to summarize the complexity in which it appears to the individual in abstract and
general terms:

“Thus it will on the one hand separate and on the other hand unite,
in order to deliver, for the sake of knowledge, any and all of the manifold
things in the world (…). Philosophy will be, accordingly a summa of the
most general judgements whose immediate cognitive ground is the
world itself in its totality, without the exclusion of anything: thus
everything that is to be found within human consciousness. It will be a
complete replication, as it were a mirroring, of the world in abstract
concepts, which is only possible by uniting the essentially identical
within one concept and separating out that which is different in
another.”8

While Schopenhauer’s model lingers within the antagonistic relation between will
and intellect of 19th century philosophy, Simon builds his model of “bounded rationality”
after World War 2 in the new medium of computer simulations and with the declared
goal to dispel any suspicion of an élan vital (life force, vitalistic force) in the heart of
intellectual processes. 9 Simulation itself was nothing new, Simon writes, but the
spectrum of systems that can be simulated grew by large through digital computers and
their degree of abstraction. No other simulation technique like thought experiments or
wind tunnel set ups is as “protean”, as adaptive, and as capable when it comes to
functional description, and therefore as mathematical.10
Simon and Newell call their computer program the Logic Theorist. Unlike
traditional Operations Analysis programs, it does not search for the optimal solution of
a decision problem by running through all possibilities; it discards the majority of
possibilities at the outset without exhaustively testing them, and pursues the remaining
possibilities only as long as it takes to find a satisfactory solution. Facing complex
problems, the Logic Theorist reaches its goal faster. Within one year, the computer was
able to solve the first 25 theorems of the Principia Mathematica, in some cases even in
a more elegant way than its human predecessors.11

8 »daher wird sie teils trennen, teils vereinigen, um alles Mannigfaltige der Welt überhaupt (...)
dem Wissen zu überliefern. Die Philosophie wird demnach eine Summe sehr allgemeiner Urteile
sein, deren Erkenntnisgrund unmittelbar die Welt selbst in ihrer Gesamtheit ist, ohne irgend etwas
auszuschließen; sie wird sein eine vollständige Wiederholung, gleichsam Abspiegelung der Welt
in abstrakten Begriffen, welche allein möglich ist durch Vereinigung des wesentlich Identischen in
einen Begriff und Aussonderung des Verschiedenen zu einem andern.« (Schopenhauer: Die Welt
als Wille und Vorstellung I, p. 104.)
9 Newell and Simon: The Simulation of Human Thought, p. 7.
10 Simon: The Sciences of the Artificial (as in n. 4), p. 18.
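To make the heuristic concrete, here is a minimal sketch in Python of the contrast between exhaustive optimization and the selective, satisficing search described above; the candidate pool, the aspiration level, and the search budget are invented for illustration and are not Simon and Newell's actual procedure.

import random

random.seed(0)
candidates = [random.random() for _ in range(100_000)]

# Exhaustive search: examine every possibility to find the optimum.
best = max(candidates)

# Satisficing: discard most possibilities up front (sample only a small
# budget) and stop as soon as a candidate clears the aspiration level.
def satisfice(pool, aspiration=0.95, budget=200):
    for candidate in random.sample(pool, budget):
        if candidate >= aspiration:
            return candidate   # good enough: stop searching
    return None                # no satisfactory solution within the budget

print(best, satisfice(candidates))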
Simon and Newell presented the program in 1956 at the founding conference for
artificial intelligence at Dartmouth College, to John McCarthy, Claude Shannon, Oliver
Selfridge, Marvin Minsky, and others. The Logic Theorist is considered the first
computer program able to imitate human problem-solving behavior, and therefore the
beginning of artificial intelligence. According to Simon and Newell, with the Logic
Theorist it became clear that the computer is not just a metaphor or analogy for the
brain:

“We are not talking of a crude analogy between the nervous system
and computer ›hardware‹. The inside of a computer does not look like
a brain any more than it looks like a missile when it is calculating its
trajectory. There is every reason to suppose that simple information
processes are performed by quite different mechanisms in computer and
brain. […] However, once we have devised mechanisms in a computer
for performing elementary information processes that appear very
similar to those performed by the brain (albeit the quite different
mechanisms at the next lower level), we can construct an explanation of
thinking in terms of these information processes that is equally valid for
a computer so programmed and for the brain.”12

According to Simon, both computers and human brains operate in a goal-oriented
way in their information processing, because they serve the adaptation of a system to its
outer environment. Hence a crucial distinction: the inner environment is represented by
a group of alternative, defined actions, while the outer environment is represented by
known or unknown parameters, just like the environments of human decision makers,
who never have 100 percent of the information about them. As an example, Simon
mentions the optimization of nutrition: which foods can guarantee a desired amount of
calories, taking both dietary guidelines and cost efficiency into account?13
While the inner environment is bounded by food prices and nutrition rates and
requirements, the system's relation to its outer environment can be optimized through
the cost-benefit function. Hypothetically there is an unlimited number of possible
foods to choose from, but the Logic Theorist reaches its goal fast by means of linear
programming. Obviously, planning a menu based on this kind of optimization will take
characteristics like the taste or sustainability of foods into account only if they are
counted in as parameters. The more parameters are taken into account, the longer the
calculation will take.
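Simon's nutrition example is a classic linear program. The toy sketch below, using scipy.optimize.linprog, shows what such a cost-benefit optimization looks like in code; the foods, prices, and nutritional figures are invented for illustration.

from scipy.optimize import linprog

costs = [2.0, 3.5, 1.0]        # price per unit: bread, milk, potatoes
calories = [250, 150, 100]     # calories per unit
protein = [8, 8, 2]            # grams of protein per unit

# linprog minimizes costs @ x subject to A_ub @ x <= b_ub, so the
# "at least" requirements are written with flipped signs.
result = linprog(
    c=costs,
    A_ub=[[-c for c in calories], [-p for p in protein]],
    b_ub=[-2000, -50],         # at least 2000 kcal and 50 g of protein
    bounds=[(0, None)] * 3,    # nonnegative quantities of each food
)
print(result.x, result.fun)    # cheapest menu and its cost

Adding taste or sustainability as further constraints simply adds rows to the matrix, which is precisely Simon's point: only what is parameterized gets optimized.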

11 Cf. Benjamin Seibel: Cybernetic Government. Informationstechnologie und
Regierungsrationalität von 1943-1970, Wiesbaden 2016, p. 185.
12 Newell and Simon: The Simulation of Human Thought (as in n. 9), p. 8.
13 Simon: The Sciences of the Artificial (as in n. 4), pp. 60-61.

The Logic Theorist is, like a human decision maker, programmed to discern
between inner and outer environment on a symbolic level, a fact which makes it
intelligent, because »intelligence is the work of symbol systems.«14
A programmed digital computer has the necessary and sufficient means not only
to crunch numbers but to interact with all kinds of thinkable symbols in an intelligent way.
The relation between brain and computer is therefore not a metaphorical one.
Cognition does not occur by means of calculation; it is calculation, according to Simon.
With computer simulations, the computer stops being a metaphor for the brain, because
it demonstrates how computers can produce human behavior.15 A computer simulation
of thinking thinks, Simon writes in an almost Heideggerian tone, because computer and
brain work with the same material, namely symbols.16 Thinking, unlike other actions
like digestion, occurs in the form of an environment-oriented optimization via symbol
processing. In this understanding, the discourse on human rationality is freed from all
the substance ontology that it has carried since René Descartes, and the goal from now
on is to produce rationality as an organizational function through the design of symbol-
processing machines, almost as Schopenhauer thought about it: not in terms of
philosophical world description, but still as »Abspiegelung der Welt in abstrakten
Begriffen« (a mirroring of the world in abstract concepts), as computer technology.
Before working for the RAND Corporation in Santa Monica, Simon had studied
mathematical decision and business theories as a political scientist, the so-called
Operations Analysis. With his 1947 study Administrative Behavior, he laid the
foundations for a behavioral-economic critique of the classic model of the homo
economicus by demonstrating that human actors in larger organizations and
administrations act rationally only to a certain degree.17 The all-knowing, profit-oriented,
rational businessman of (neo)classical economics now appeared to be nothing but
an idealization that no longer had anything in common with the reality of the various
actors in modern organizations. The homo economicus, according to Simon, was nothing
more than »the idealization of human rationality enshrined in modern economic
theories«.18
Human behavior therefore is not determined by rationality, but keeps a certain
flexibility in order to cope with a complex environment, of which it has only partial
knowledge.19 With his decision theory of bounded rationality, for which he received
the Nobel Prize in Economics in 1978, Simon contributed significantly to the entry of
psychological theories and factors into the realm of management theory and economic
models. But economics itself was not the goal of Simon's research; it appears to have
been just the perfect environment in which to study the bounded rationality of human
behavior. And artificial intelligence was a promising way to optimize a rationality that
depends on the interaction between inner and outer environment. From the beginning,
the theory of bounded rationality wanted to be more than mere economic analysis or
theory; it wanted to be a new way of governing in the form of design and programming.

14 Herbert A. Simon: The Sciences of the Artificial, Cambridge, MA, 3rd ed., 1996, p. 23.
15 Cf. Roberto Cordeschi: Steps Toward the Synthetic Method, in: Philip Husbands, Owen
Holland and Michael Wheeler (eds.): The Mechanical Mind in History, Boston 2008, pp. 219-258,
here p. 231.
16 Cf. Herbert A. Simon: Machine as Mind, in: Peter Millican and Andy Clark (eds.): Android
Epistemology, Cambridge 1995, p. 24. On the idea of making artificial intelligence more
Heideggerian, cf. Hubert L. Dreyfus: Why Heideggerian AI Failed and How Fixing It Would
Require Making It More Heideggerian, in: Artificial Intelligence 171/18 (2007), pp. 1137-1160.
17 Cf. Herbert A. Simon: Administrative Behavior, New York, 2nd ed., 1957.
18 Simon: The Sciences of the Artificial (as in n. 14), p. 23.
19 Cf. Simon: Administrative Behavior (as in n. 17), p. 241.
According to Benjamin Seibel, the decision and game theories developed at
RAND were at the heart of the neoliberal transformation of statehood under Ronald
Reagan.20 In Seibel's analysis of cybernetic governance, this political technology is
motivated by the desire to de-subjectify political sovereignty through mathematical
procedures. I would like to add that it is not just mathematical procedures: the
cybernetic vision of automating civil governance processes meets with a behavioral
design strategy.21 Simon, who served as a political advisor under Lyndon B. Johnson and
Richard Nixon, does not address politicians and managers but a new type of engineer-
designer, whom he sees as needing to learn economic cost-benefit analysis.22 The
heuristic of the Logic Theorist, to run through possible alternatives until a satisfying
solution is found, was to be applied whenever an optimum solution was not attainable.
Simon calls this problem-solving heuristic ›satisficing‹, and it serves as an alternative to
rational decision theories: »Decision makers can satisfice either by finding optimum
solutions for a simplified world, or by finding satisfactory solutions for a more realistic
world.«23
When rationality, according to Simon, is not primarily an exclusive quality of
human reason but rather one of inner organization in relation to an environment, then
it depends directly on the system's design. ›Satisficing‹ can be called the first artificial
intelligence programming heuristic, as well as a general design and governing maxim.
John von Neumann had reformulated the problem of how to design a hydrogen
bomb in such a way that it could be simulated on the architecture of the ENIAC at the
University of Pennsylvania.24 Simon and Newell reformulated the problem of human
intelligence and decision making in such a way that it could run on the architecture of
the JOHNNIAC at RAND. According to Seibel, the governmental orientation of US
neoliberalism was not just a political reaction against the social welfare reforms under
Lyndon B. Johnson, as Foucault figured, but rather the result of a computer-
technological modelling and governing that had to deal with the complex global political
situation of the Cold War, in which traditional decision makers could not be trusted to
reach objective decisions. With Simon pointing out the limits of capacity and complexity,
the problem of decision-making shifts towards the design of conditions which could
guarantee decidability in the face of limited resources.25

20 Seibel: Cybernetic Government (as in n. 11), pp. 202-203.
21 Seibel: Cybernetic Government (as in n. 11), p. 201. See also Jeannie Moser and
Christina Vagt (eds.): Verhaltensdesign. Ästhetische und technologische Programme der 1960er
und 1970er Jahre, Bielefeld 2018.
22 Cf. Simon: The Sciences of the Artificial (as in n. 4), p. 70.
23 Herbert A. Simon: Rational Decision-Making in Business Organizations, in: Assar Lindbeck
(ed.): Nobel Lectures, Economics 1969-1980, Singapore 1992, pp. 343-371, here p. 350.
24 Cf. Peter Louis Galison: Computer Simulations and the Trading Zone, in: Gabriele
Gramelsberger (ed.): From Science to Computational Science, Zürich/Berlin 2011, pp. 118-157.
The cost-benefit calculus forms the core of the behavioral-economic governing
programs that thoroughly submit everything to economic analysis, even the non-
economic. As Foucault states in his analysis of US neoliberalism, the behavioral
economists at the University of Chicago had already developed during the 1930s the
methodology for calculating the cost-benefit ratio of everything; after that, even
something like racism could be reformulated as a problem of supply and demand, and its
economic cost could be expressed in dollars. According to Foucault, the behavioral
economics of Gary Becker and others epitomizes biopolitics, a specific form of power
that aims at the control of a population through the governance of normalizing statistics:
make live and let die.26
Behavioral economics, together with new techniques of artificial intelligence,
forms a new experimental field of political technologies in which decision making, ergo
what was formerly called the intellect, is unhinged from its subjective and qualitative
context in order to be outsourced to economic-technological systems. According to
behavioral economics, there is no need for a homo economicus anymore, because
rationality is to be found not in humans but in the overriding organizational structures.
Artificial intelligence as it was developed and described by Simon and others in the
aftermath of World War II has been administered in policy making as well as in
corporate management ever since, an expression of this behavioral design shift of the
political itself.

3. Supercritical Machines and Subcritical Minds

The already mentioned text by Bruno Latour, Why Has Critique Run Out of Steam?,
was first published in 2004 and is clearly affected by the U.S. presidency of George W.
Bush and his "war against terror" in the aftermath of 9/11. It is saturated with a mistrust
of human reason in general, and of the humanistic theories of the 20th and 21st centuries
in particular. Latour attempts to reformulate the Kantian problem of criticism in the
(new) language of artificial intelligence. With a reference to Alan Turing's Computing
Machinery and Intelligence, Latour ends his essay just like Turing ended his, with the
"surprising result that we don't master what we, ourselves, have fabricated, the object
of this definition of critique".27 In Latour's text, the lack of empiricism, and in its wake
the facticity critique of modern and postmodern theory, is the main factor in the
feebleness of humanistic criticism in the times of homeland security, while Turing in
1950 simply asks, in the face of nuclear weapons technology, whether something like a
"critical mass" exists in the context of human theory production.

25 »In der quantitativen Übersetzung trat das Regieren als ökonomische Tätigkeit hervor, deren
Resultate in einem Kosten-Nutzen-Kalkül evaluiert werden konnten.« ("In the quantitative
translation, governing emerged as an economic activity whose results could be evaluated in a
cost-benefit calculus.") (Seibel: Cybernetic Government (as in n. 11), p. 203.)
26 Cf. Michel Foucault: Geschichte der Gouvernementalität II. Die Geburt der Biopolitik, ed.
Michel Sennelart, Frankfurt am Main 2004, pp. 300-330.
27 Bruno Latour, "Why Has Critique Run Out of Steam?", Critical Inquiry, Winter 2004, p. 347.
»Is there a corresponding phenomenon for minds, and is there one for machines?
There does seem to be one for the human mind. The majority of them seem to be ›sub-
critical,‹ i.e. to correspond in this analogy to piles of sub-critical size. An idea presented
to such a mind will on average give rise to less than one idea in reply. A smallish
proportion are super-critical. An idea presented to such a mind may give rise to a
whole ›theory‹ consisting of secondary, tertiary and more remote ideas. Animals’ minds
seem to be very definitely sub-critical. Adhering to this analogy we ask, ›Can a machine
be made to be super-critical?‹«.28
When Latour’s text was translated from English to German, a momentous error
occurred. The crucial sentence “a smallish proportion are super-critical” was forgotten,
so that the German translation now states that the sub-critical minds (and machines)
give rise to super-critical theories.29
All philological persnicketiness aside, this editorial error demonstrates once more
how the space of the symbolic and intelligible is governed by chain reactions of
signifiers, and not by supposedly stable relationships between signs and things, or even
meaning; a perception prevailing not only among the criticized postmodern theorists
but, as I have tried to show, also at the heart of behavioral economics and
governmentalities and of the beginnings of artificial intelligence. According to Simon, the
condition of possibility for a ›satisficing‹ organization of intelligent systems in complex
environments is the ability to make significant decisions. Facticity, on the other hand,
occurs on a different ontological level, because it is bound to sociality and its norms,
and to the symbolic organizational structure. And because of this inherent social fabric
of facticity, it will always be subject to metonymical displacements and
communication noise.
Recently, artificial intelligence has ascended from an almost esoteric research
project and relic of the Cold War to a billion-dollar business under the new names of
machine learning and smart technologies. In the meantime, nationalist and racist
movements are driving politics in Europe and the United States, while in Japan the first
artificial intelligence has run for office in a political campaign. In the face of the actual
political situation in Europe and the United States, with the resurrection of
ethnonationalist movements that had not reached a ›critical mass‹ since the 1930s,
theoretical navel-gazing about the lacking morals and facticity of postmodernism is in
danger of descending into sub-critical spheres. Once political decision making is
completely reduced to the economic cost-benefit calculus, the political as a zone of
conflict and negotiation runs the danger of being eradicated, or of being reduced to the
production of affects. The misery of criticism does not lie in the assumed postmodern
lack of empiricism, but in the helplessness of intelligent systems that are confronted
with a political madness operating within a completely rational cost-benefit paradigm.

28 Turing, quoted in Bruno Latour: Why Has Critique Run Out of Steam? From Matters of Fact to
Matters of Concern, in: Critical Inquiry 30/2 (2004), pp. 225-248, here p. 248.
29 „Der Verstand der meisten Menschen scheint ‚unkritisch' zu sein, d.h. er entspricht bei dieser
Analogie den Reaktoren unterkritischer Größe. Eine einem solchen Verstand mitgeteilte Idee ruft
eine ganze ‚Theorie' hervor, bestehend aus sekundären, tertiären und noch fernerliegenden
Ideen." ("The minds of most people seem to be 'uncritical,' i.e., in this analogy they correspond to
reactors of sub-critical size. An idea presented to such a mind calls forth a whole 'theory'
consisting of secondary, tertiary, and more remote ideas.") (Turing as quoted by Latour in the
German translation, p. 59.)

Why Does Artificial General Intelligence Need the Husserlian Notion of "Intentionality"?

Xu Yingjin

Artificial intelligence has to have intentionality, because mind requires intentionality, on the premise that it has the capacity to revise its beliefs in accordance with environmental changes. However, the mainstream Anglophone philosophical theories of intentionality cannot "illuminate" the problems of Artificial General Intelligence (AGI): these mainstream approaches either appeal to external environmental factors and thereby fail to reach internal modes, or are unable to handle the gradual transitions between different cognitive states. The needed theory of intentionality must therefore be able to suspend mental contents from judgments about the external world, and to view psychological modes as objects permitting gradual mutual transitions among them. These two desiderata lead naturally to Husserl's "phenomenological epoché" and to an inferentialist interpretation of his notion of "noema", both of which can themselves be given an algorithmic explication through the Non-Axiomatic Reasoning System (NARS).

Xu Yingjin

Why Is the Husserlian Notion of "Intentionality" Needed by Artificial General Intelligence?

1. Introduction

Although there is a sizable body of literature at the intersection of phenomenology
and cognitive science, there are not many studies intended to clarify the relationship
between Edmund Husserl, the founding figure of the entire phenomenological
movement, and Artificial General Intelligence (AGI), which bears affinities with
cognitive science in many respects. The main motivation for marginalizing Husserl in
the circle of the so-called "naturalized phenomenologists" seems to be based on the
following syllogism:

• The most promising way to build the alliance between phenomenology and
cognitive science or AGI is to appeal to notions like "embodiment", "embeddedness",
"extendedness" and "enactedness", summarized by Mark Rowlands as "4E-ism", and
none of these notions can be well treated in the framework of symbolic AI or
"good-old-fashioned AI" (abbreviated as "GOFAI", as coined by John Haugeland).

• Husserl's notion of "noema", according to Hubert Dreyfus, is a philosophical
equivalent of AI scientist Marvin Minsky's notion of "frame" (since both include a
pre-fixed data structure for symbolically representing a stereotyped situation) and hence
belongs to the tradition of GOFAI.

• Therefore, Husserl's legacy concerning the nature of intentionality is not
illuminating enough for a naturalized phenomenologist today.

However, besides the controversy involved in the first premise, which we will address
in section 3, at least the second premise of this argument is doubtful, since there is a
relatively new tendency to interpret the Husserlian notion of "noema" not in terms
of Minskian frames or Fregean "senses" but by virtue of Robert Brandom's
inferentialism, and it is this reading that attributes more dynamic features to Husserl's
theory of intentionality (more on this in section 5). Therefore, mainstream naturalized
phenomenologists' marginalization of Husserl (which is in sharp contrast with their
preference for Heidegger and Merleau-Ponty) is not warranted.

But the preceding claim does not itself imply that the relevance of Husserl to AGI
is self-evident. Revealing this relevance requires some further arguments, which this
article is supposed to provide. To be more specific, these arguments are supposed to
support the following sub-claims, which constitute the route-map of this research:

• Intentionality is required by any intelligent system, no matter whether it is
artificial or natural, given that intelligence requires intentionality-presupposing
capacities for revising beliefs in accordance with environmental changes.

• The mainstream externalist treatment of mental contents (one component of
intentionality) is to appeal to the correlation between them and external factors, but
this approach is not beneficial to the modelling of intentionality, in the sense that
directly modelling external factors is not feasible for any AI/AGI system.

• The mainstream externalist treatment of psychological modes (the other
component of intentionality) is to view them metaphorically as "boxes" which apply
different algorithms to the contents placed within them, but this treatment is not
beneficial to the modelling of intentionality either, in the sense that it assumes
discreteness among the different modes and hence goes against the intuition that there
are gradual transitions from one mode to another.

• Hence, the needed theory of intentionality has to view mental contents as
something which can be technically detached from external reality on the one hand,
and view psychological modes as something permitting gradual mutual
transformations among them on the other. These two requirements will naturally lead
us to the Husserlian notions of "phenomenological epoché" and "noema", both of
which are expected to be algorithmically reconstructed.

The main purpose of this research is not only to persuade naturalism-oriented
AI/AGI researchers to acknowledge the value of Husserl's phenomenology, but also to
reconstruct Husserl's phenomenology from a new perspective, namely, a perspective
different from mainstream naturalized phenomenology in keeping its distance from
4E-ism. Explorations in this direction will hopefully rescue Husserl's reputation from
the shadows of Heidegger and Merleau-Ponty, who have long been favored by
mainstream naturalized phenomenologists.

2. Intentionality is required by intelligence

Here we will sidestep the complicated problem of how to strictly define the term
"intelligence" and begin with a simpler question: given that no reasoning system can
reach practically useful conclusions without premises encoding empirical contents,
and that prejudices are usually (albeit perhaps not inevitably) involved in these
premises, what kind of reasoning machine do we need to build if it is supposed to bear
the mark of "intelligence"? Prima facie we have four options on the table:

Option 1: To build a system which reasons with premises which are all true and
is capable of revising its beliefs in accordance with new environmental changes.

Option 2: To build a system which reasons with premises involving prejudices
and is capable of revising its beliefs in accordance with new environmental changes.

Option 3: To build a system which reasons with premises involving prejudices
and is not capable of revising its beliefs in accordance with new environmental changes.

Option 4: To build a system which reasons with premises which are all true and
is not capable of revising its beliefs in accordance with new environmental changes.

Option 1 is quite weird in the sense that it looks unnecessary for a system to revise
its beliefs if its starting premises are all true. Surely the set of all true premises of a
system could be fairly small, so that it would still be necessary for such a system to
enlarge the scope of its true beliefs in order to be more adaptive to the environment.
But to include more new true beliefs does not mean that the older ones have to be
revised, unless they can be proven untrue. Thus, option 1 remains weird. Option 3 is
weird too, since it is not practically useful to build a machine which can only transfer
falsities from premises to conclusions, rather than a machine which can automatically
recognize falsities and separate them from truths. As to option 4, it is theoretically a bit
more acceptable than options 1 and 3, since a system with no false starting premises
would theoretically require no revisions of its beliefs. But it is still practically too
challenging to build such a system, given that no programmer, who can be anyone but
an omniscient being, can guarantee that all premises that she feeds into the system will
never be proven untrue in the future, unless the premises in question encode only trivial
truths and hence have no potentially interesting implications. Hence, only one option,
namely option 2, is left on the table. That is to say, any intelligent system, whether
artificial or natural, has to be able to revise its initial beliefs, some of which will be
proven untrue.
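Since the abstract names the Non-Axiomatic Reasoning System (NARS) as one algorithmic vehicle for this kind of revisable reasoning, a minimal sketch of belief revision in that spirit may be helpful. It assumes the standard NARS truth-value arithmetic, in which a belief carries a frequency f and a confidence c = w/(w+k) for evidence weight w and evidential horizon k; the particular beliefs below are invented for illustration.

K = 1.0  # evidential horizon (a NARS convention)

def to_evidence(f, c):
    """Convert a (frequency, confidence) truth value to evidence weights."""
    w = K * c / (1.0 - c)       # total evidence
    return f * w, w             # positive evidence, total evidence

def revise(tv1, tv2):
    """Merge two judgments about the same statement by pooling evidence."""
    wp1, w1 = to_evidence(*tv1)
    wp2, w2 = to_evidence(*tv2)
    wp, w = wp1 + wp2, w1 + w2
    return wp / w, w / (w + K)  # back to (frequency, confidence)

# The system starts with the prejudiced premise "ravens are black", then
# meets strong counter-evidence; the belief is revised, not discarded.
belief = (0.9, 0.5)
observation = (0.0, 0.9)
print(revise(belief, observation))  # frequency drops, confidence rises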

And it is this option that makes the modelling of intentionality an indispensable
part of the modelling of any artificial agent, if that agent is supposed to be minimally
intelligent. Here is the argument for saying so:

(1) The design of an artificial intelligent system has to assume that it is capable
of revising its stored beliefs (as option 2 tells).

(2) The revisions of beliefs have to be accompanied by changes of psychological
modes, for instance, changes from a mental state of believing p to that of suspecting p
and further to that of disbelieving p, etc. (as a folk psychologist would predict).

(3) Intentionality is usually construed as a mental capacity that makes the agent
directed at "something", no matter whether this "something" exists in the physical
world (a commonsensical view of what intentionality is).

(4) Hence, intentionality is composed of both the manners of directing the agent
and the "something" to be directed at. Or, put the other way around, it is composed of
psychological modes and mental contents.

(5) Although it is a bit hard to judge whether the existence of mental contents
conceptually assumes the existence of corresponding psychological modes (like
"belief" or "desire"), the contrary case should hold; that is to say, the existence of
psychological modes has to be based on corresponding mental contents, which do
constitute the core part of intentionality (given that higher-order mental properties have
to be based on first-order properties, although not necessarily vice versa).

(6) Hence, from (2) and (5), it can be inferred that the requirement of a variety of
psychological modes will eventually lead to the modelling of full-fledged intentionality
in artificial systems.

(7) Therefore, from (1) and (6), it can be inferred that the requirement of
intelligence will eventually lead to the modelling of full-fledged intentionality in
artificial systems.

We believe that this argument, which is sound, can make any reasonable AGI
scientist seriously consider the problem of modelling intentionality, no matter whether
the term "intentionality" has to be construed in a Husserlian manner. However, some
readers may still ask: if the modelling of intentionality is so urgent for the design of any
intelligent system, why do most AI scientists seem dismissive of this issue?

The answer is fairly simple: they are mostly AI scientists rather than AGI scientists; or, put another way, most AI systems that they build are too task-specific to satisfy the general requirement of option 2. Actually, these systems are merely intended to satisfy option 4, according to which the premises fed into the system are at least supposed to be all true. An exemplary case footnoting this point is Edward Feigenbaum’s expert system (which is fairly representative of GOFAI), namely, a system usually designed to emulate the decision-making processes of human experts in a certain domain of knowledge. Such a system is routinely composed of a knowledge base, which represents empirical states of affairs that are supposed to be facts, and an inference engine, which applies inference rules to given “facts” to yield new “facts”. But such a system can work well only when the “facts” stored in its knowledge base encode genuine facts and hence are immune to further revisions, and this condition itself is hard to satisfy, since progress in any domain of human scientific inquiry will routinely force human experts to update what they once believed, whereas it is technically challenging to make an expert system automatically update its knowledge base as a human expert would do with much less effort. Surely an AI scientist may try to design an expert system which literally has the capacity of automatically acquiring genuine knowledge from a large body of information including falsities, but this move is tantamount to the adoption of option 2, which eventually leads such a designer to the modelling of intentionality, as the preceding seven-step argument predicts.

Advocates of connectionism may wonder why option 2 is also compelling for connectionists, given that the artificial neural networks that connectionists appeal to do not directly encode mental contents on the symbolic level and hence seem irrelevant to any option from 1 to 4. But it is noteworthy that the mapping relationships between the training data fed into a typical neural network and the ideal outputs of the whole network are still analogous to the “knowledge base” of an expert system, in the sense that they still crystallize knowledge of how a human programmer determines what type of inputs have to be mapped onto what type of outputs. Thus, analogously to an expert system, a neural network still needs to revise these mapping relationships when a human programmer finds it practically necessary to do so. However, again like a typical expert system which cannot automatically update its knowledge base, a typical neural network, once trained to be adaptive to a certain type of mapping relationship, is hard to adapt to a new relationship as well. Therefore, in order to be more intelligent, even a neural network needs to exhibit intentionality by taking option 2 seriously.
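The point can be illustrated with a toy network (a minimal sketch of our own, with arbitrary data; no particular connectionist architecture is intended):

```python
import numpy as np

# A tiny one-layer network trained on a fixed input->output mapping.
# Hypothetical toy data: the learned weights "crystallize" the mapping
# chosen by the programmer, and a changed mapping requires retraining.

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))        # inputs
W_true = rng.standard_normal((4, 2))     # the mapping chosen "by the programmer"
Y = X @ W_true                           # ideal outputs under that mapping

W = np.zeros((4, 2))                     # the network's weights
for _ in range(500):                     # plain gradient descent
    grad = X.T @ (X @ W - Y) / len(X)
    W -= 0.1 * grad
print("error on old mapping:", np.abs(X @ W - Y).mean())      # near 0: well adapted

# Now the "world" changes: the correct mapping is revised.
Y_new = X @ (W_true + 1.0)               # a shifted mapping
print("error on new mapping:", np.abs(X @ W - Y_new).mean())  # large error:
# the network cannot revise itself; only a fresh round of training
# (i.e., the human programmer intervening) restores adequacy.
```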

Now philosophers should do their job by providing a plausible theory of intentionality to guide its modelling, given that the abstractness of the term “intentionality” itself can only be philosophically construed. However, not all philosophical theories of intentionality are suitable to guide AGI research. Against many readers’ intuition that analytic philosophy bears more affinities with AI/AGI than continental philosophy does, we will immediately argue that the notion of intentionality provided by mainstream analytic philosophy is less preferable than its counterpart in Husserl’s phenomenology.

3. Mental contents cannot be treated externalistically in AGI/AI

As aforementioned, besides psychological modes, the core part of intentionality is mental content, and in this sense the problem of intentionality-with-a-t is also correlated with that of intensionality-with-an-s and hence relevant to semantic considerations. For any reader sympathizing with the tradition from Brentano to Husserl, it looks natural to view the existence of mental content as “inexistence”, namely, a mode of existence which has to be confined within one’s internal mental life and hence is not directly relevant to external reality. By contrast, the mainstream Anglophone treatment of mental content is of an externalistic flavor, especially after Hilary Putnam’s twin-earth case became the standard thought experiment framing the debate between semantic internalism and externalism. However, formally entering this four-decade-long debate is not on the agenda of this research; rather, what is more relevant to our basic concern is which side of the debate looks more acceptable from the perspective of AI/AGI. And our conclusion is that internalism has to be preferred, since externalism cannot be compatible with any conceivable form of practice in AI/AGI. Here goes the argument:

(1) The formal framework of semantic externalism is two-dimensional semantics, by which the external dimension of meaning has to be detached from its internal dimension. To be more specific, such a semantics allows one to distinguish the primary intension from the secondary one: the primary intension is the method by which the agent attempts to pick out her desired object in a cross-worldly manner and to which she has epistemic access, whereas the secondary intension is the information embedded in the external object which she actually picks out in a certain possible world by using certain linguistic tools but to which she may have no epistemic access (see the schematic sketch after this list).

(2) Hence, two-dimensionalism has assumed that there is an omniscient being’s perspective from which the secondary/external intension can be presented, e.g., a perspective that allows one to refer to the chemical composition of water even when modern chemistry is entirely outside the mind of the agent in question.

(3) From (1) & (2), it can be inferred that any attempt to model intentionality in accordance with externalism has to encode the secondary intension from an omniscient being’s perspective.

(4) However, it is not feasible for any AI system to present an omniscient being’s perspective, given that the knowledge of AI is ultimately derived from human beings, who are not omniscient.

(5) Hence, from (4) & (3), it can be deduced that semantic externalism cannot provide a feasible framework for AI.

(6) Therefore, semantic internalism is more appealing than externalism for AI, given that internalism and externalism have exhausted the logical space of semantic constructions.
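For readers who want the apparatus of step (1) in compact form, the two intensions can be rendered schematically as follows (a simplified sketch of the standard two-dimensionalist picture, not a quotation from any particular formulation):

```latex
% Schematic two-dimensionalist intensions for a term t.
% Primary intension: defined over worlds considered as actual,
% epistemically accessible to the agent herself.
% Secondary intension: defined over worlds considered as counterfactual,
% fixed by facts in the actual world @, to which the agent may have
% no epistemic access.
\[
\mathrm{pri}(t):\ w \longmapsto \text{whatever plays the $t$-role in } w
\]
\[
\mathrm{sec}_{@}(t):\ w \longmapsto \text{whatever in } w \text{ is identical
to the actual $t$-role player in } @
\]
% Example: pri("water") picks out the watery stuff of each world, while
% sec_@("water") picks out H2O in every world, since H2O happens to play
% the water-role in the actual world.
```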

Some readers may doubt the acceptability of step (2) by denying the necessity of introducing an omniscient being’s perspective for fixing the secondary intension. They may contend that a higher-level ascriber who knows more than the agent in question may suffice for ascribing the secondary intension to the target representation. But the question is: how much more does such an ascriber need to know? Advocates of two-dimensionalism simply cannot say that “the ascriber only needs to know that the chemical composition of water is H2O” in the twin-earth case, since it would be too ad hoc to explain why this ascriber is so lucky as to have acquired just the right piece of knowledge, among others, for picking out the right sort of secondary intension in just this case. Given that luck routinely undermines the reliability of ascribing the secondary intension, luck has to be precluded from such processes, and the best way to preclude it is to appeal to an idealized ascriber who delivers semantic knowledge steadily and reliably. Obviously only an omniscient being can perfectly satisfy this condition, whereas no artificial system can simulate such a being.

Some readers may also doubt the acceptability of step (4). Although for GOFAI, as they may contend, it looks necessary to deliberately avoid introducing an omniscient being’s perspective by constructing “micro-worlds” (namely, partial representations of worlds which can be processed by a certain configuration of computing machinery), GOFAI is not the only game in town. It seems that both connectionist and enactivist systems are irrelevant to the problem posed by step (4), since they avoid building such micro-worlds.

But we don’t think so. Actually, even in a connectionist system, it still makes sense
to view “neuronal activation space” as another form of micro-worlds, although
elements of these worlds are points, regions, or trajectories rather than symbols in their
GOFAI-counterparts. Moreover, according to AI scientist Ian Goodfellow et al., in a
deep learning system (which is an updated form of connectionism), increasing amounts
of raw data equivalent to fragments of certain mirco-worlds do go hand in hand with
the increasing complexity of the micro-world-building mechanisms. Hence, just like
GOFAI, even in connectionism, there is no place for an omniscient being who is not
constrained by any micro-world-building mechanism either.

There is no such being in any enactivist system either. Enactivism is a trend of thought in both the philosophy of cognitive science and AGI/AI which claims that cognition arises as the result of interplays between an acting organism and environmental factors. One of the philosophical doctrines of enactivism is formulated in terms of AI scientist Rodney Brooks’ “physical grounding hypothesis”, according to which, to build an intelligent system, it is necessary to have its representations grounded in the physical world. But our philosophical worry about this remark is: is it really possible for any cognitive system to be connected to the “physical world” without the mediating role of a certain micro-world which is epistemologically accessible to the system in question? We don’t think so, and we even believe that Brooks’ own following comment cannot conceptually preclude such a mediating micro-world:

The key observation is that the world is its own best model. It is
always exactly up to date. It always contains every detail there is to be
known. The trick is to sense it appropriately and often enough…. To
build a system based on the physical grounding hypothesis it is
necessary to connect it to the world via a set of sensors and actuators.
Typed input and output are no longer of interest. They are not physically
grounded.

This observation is self-defeating because the term “physical grounding” seems to indicate an identity between the external world which “contains every detail to be known” and the world perceived by “a set of sensors and actuators”, whereas actually they cannot be the same. The pictures taken by one sensor, say, a digital camera simulating the operation of the compound eyes of a dragonfly, will differ from those taken by another sensor, say, a camera simulating the operation of the eyes of an owl, and the different visual inputs of the two sensors are themselves subject to the different constructing rules of different micro-worlds, none of which is unbiased towards physical reality. Hence, Brooks’ “physical grounding hypothesis” at most implies that language-like representations of the external world are unnecessary, rather than that any somehow biased presentation of the external world is unnecessary. And this implication is definitely not powerful enough to introduce an omniscient (and hence entirely unbiased) point of view on the physical world.

Another representative enactivism-inspired AI research deserving mention is provided by Randall D. Beer, who attempts to build a framework in which an agent and its environment are modeled as two coupled dynamical systems whose mutual interaction is in general jointly responsible for the agent’s behavior. But the epistemological problem involved here is still salient: how could a programmer model the external environment of the agent in a perspective-free manner? Actually there is no way to do so, and Beer’s own design of an artificial agent simulating insect-like walking is likewise based on the construction of a continuous-time recurrent neural network, which can perceive the external environment only in accordance with what its internal structure allows it to perceive. Hence, there is no omniscient view of reality involved even in Beer’s enactivist model.
Here we simply have no space to comment on all enactivism-inspired AI research. But the philosophical problem that such projects face is basically the same. They all assume that there is “information” stored in the external environment and that either the mental representations or the perceptions of agents can be modeled as teleologically oriented towards this “information”. The more abstract form of this assumption is a teleological account of the information processing of agents, proposed by mainstream Anglophone philosophers like Fred Dretske and Ruth Millikan and developed by Karen Neander. To be more specific, according to Neander, a representation R has the content C if the subject has the function of producing R-type representations in response to C-type environmental factors. This definition patently attempts to introduce an omniscient being’s view, in the sense that it allows “C-type environmental factors” to be formulated not from the subject’s perspective. However, even if this teleosemantic account were philosophically plausible, when algorithmically realized it would still have to appeal to internalism, because a subject-independent encoding of “C-type environmental factors” presupposes a further encoding perspective in which these factors have to be emplaced in accordance with a certain format, and thereby “internalized” on a deeper level. To put it a bit more formally, although the computing language Le for representing environmental factors outside the agent may be superficially different from the language Lr for presenting representations of the agent, Le has to be expressive enough to make all Lr-expressions translatable into their Le-equivalents in order to maintain the uniformity of the entire computing platform. The resulting matryoshka-like structure of this “world” will still assume an underlying internalizing perspective.

Therefore, semantic internalism has to be assumed for modeling intentionality.

4. Psychological modes, directions of fit, and the box-approach

Another critical component of typical intentionality is the psychological mode. If intentionality can be construed as any mental state which is essentially or at least potentially directed at anything that can be mentally presented, then different psychological modes can accordingly be viewed as different pathways through which the agent in question directs herself at her mental target. John Searle gives a long but still incomplete list of these modes in his widely cited study Intentionality, including belief, fear, hope, desire, love, hate, aversion, liking, disliking, doubting, wondering whether, joy, elation, depression, anxiety, pride, remorse, etc. However, as the analysis in this section will immediately show, the treatments of psychological modes proposed by mainstream Anglophone philosophers like John Searle and Jerry Fodor are unsatisfactory even for purely philosophical reasons, let alone for AGI-based considerations.
We will start with Searle. His characterization of psychological modes is based on the notion of “direction of fit”. To be more specific, according to Searle, modes like belief have a “mind-to-world” direction of fit, in the sense that “it is the responsibility of the belief, so to speak, to match the world, and where the match fails I repair the situation by changing the belief”. By contrast, modes like desire have a “world-to-mind” direction of fit, in the sense that it is the responsibility of the world to match the desire, and when the world fails to do so, “I cannot fix things up by saying it was a mistaken intention…Desires and intentions…cannot be true or false, but can be complied with, fulfilled, or carried out…”

But we don’t think that a theory of psychological modes based on “directions of


fit” is untenable. Firstly, Searle’s description of directions of fit cannot be always fitting
our linguistic intuitions in ordinary discourses. For instance, it looks intuitively
unacceptable to say that “the world has to take its responsibility” if one’s desire cannot
be fulfilled, when the content of such desire is utterly ridiculous, e.g., a desire that “I
want to be landing on the sun.” (Hereafter I will simply call this desire as the “sun-
desire”). Obviously there is nothing wrong for the sun, which has no free choice, to be
a huge sphere of hot plasma which makes any attempt to land on it unrealizable, and in
this case, against Searle’s suggestion, the speaker in question has to take the
responsibility of having such a ridiculous sun-desire.

Secondly, it may be implausible to attribute responsibility to the world even when intentions with non-ridiculous contents cannot be fulfilled. For instance, if Tom fails to fulfil his desire of having a cup of Japanese tea by doing X, the whole situation can be more naturally interpreted as a failure based on his wrong belief, say, that “I can have a cup of Japanese tea by doing X”, and this interpretation quickly transfers the target of responsibility-attribution back to the agent. This pattern of analysis can even be applied to those evaluative attributes intended to replace truth-values in Searle’s list, such as “fulfilment” or “being carried out”, etc.

More generally, the failure to fulfil a desire with content p can be analyzed as a compound of three internal components: (1) the agent recalls that she believed that p would happen if she did X; (2) she recalls that she did X; (3) she observes that p does not happen. Surely the responsibility for not being able to make p happen has to be attributed to the agent again if either of two cases occurs: (a) the belief that doing X would cause p to happen is false; (b) the agent did not successfully complete the task X although she still believes that she did. In both cases the world itself remains innocent.

The moral of our analysis of Searle’s treatment of psychological modes is that the desire/belief distinction cannot be treated in terms of directions of fit, which assume that these modes are based on relationships between mental entities and external entities (otherwise it would make no sense for him to talk about the direction of either “mind-to-world” or “world-to-mind”). Moreover, even seemingly world-oriented actions like “carrying out X” can be viewed as based on (although perhaps not reducible to) internal states, and hence as still more relevant to the agent’s internal mental life. This perspective-based analysis of psychological modes is perfectly compatible with the internalist treatment of mental contents proposed in the last section, whereas Searle’s perspective-free view conflicts with it. Hence, if the conclusion of the last section is sound, Searle’s treatment of direction of fit cannot be acceptable.

Compared with Searle’s, Jerry Fodor’s treatment of psychological modes, which is part of his Language of Thought Hypothesis (LOTH), is more internalism-oriented. According to LOTH, thinking is a process in which mental representations are “tokened” by some lexicon-like mental entities with the aid of a combinatorial syntax which gives these items an appropriate structure. Since the rules guiding the operations of this syntax are determined by the internal features of the cognitive architecture rather than by external environmental factors, on the LOT-level Fodor is not so interested in “whether what the oracles write is true; whether, for example, they really are transducers faithfully mirroring the state of the environment, or merely the output end of a typewriter manipulated by a Cartesian demon bent on deceiving the machine”. Hence, LOT has a minimal internalist flavor compared with typical teleosemantic accounts of mental contents. And this feature is also inherited by Fodor’s following account of psychological modes (or “propositional attitudes”, in his terms):

LOT says that propositional attitudes are relations between minds and mental representations that express the contents of the attitudes. The motto is something like: ‘For Peter to believe that lead sinks is for him to have a Mentalese expression that means lead sinks in his ‘‘belief box’’’. Now, propositional-attitude types can have as many tokens as you like. I can think lead sinks today, and I can think that very thought again tomorrow. LOT requires that tokens of a Mentalese expression that mean lead sinks are in my belief box both times….

Psychological modes, in this narrative, are metaphorically viewed as “boxes”, in each type of which a certain combination of tokens tokening a certain mental content is emplaced so as to constitute full-fledged intentionality. In addition, each type of “box” instantiates a specific type of syntactic rules that the contents emplaced in it have to follow. Although Fodor is not interested in characterizing the differences among different types of “boxes” (the only type of “box” other than the “belief box” that he mentions is the “intention box”), he has to assume that the demarcation line between one “box” and another can be explicitly drawn, otherwise it would make no sense to talk about “having a Mentalese expression in the belief box”. Hereafter we will call this treatment of psychological modes the “box-approach”.

However, though Fodor’s box-approach is not as externalism-evoking as Searle’s


narrative of “directions of fit”, it is still problematic. Actually we have doubts on the
applicability of this approach to the task of modelling natural intentionality, since this
approach mistakenly assumes that it is always easy to find the demarcation line between
this attitude and another. But this assumption is definitely not true in cases wherein the
“strength” of a psychological mode is gradable. Here goes our analysis.

Obviously the strengths of both beliefs and desires are gradable: it makes perfect sense to say that I have a strong belief that p or a weak desire that q. But the semantic problem involved here is that the meanings of many psychological verbs, when supplemented with adverbial expressions indicating strength, mutually overlap or are even synonymous with each other: for instance, is there really a substantial difference between “A very weakly believes that p is the case” and “A very weakly suspects that p is the case”? If there is no substantial difference between them, then the most natural explanation for the lack of this difference seems to be that the scope of the so-called “belief-box” is continuous with that of the “suspect-box”. But this explanation quickly makes Fodor’s box-metaphor, which assumes the discreteness of boxes, fade.

Sympathizers of LOT might contend that linguistic intuitions about how we use psychological verbs in ordinary discourse may not be illuminating for how LOT works on a deeper level. For instance, it may be the case that the gradable “belief” of our natural language does not strictly correspond to an ungradable “belief-box” on the LOT-level. But we don’t think this remedy can work. It is an undeniable fact that beliefs are gradable on the level of public language, and it is also widely accepted that speech acts cannot be produced without corresponding mental activities. Hence, speech acts involving gradable psychological verbs have to be accompanied by corresponding mental activities, which are expected to be explained by LOTH. But LOTH simply cannot plausibly explain the explanandum on the table, given that it is always fairly difficult for a theory assuming abrupt transitions from one basic state to another to explain phenomena involving gradual inter-state transitions, unless the number of basic states on the level of the explanans is astronomically large. But it is psychologically implausible to suppose that the number of types of psychological modes is so tremendous on the LOT-level (otherwise the resulting human cognitive architecture would be too complicated to be a conceivable result of natural selection); hence any competing explanation, whatever it is, has to abandon the box-approach. Analogously, in the modelling of artificial intentionality, the box-approach cannot be adopted if the system is expected to exhibit gradual transitions among different mental states as humans do.

Now sympathizers of either Searle or Fodor may still contend that neither philosopher is interested in algorithmically realizing artificial intentionality; rather, both philosophers have their own independent arguments against the possibility of doing so, e.g., Searle’s “Chinese Room Argument” and Fodor’s argument against high-level modularity as a requisite of the computational theory of cognition. But we don’t think this objection is relevant to our argument. Our point is: no matter whether their global hostility towards the algorithmic reconstruction of intentionality is warranted, their theories of natural intentionality are flawed; hence any AI scientist who adopts their general view of how intentionality works cannot model intentionality successfully.

Now we will give some further reasons why Searle’s and Fodor’s theories are problematic for AGI. In the last section, we explained why the perspective-free view of the “world” assumed by enactivism-oriented AI cannot be coherently modelled. Since a similar view is assumed in Searle’s notion of “direction of fit”, this notion itself cannot be algorithmically modeled either. As to Fodor’s box-approach, a variant of it has actually been adopted by mainstream AI scientists in a branch of AI labeled “context modelling”. The aim of context modelling is to build a computer system which can automatically handle data differently according to different contexts, and this goal is relevant to the issue of intentionality in the sense that each type of psychological state can be more abstractly viewed as a type of context (e.g., the belief-context, the desire-context, etc.). Hence, if the box-approach in a theory of intentionality is flawed, the similar approach in the modelling of contexts, namely, an approach according to which each context is treated as a “box”, cannot bring about satisfactory results either.

And the following examples may show that even the box-approach in context modelling is defective, an observation which conversely reinforces our doubt about the validity of the similar approach in a theory of intentionality. A typical AI-oriented (but still philosophical) formulation of the box-approach in context modelling is given by Fausto Giunchiglia and Paolo Bouquet (hereafter G&B):

It is quite common intuition that some sentences are true (polite, effective, appropriate, etc.) in a context, and false (impolite, not effective, inappropriate) in others, that some conclusions hold only in some contexts, that a behavior is good only in some contexts, and so on. For instance, “France is hexagonal” (or “Italy is boot-shaped”) is true in contexts whose standard of precision is very low, false in the context of Euclidean geometry. …All these examples seem to suggest that a context can metaphorically be thought of as a sort of “box”. Each box has its own laws and draws a sort of boundary between what is in and what is out. A closer look to the literature on context will show that this metaphor can be given two very different interpretations. According to the first, a “box” is viewed as part of the structure of the world; according to the second, a “box” is viewed as part of the structure of an individual’s representation of the world.

It is not hard to see that G&B’s expression “each box has its own laws and draws a sort of boundary between what is in and what is out” predicts that inter-box transitions have to be abrupt. Since the second type of “box” in G&B’s narrative obviously refers to psychological modes, inter-mode transitions cannot be gradual in G&B’s framework either.

However, trans-contextual reasoning has to be done in many practical cases, and AI scientists should do something to meet this practical demand. Their recipe is to provide ad hoc bridge-like formulas to bring information stored in one box over to another, such as G&B’s “bridge laws” and Ramanathan V. Guha & John McCarthy’s “lifting formulas”. But none of these proposals is flexible enough to meet the demands of AGI, since such trans-contextual reasoning devices cannot be built without previously individuating all boxes and fixing all inter-box boundaries, whereas in ordinary discourse, even if it makes sense to talk about the boundary between one topic and another, the boundary itself is routinely pragmatically determined. Hence, the box-approach is only useful in building specific AI systems which are not expected to exhibit human-level flexibility.
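To see where the rigidity comes from, consider a minimal sketch of the “boxes plus bridge formulas” pattern (our own toy rendering, loosely in the spirit of McCarthy-style lifting between contexts; the contexts and rules are hypothetical):

```python
from typing import Optional

# Toy "box" contexts: each context is the set of sentences it holds true,
# plus a hand-written bridge rule that lifts sentences into another box.

contexts = {
    "everyday":  {"France is hexagonal"},
    "geometry":  set(),
}

def lift_everyday_to_geometry(sentence: str) -> Optional[str]:
    """A bridge (lifting) rule: a sentence crosses the boundary only under
    a hand-coded rewriting; with no rule, nothing crosses at all."""
    if sentence == "France is hexagonal":
        # In the precise context the claim must be weakened explicitly.
        return "France is approximately hexagonal (at a low standard of precision)"
    return None

for s in contexts["everyday"]:
    lifted = lift_everyday_to_geometry(s)
    if lifted:
        contexts["geometry"].add(lifted)

print(contexts["geometry"])
# The rigidity: every pair of boxes needs its own pre-written lifting rules,
# so all boundaries must be individuated in advance -- exactly what ordinary,
# pragmatically determined discourse does not do.
```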

The general moral of this section and the last is that mainstream philosophical theories of intentionality are not illuminating for AGI, because they either appeal to external environmental factors which cannot be internally modeled, or they cannot handle gradual transitions among different cognitive states. Now is the right time to introduce Husserl to solve these problems.

5. How could a Husserlian AGI scientist solve the problems?

First of all, we will show how Husserl could explain intentionality without introducing external factors, by reinterpreting his notion of “phenomenological epoché” or “phenomenological reduction”. The core text relevant to this notion is as follows:

The theory of categories must start entirely from this most radical of all ontological distinctions — being as consciousness and being as something which becomes “manifested” in consciousness, “transcendent” being — which, as we see, can be attained in its purity and appreciated only by the method of the phenomenological reduction. In the essential relationship between transcendental and transcendent being are rooted all the relationships, already touched on by us repeatedly but later to be explored more profoundly, between phenomenology and all other sciences — relationships in the sense of which it is implicit that the dominion of phenomenology includes in a certain remarkable manner all the other sciences. The excluding has at the same time the characteristic of a revaluing change in sign; and with this change the revalued affair finds a place once again in the phenomenological sphere. Figuratively speaking, that which is parenthesized is not erased from the phenomenological blackboard but only parenthesized, and thereby provided with an index.

Now we attempt to reinterpret Husserl’s meaning by formulating the following procedure of “epoché”, in which no puzzling terms like “transcendent being” or “purity” will be used (a code sketch follows the steps):

Step 1. Introduce the commonsensical view that the truth-conditions of p are different from those of SMs(p) (where “S” refers to a subject and “M” to a certain type of psychological mode). For instance, even if the truth-conditions of “Tully is Cicero” are all satisfied, this does not imply that the truth-conditions of “Sally believes that Tully is Cicero” are satisfied accordingly.

Step 2. It is obvious that the truth-conditions of “Sally believes that Tully is Cicero” can only be internally determined, in terms of, say, whether the target belief is coherent with her other stored beliefs or whether it is sufficiently supported by evidence acquired by the agent. Otherwise it would be too hard to explain why the truth-conditions of SMs(p) are so independent of the truth-conditions of p.

Step 3. Now we take a further step by presupposing that there is an implicit speaker accompanying any conceivable sentence. Hence, each proposition is supplemented with a psychological mode.

Step 4. Hence, by executing steps 2 & 3, the truth-conditions of each conceivable sentence can only be internally determined. This is nothing but the “residue” of the phenomenological reduction.
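A minimal computational rendering of steps 1 and 2 might look like this (our own sketch; the coherence test is a deliberately crude stand-in for whatever internal criterion a real system would use):

```python
# Sketch: evaluating S-M-p ascriptions internally, never against the world.
# An agent is just a store of sentences it accepts; "Sally believes p" is
# settled by looking inside Sally's store, not by checking whether p is true.

from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)

    def believes(self, p: str) -> bool:
        # Internal criterion only: is p in the store and not contradicted?
        return p in self.beliefs and f"not({p})" not in self.beliefs

sally = Agent(beliefs={"Tully is Cicero"})

# Truth-conditions of p and of S-believes-p come apart, as step 1 requires:
p = "Cicero denounced Catiline"
print(sally.believes(p))                  # False: not in her internal store,
                                          # whatever the external facts may be.
print(sally.believes("Tully is Cicero"))  # True: settled purely internally.
```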

Some readers may wonder how one could be entitled to presuppose the omnipresence of an implicit speaker (as step 3 requires) without introducing subjective idealism. But an AI/AGI-related point of view can easily explain how. Obviously, no AI/AGI system can be built without a certain programming language, and the organization of each programming language has to encapsulate how the world works from the perspective of a specific designer. Therefore, nothing mysterious is involved in presupposing such an “implicit speaker” if the preceding procedure is construed in an AI/AGI context. And this interpretation can even make Husserl’s notion of “epoché” perfectly compatible with metaphysical physicalism (the metaphysical assumption of most AI scientists), since the irreducibility of an “implicit speaker” in any algorithmically reconstructed micro-world implies neither that the physical world does not exist independently of how cognitive systems perceive it, nor that cognitive activities fail to supervene on corresponding physical events. Or, in Husserl’s own terms in the preceding citation, speculations about the metaphysical nature of the world are “not erased from the phenomenological blackboard but only parenthesized”. Hence, a Husserlian AI programmer does not need to take on the burden of modelling the world beyond the horizon of an omnipresent “implicit speaker”.

As to how to construe gradual inter-mental-state transitions, Husserl’s phenomenological theory of time, suitably reconstructed, can be used to form the following argument against Fodor’s box-approach:

(1) If the box-metaphor is applicable to intentionality, then it has to be applicable to any component of intentionality, just as, if one can separate A from B, then one should be able to separate any component of A from B.

(2) Consciousness of temporal sequences has to be involved in many psychological modes, like hope and regret (commonsense).

(3) Phenomenologically speaking, a typical internal temporal sequence is composed of the original impression (namely, the phenomenological equivalent of the “present”), protention (namely, the phenomenological equivalent of the “future”), and retention (namely, the phenomenological equivalent of the “past”).

(4) But it makes no sense to talk about abrupt transitions among these components, given that they constitute a continuum in which the “present” can only be seen as an ideal limit, “just as the continuum of species red converges towards an ideal pure red”.

(5) Hence, the box-metaphor cannot be applied to the temporal components of intentionality.

(6) It is obvious that (5) is incompatible with (1).

(7) Therefore, the box-metaphor is inapplicable to intentionality.

So far, so good. Husserl’s theory of intentionality is immune both to externalism and to the box-approach. However, some readers may still complain that his theory has little value for AGI in the sense that it provides no algorithmic details. But we believe that it has at least provided some general guidelines on how intentionality could be modeled. And these guidelines can be found in his notion of the “noema”.

But what is the noema? Unfortunately, even within Husserl scholarship there is a debate over different interpretations of the noema. For example, according to the Fregean interpretation (supported by Føllesdal, Dreyfus and McIntyre, etc.), the noema is a meaning-encoding entity between the mental act and the external object, and the relevant object becomes the referent of the relevant mental act just because the noema specifies the way in which the referent is referred to. By contrast, a competing interpretation of the noema (supported by Sokolowski, Drummond, etc.) contends that noemata are not mediating entities between mental acts and external objects but just the external objects considered in phenomenological reflection, or “experienced objects” for short.

The first interpretation of the noema looks less promising from the perspective of AGI, because it imposes a huge programming burden of modeling the sandwich-like structure of “act–noema–object”, and beyond the formidable work of specifying each noematic meaning as a contextually invariant manner of fixing referents, how to harmonize these meanings with contextually emerging factors would be another tricky problem. By contrast, since no contextually invariant entities are assumed in the second interpretation of the noema, it may afford a more elegant way to model intentionality.

However, even the second interpretation is problematic, by including the key phrase “experienced objects”. Given that the specific perspective involved in any piece of experience stands by nature in contrast with the object itself, which is perspective-free, this gap cannot be easily filled by appealing to a compound expression like “experienced objects”, which can only be unpacked as a weird phrase like “perspective-free entities seen through the lens of a specific perspective” (but how could any entity remain perspective-free when viewed from a certain point of view?). Hence, the burden of modelling perspective-free external entities is still left on the table if this compound expression is literally put into practice.

A way out of this embarrassment is to appeal to an internalized version of Robert Brandom’s inferentialism, which Steven Crowell has applied to the interpretation of the noema. Inspired by Brandom’s discussion of “material inferences”, Crowell defines Husserl’s notion of the noema in terms of “a quasi-inferential concept of representation”, which he footnotes with the following illustrations: the perceived color is an “adumbration of something”; the front side “implies” the unseen back; taking something as a barn “entails” a specific relation to the landscape, the barnyard, and farming practices, etc. Hence, the noema in this sense can be viewed as a gateway from those aspects of objects that have been experienced to those expected to be experienced in the future. According to this interpretation, the noema is definitely not a static entity but a set of high-level features of the object-relevant inferences in which the subject is engaged.

This reading of the noema fits with Husserl’s following comment on the nature of the phenomenological “object”, which is synonymous with the “noematic X” in his context:

Everywhere ‘object’ is the name for eidetic concatenations of consciousness; it appears first as the noematic X, as the subject of sense pertaining to different essential types of sense and posita. Moreover, it appears as the name, ‘actual object’, and is then the name for certain eidetically considered rational concatenations in which the sense-conforming, unitary X inherent in them receives its rational position.

Or, to put it another way, the term “object” is nothing but the name for a system of harmoniously connected experiences, and the object per se is merely the external correlate of the “object” internally construed.

But how can Husserl’s notion of the noematic X as an inferential node be algorithmically modeled? First of all, the harmoniousness of the whole inferential network around the noematic X might be tested in terms of, for instance, the compatibility of the beliefs encoded in a corresponding network (e.g., Touretzky’s inheritance system, or the description logics collected in the handbook edited by Franz Baader et al.). However, the remaining technical obstacle is still salient. Recall that the noematic X as an inferential node is connected to unexperienced aspects of objects; hence a computational model of it has to be open to unexpected data in a flexible but not overly resource-consuming manner. This requirement poses a big challenge to GOFAI approaches (including Touretzky’s and Baader’s), given that the axiomatic nature of these approaches in principle renders the system unresponsive to environmental contingencies. A similar challenge applies to connectionism or deep learning too, since when a connectionist/deep-learning system is trained to be adaptive to a certain type of task, say, the recognition of human facial expressions, it cannot be adaptive to a new task of another sort, say, speech recognition, which may require a new set of training data and even a different neural architecture with different parameters. By contrast, a human agent can flexibly combine the cognitive capacity for recognizing human faces and that for recognizing human voices to complete the task of, say, recognizing somebody as somebody.

A possible technical solution to this problem is provided by Pei Wang’s Non-Axiomatic Reasoning System (NARS, with “Narsese” as its adjective form, literally meaning “of the language of NARS”). Due to the limitation of space, we can only explain how NARS helps to model Crowell’s inferentialist interpretation of the noema. In NARS, both lexicon-like entities and minimally stable patterns of experience can be viewed as “Narsese concepts”, which are connected to each other to form Narsese sentences and hence more complicated inferential pathways. The whole Narsese conceptual map is not axiomatically pre-determined by the programmer but comes into being automatically and gradually as the result of the interplay between some internal parameters of the system and the inputs fed into it. And it is just in this sense that the whole system is described as “non-axiomatic”.

More importantly, in NARS psychological modes are characterized without appealing to the box-approach. Rather, belief, the most primitive psychological mode, is first implicitly expressed in terms of the strength or weight of the pathways connecting one Narsese node to another. For instance, if the pathway connecting the node S with the node P is highly weighted, this means that the system strongly believes that all Ss are normally Ps. As to the weight-values of pathways, they come from the interactions between acquired evidence and the corresponding Narsese sentence (by the way, each piece of evidence is itself regarded as a Narsese term in NARS). That is to say, the more evidence for a Narsese belief is at hand, the more firmly the system holds the belief. This evidence-based treatment can easily handle psychological modes like suspicion and disbelief (both of which involve the role of positive/negative evidence) as mutually transformable states.

Psychological modes like intention or desire do make things a bit more complicated, since they involve the notion of a “goal”, which is future-oriented, whereas any evidence is past-oriented. But there is still no need to introduce Searle’s notion of “direction of fit” here, since the future/past contrast is one thing, while the world-to-mind/mind-to-world contrast is another. Rather, the Narsese recipe for handling desire can be unpacked as the following steps (a code sketch follows the list):

Step 1. First, we assume that, through a certain procedure of learning, the system has acquired a pool of Narsese sentences about how the artificial system itself can functionally survive, e.g., knowledge about how to maintain its battery level.

Step 2. The system applies the general knowledge in this pool to its current state to determine whether it is “healthy” enough. If it is, no desire is produced; if not, it executes the next step.

Step 3. Due to its inferential capacity, the system finds that if a precondition p were true, it could “live” much better.

Step 4. But the system finds that it cannot believe that p is true now, since it lacks enough positive evidence.

Step 5. The system then attaches the label “primitive goal” to p and calculates how much evidence is needed to make p true.

Step 6. Since the needed evidence is not actually present, the system attaches the label “derived goal” to each operation that would make a relevant piece of evidence occur.

Step 7. The foregoing reasoning drives the system into the proper actions.

Step 8. The system evaluates the gap between the newly acquired evidence and the p-requiring evidence after each run of actions, until the gap is reduced to a certain level, which means that the desire is satisfied.
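The following toy loop compresses the eight steps into runnable form (our own sketch, not Pei Wang’s implementation; the “battery” scenario and all thresholds are hypothetical):

```python
# Toy goal-derivation loop following the eight steps above (hypothetical
# "battery" scenario; not actual NARS code).

battery = 0.3                      # current state of the system
HEALTHY = 0.8                      # step 2: threshold for being "healthy"

# Step 1: learned survival knowledge -- operations and their expected effects.
operations = {"seek_charger": 0.2, "reduce_load": 0.1}

goals = []
while battery < HEALTHY:                       # step 2: not healthy
    p = "battery is sufficient"                # step 3: helpful precondition
    # Step 4: insufficient evidence to believe p; step 5: make p a goal.
    goals.append(("primitive goal", p))
    # Step 6: derive sub-goals from operations expected to produce evidence.
    best_op = max(operations, key=operations.get)
    goals.append(("derived goal", best_op))
    # Step 7: act; step 8: re-evaluate the gap after the action.
    battery += operations[best_op]
    print(f"executed {best_op}, battery now {battery:.1f}")

print("desire satisfied; goals along the way:", goals)
```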

The preceding procedure characterizes how a “selfish” AGI system could entertain and derive desires with the ultimate goal of its own survival. Surely one can build an “altruist” system by replacing the pool of knowledge in step 1 with another pool concerning how other systems or human masters could functionally survive. Moreover, one can even build a system bearing the marks of both “selfishness” and “unselfishness” by “teaching” the system to form both types of pools, and thereby represent the so-called “complexity of human nature” in an artificial system. However, no matter how complicated the systems built on the basis of the preceding eight-step recipe may be, desire or intention will never be treated as a static box waiting to be filled with neutral mental content. Rather, in NARS, “intention” or “desire” refers to a high-level feature of dynamic inference overarching both action and cognition. In addition, in NARS the notion of desire, albeit not directly evidence-based, is still relevant to evidence, since conversions from expected evidence to evidence-making actions assume that the system’s sub-system of beliefs is evidence-based. Hence, even though the label “primitive goal” itself looks like a box-label, it is not literally an intention-box standing in contrast with a belief-box, since belief-supporting evidence has to be used in the process of forming intentions or desires.

As to how these Narsese constructions are relevant to Husserl’s notion of “epoché” or “bracketing”, we just want to make one point explicit now. Although this notion can be applied in some degree to any AI system, due to the fact that any AI system has some built-in prejudices about how the world works, no mainstream AI system deserves the label of “a distinct individual”, since different computers implementing the same software would behave in basically the same way and hence “bracket” the external world from basically the same perspective. By contrast, human perspectives are definitely more diversified and hence capable of producing intentionality in ways specific to the historically formed habits of individuals, or, to borrow words from Dermot Moran’s interpretation of Husserlian egos, “different egos have their different streams of temporalization, and it is a complex issue how a ‘common form of time’ is constituted”. In this respect, NARS is superior to most mainstream AI systems, provided that, for each individual computer implementing NARS, the topology of its Narsese conceptual map is nothing but the result of its own learning history, so that habits of inference can thereby differ from one individual computer implementing NARS to another. Hence, it is fairly natural for two NARS computers to bracket the same content in different ways, or even to “have their different streams of temporalization” in Husserl’s sense. In sum, NARS offers a relatively promising approach to the desired Husserlian AGI project.

6. Metaphilosophical observations as concluding remarks

Hitherto we have explained: (1) why the notion of intentionality is indispensable for any AGI system; (2) why the treatment of intentionality in mainstream Anglophone philosophy is less preferable than its Husserlian counterpart; (3) how to model the Husserlian notion of intentionality by appealing to NARS. Now it is time to articulate the underlying motivation propelling the whole research. We concede that more than half of the space budget of this article is consumed in clarifying point (1), and this way of distributing space is necessary, since externalism-oriented (and hence anti-Husserlian) speculations are so dominant in current Anglophone philosophy of mind that Husserl’s own approach cannot find its niche without a serious battle with them. However, we are still loyal to the tradition of “analytic philosophy” in a very general sense, if this label is understood simply as a general name for any manner of thinking and writing philosophy that uses explicit arguments. And Husserl’s philosophy is not “analytic” enough even according to this loose definition of the term, given the overpopulation of his terminology and the difficulty of directly reconstructing his wordy comments as linear arguments. Due to this consideration, this article is also intended to “disenchant” Husserl by appealing to resources in AGI.

But why AGI? Why not just formal tools from logic or statistics, given that all AI systems have to rely on them? The primary reason is that a workable AGI system has to be something more than these formal tools. For instance, it has to have a proper cognitive architecture and hence to be minimally relevant to human intentionality, whereas formal tools need not be. Meanwhile, due to its reliance on algorithmic details, any AGI narrative, albeit perhaps pitched at a high level, still has to be “analytic” in the most general sense of the term. Hence, due to this duality, AGI can provide a perfect platform for interpreting Husserl.

Another reason not to appeal to formal logic is that by “formal logic” most people just mean Fregean logic, which is actually more suitable for characterizing semantic externalism, since the ontological status of external referents (e.g., objects or truth-values) has to be assumed in the Fregean theory of meaning; otherwise it would make no sense for a Fregean to view meanings as mapping mechanisms correlating symbols with referents. In this sense, Fregean logic would be a very cumbersome tool for modelling Crowell’s inferentialist interpretation of the noema, from which naïve externalism has to be precluded. By contrast, if we appeal to AGI rather than “logic”, then the novelty of the term “AGI” itself gives us more space to introduce some form of non-Fregean logic, e.g., the Narsese logic. And this treatment will naturally separate Husserl’s own position from Føllesdal’s and Dreyfus’ Fregean interpretation of Husserl, in which the Fregean view of logic is still assumed.

认知科学与人文科学的模糊边界

江 怡

我们知道,认知科学至今都没有一个单一可接受的定义。根据不同的定
义,认知科学领域被划分为七个或四个。罗伯特· J. 斯坦顿(Robert J.
Stainton)在他主编的《认知科学的当代争端》一书的序言中提出了一种有所
争议的区分,即区分为四个分支:行为科学和脑科学部分,如心理语言学、神
经科学和认知心理学;社会科学部分,如人类学和社会语言学;形式学科部
分,如逻辑、计算机科学和人工智能;哲学部分,如心灵哲学和语言哲学。根
据斯坦顿的说法,认知科学的标志是,它规定了所有这些分支的方法和结果,
试图提供对心灵的全面理解。(Stainton, p. xiii)在这些分支中,我们发现
只有两个领域在传统上被看作是属于人文学科,即语言学和哲学,虽然有两个
学科使得语言学成为一门交叉学科,即社会学和心理学。从斯坦顿的划分中,
我们还可以看到自然科学在认知科学中占据着支配地位。所以,这里的问题就
是:人文科学在认知科学中会有什么作用?或者说,人文科学是否对认知科学
有所贡献?

无论认知科学包含了多少领域,其中有一个强烈的自然科学立场,使得认
知科学成为具有自然科学指向的学科。认知科学基于自然科学之上,这是自然
的,也是必要的,因为它在性质上就是经验的,在朝向上是实验的。认知科学
的目的是要更好地理解人类心灵,正确地观察这个世界。虽然理解心灵是极其
复杂的,处于多学科之中,但认知科学是讨论不同学科相互作用的话题的最好
选择。认知科学中不欢迎思辨和形而上学。虽然哲学讨论中也存在这样一种倾
向,即自然主义,但哲学却很少被包含在认知科学中,除了从性质上就具有经
验特征的心灵哲学和语言哲学。所以,在这种意义上,认知科学就在一定程度
上有根据地被理解为属于自然科学。

但是,人文科学会如何呢?如果认知科学的性质就是更好地理解人类心
灵,它就应当包含某些人文科学,因为以某些特殊的方式探讨人类心灵,这对
人文学科来说是自然的,也是必要的。那么,人文科学如何探索人类心灵呢?
通过思辨还是沉思?或者只是论证?在哲学史上始终存在哲学与科学之间的明
显区分,在当代欧洲大陆哲学中也是如此。根据这种区分的观点,哲学必须远
离科学,由此哲学就可以保持与世界的独特地位。但在当代分析哲学传统中,
哲学家们更愿意指向一种科学的模式,它以可观察和实验的方式改变了哲学。
以一种更为公众的和常识性的方式加以重建的,不仅仅是哲学,还包括其他人
文科学,如文学、历史、宗教和艺术。模式化支配了文学创作。历史变成了对
历史证据和文献的研究。宗教也努力接受科学的理论。艺术的发展则依赖于实
验。如我们所知,实验哲学在最近十几年里得到了发展。所有这些都表明了这
样一个观念:在哲学与科学之间如今很难做出区分了。

如果这个观念是可以接受的,我们如何对待这个观念就是一个我们必须解
决的问题。我这里想要强调的是,哲学与科学之间特别在当代不存在严格的区
分。如果认知科学是属于自然科学的,哲学就会有某些部分是具有自然科学导
向的,特别是心灵哲学和语言哲学。在分析哲学史上,一直有这样一种宣传,
即哲学应当基于科学而得到重建。即使是今天,更多的心灵哲学家和语言哲学
家试图根据心理学、神经科学、人工智能以及实验科学中的其他学科探索人类
心灵的性质。然而,相反,当代欧洲大陆哲学,如现象学、诠释学和后现代主
义哲学等,则反对这个观念。在他们看来,哲学应当保持其自身不同于科学的
对人类心灵的地位。但问题在于:随着科学突飞猛进的发展,我们如何在心灵
和语言上保持哲学的特殊形式?

由于哲学与科学之间不存在严格的区分,如今我们就无法完全离开科学而
讨论哲学问题。科学已经渗入到了哲学之中。这不仅包括了科学的思维方式,
而且包括来自科学的术语用词,这些都强烈地影响到哲学的讨论。即使是大陆
哲学家也会关心科学的发展,虽然他们的解释不同于科学家。没有人会认为哲
学可以以某种方式与科学对立。相反,哲学在认知科学中也具有一种作用。卡
罗琳·索贝尔和保罗·李在他们的《认知科学:一种跨学科方法》中指出,
“哲学在我们研究如何理解我们所面对的宇宙中始终起到了非常重要的作用,
对我们理解自身也是如此。”(Sobel and Li, p. 343)以往的哲学家们始终
努力解决从古希腊以来就提出的身心问题。这个问题是科学家们探索心灵独特
性质的起点,由此发现心灵的特征和心灵与身体之间的关系。而当科学家们在
根据最新的科学技术发展中发现某些无法解决的问题时,他们就会求助于哲学
家的帮助。例如,如何解释感受质(qualia)的性质?如何描述现象意识?我
们在什么意义上可以解释道德?雷尼(Regina A. Rini)在《道德与认知科
学》中描述了关于道德判断的认知科学理论与哲学上的道德理论之间的互动关
系。根据这种描述,大多数哲学家都否认认知科学在道德哲学中的作用。某些
哲学家则主要赋予认知科学消极的作用。对这些哲学家来说,哲学研究是无法
用科学研究取代的。例如,道德哲学家试图回答一些实质性的伦理问题,如我
们可以追求的最为有价值的目标是什么?我们应当如何解决这些目标之间的冲
突?存在某些我们不可为的方式吗,即使这样做会促进最好的结果?什么是好
的人类生活形式?我们可以如何获得这种形式?一个正义的社会是如何组织
的?显然,这些问题是无法仅仅根据某些实验成果和经验数据得到回答的。科
学家们在努力达到他们更好地理解人类心灵的目标时,他们会寻求来自哲学家
们的帮助。在这种意义上,哲学研究不仅是科学家们的出发点,也是他们从事
科学研究的终点。

综上所述,我们无法看到哲学与科学之间的严格区分,在这种意义上,认
知科学与人文科学之间也不存在清晰的边界。这个边界是模糊的,无法划定
的。

参考文献

Rini, Regina A., “Morality and Cognitive Science”, in Internet Encyclopedia of Philosophy, https://www.iep.utm.edu/m-cog-sc/, accessed September 16, 2018.

Sobel, Carolyn P. and Li, Paul, The Cognitive Sciences: An Interdisciplinary Approach, Los Angeles: Sage, 2013.

Stainton, Robert J. (ed.), Contemporary Debates in Cognitive Science, Oxford: Blackwell, 2006.

Jiang Yi

The Fuzzy Boundary of Cognitive Science and Humanities

Abstract: As we know, there is to date no single accepted definition of cognitive science. According to different definitions, the fields of cognitive science are divided into seven or four. A controversial division, made by Robert J. Stainton in his preface to Contemporary Debates in Cognitive Science, distinguishes four branches: the behavioral and brain sciences, such as psycholinguistics, neuroscience and cognitive psychology; the social sciences, such as anthropology and sociolinguistics; the formal disciplines, such as logic, computer science and artificial intelligence; and parts of philosophy, such as the philosophy of mind and language. The hallmark of cognitive science, according to Stainton, is that it draws on the methods and results of all these branches in an attempt to give a global understanding of the mind. (Stainton, p. xiii) Among these branches we find only two fields that are traditionally claimed as humanities, linguistics and philosophy, though there are two further disciplines, sociology and psychology, that make linguistics an inter-discipline. From Stainton’s division we can also see that the natural sciences are overwhelmingly dominant in cognitive science. So the question arises: what role will the humanities have in cognitive science? Or do the humanities make any contribution to cognitive science?

No matter how many fields are involved in cognitive science, there is a strong commitment to the natural sciences which makes cognitive science natural-science-directed. It is natural and necessary that cognitive science is based on the natural sciences, for it is empirical in nature and experimental in orientation. The aim of cognitive science is to understand the human mind better and to observe the world rightly. Though the understanding of the human mind is complicated and spread across a variety of disciplines, cognitive science is the best venue for discussing this topic through the interaction of disciplines. Speculation and metaphysics are not welcome in cognitive science. Though there is such a trend in philosophical discussions, namely so-called naturalism, philosophy is not much involved in cognitive science, with the exception of the philosophy of mind and language, which are empirical in character. So in this sense cognitive science is, to some extent and with some justification, understood as part of the natural sciences.

But how about the humanities? If the nature of cognitive science is the better understanding of the human mind, it should also contain some humanities, for it is natural and necessary for the humanities to explore the human mind in their own particular ways. Here is the question for the humanities: how do the humanities explore the human mind? By speculation or meditation, or just by argumentation? There has been a clear division between philosophy and science in the history of philosophy, as well as in contemporary Continental philosophy. According to this division, philosophy must keep away from science, whereby philosophy can maintain its peculiar position towards the world. But in the contemporary analytic tradition, philosophers prefer to orient themselves towards a scientific model, which modifies philosophy in an observational and experimental direction. Not only philosophy but also other humanities, such as literature, history, religion and the fine arts, are being reconstructed in a more public and commonsensical way. Models dominate writing in literature. History has turned into a study of historical evidence and documents. Religion is likewise engaged with scientific theories. The fine arts develop by relying on experiments. And, as we know, experimental philosophy has arisen in recent decades. All this points to the idea that it is hard to draw a distinction between philosophy and science today.

If this idea is acceptable, what to do with the distinction is the problem we have to solve. I would like to stress here that there is no sharp distinction between philosophy and science, particularly in contemporary times. If cognitive science is part of the natural sciences, then philosophy too will have some parts that are natural-science-directed, especially the philosophy of mind and language. It has been a constant proclamation in the history of analytic philosophy that philosophy should be reconstructed on the basis of the sciences. Even today, many philosophers of mind and language attempt to explore the nature of the human mind according to developments in psychology, neuroscience, artificial intelligence and other disciplines among the experimental sciences. In contrast, however, contemporary Continental philosophies, such as phenomenology, hermeneutics and post-modernist philosophy, reject this idea. For them, philosophy should keep its own position on the human mind, different from that of the sciences. But the problem is: how can we preserve a distinctively philosophical approach to mind and language while the sciences are developing so rapidly?

Because there is no sharp distinction between philosophy and the sciences, we could
not discuss philosophical problems without the sciences today. The sciences have already
penetrated philosophy. Not only the scientific way of thinking but also words and terms from
the sciences strongly affect philosophical discussions. Even Continental philosophers attend
to developments in the sciences, though their explanations differ from those of scientists.
Nobody would say that philosophy could simply stand opposed to the sciences. Conversely,
philosophy also has a role in cognitive science. As Carolyn Sobel and Paul Li put it in The
Cognitive Sciences: An Interdisciplinary Approach, "philosophy has long played a very
important role in our search for understanding the universe that confronts us and, indeed,
for understanding ourselves" (Sobel and Li, p. 343). Philosophers have been

engaged with the mind-body problem ever since the ancient Greeks. The problem is the
starting point for scientists in exploring the unique nature of mind, by identifying
features of the mind and its relation to the human body. And scientists appeal to
philosophers for help when they meet puzzles that remain unsolvable given the current state
of science and technology. For example: how to explain the nature of qualia? How to describe
phenomenal consciousness? In what sense can we explain morality? In "Morality and Cognitive
Science," Regina A. Rini describes the interaction between cognitive-scientific theories of
moral judgment and moral theory in philosophy. On her account, most philosophers grant
cognitive science little or no role in moral philosophy, and some assign it a primarily
negative role. For these philosophers, philosophy cannot be replaced by scientific research.
Moral philosophers, for instance, try to answer substantive ethical questions such as: What
are the most valuable goals we could pursue? How should we resolve conflicts among these
goals? Are there ways we should not act even if acting so would promote the best outcome?
What is the shape of a good human life, and how could we attain it? How is a just society
organized? It is evident that such questions cannot be answered merely on the basis of
experimental achievements and empirical databases. Scientists ask philosophers for help as
they approach their goal of understanding the human mind. In this sense, philosophical
research is not only the starting point for scientists but also the end of their exploration
in scientific research.

To conclude: we can find no sharp distinction between philosophy and the sciences,
and in the same sense there is no clear boundary between cognitive science and the
humanities. The boundary is fuzzy and cannot be drawn.

References:

Rini, Regina A., "Morality and Cognitive Science," in Internet Encyclopedia of Philosophy,
https://www.iep.utm.edu/m-cog-sc/, accessed September 16, 2018.

Sobel, Carolyn P., and Paul Li, The Cognitive Sciences: An Interdisciplinary Approach,
Los Angeles: Sage, 2013.

Stainton, Robert J., ed., Contemporary Debates in Cognitive Science, Oxford: Blackwell, 2006.

作为文化技术的媒介——从书写平面到数字界面

克莱默(Sybille Krämer)

20 世纪 80 年代之后,媒介根本主义(media fundamentalism)形成潮流。麦
克卢汉、基特勒、德里达以来的众多学者将媒介视为高度自律的文化动因,认为
媒介创制了其所传达的意义。这一立场源自尼采、福柯等对人类主体概念的消解。
但在媒介根本主义中,媒介实际上沿袭了过去人类主体的自我中心主义。单纯将
媒介视为意义的创造者、过分张扬其建构性和自律性,毋宁说贬低了传播活动的
创造性价值。有鉴于此,本文欲跳脱媒介根本主义的束缚,探寻一种三元的媒介
哲学。媒介犹如信使,连结相异的两方,其根本功能在于使不可见者得以被感知。
它具有本雅明所谓“间接的直接性”(mediated immediacy):交流顺畅意味着媒
介消隐,后者的物质性只有在断裂、失序处才被察觉。信使并不像言语行为理论
的说话者那样为自己所说的内容负责,他仅是传话的第三方,不可避免要受其余
两方的制约。因此,媒介在使用中,一方面要顾及它所传送的意义,一方面要重
塑其内容,使之适应媒介自身的结构与物质性,从而处于自律与他律的持续互动
中。平面化技术(the technique of flattening)是这种三元媒介哲学的一个范例。
人类通过设想现实中并不存在的、可供书写刻画的纯平面,为思维赋予了可见、
可操作的外在形式,这无论对于审美还是认知都意义重大。以认知为例,《美诺
篇》中的小男孩通过在绘图过程中不断试错,成功画出了两倍于前的正方形;高
斯通过观察算式中数字的空间排布,迅速算出了从 1 到 100 的数字总和。其中,
平面扮演了思维的试验场、参与者、推动器,无形的智识活动一旦落实于平面就
变得直观、有序。二维平面是一维时间与三维空间之间的中介,是时间连续性与
空间同时性之间相互转化的枢纽。这一转化同时伴随着重构,如拼音文字对口语
的空间化不止是单纯的记录,亦包含对语言本身的分析。此外,平面也是个体与
社会之间的中介,是推理与直观两种认识能力之间的中介。平面媒介代表了欧洲
启蒙精神对于明晰、可控的追求,然而当数字化时代来临,书写平面演化为彼此
联通的人机界面,一种全新的深度模式死而复生。电脑好似黑箱。人工智能在海
量数据中通过自我学习获取的能力,连其开发者都捉摸不透。平面化技术极力想
要消除神秘之物与不可知物,而如今,这二者重新回到了我们身边。那么,我们
能否重审启蒙在当下的崭新含义,设想某种“数字化的启蒙”呢?

Sybille Krämer

Media as Cultural Techniques:

From Inscribed Surfaces to Digital Interfaces

1.

Media create what they transmit. Marshall McLuhan’s “the medium is the
message,” Friedrich Kittler’s “only that which is switchable is at all,” and Jacques
Derrida’s “there is nothing outside the text” paved the way for an interpretation of
media as more or less autonomous agents of social and cultural life. Media construct
and constitute what they present. That is the foundational idea of a theoretical
movement during the last two decades of the 20th century. As a result, media were
admitted as legitimate objects of intellectual work in the humanities. Although there is
a wide range of differences in how media were endowed with autonomous power and
elevated into an instance of ultimate grounding, I would like to gather the proponents of
this movement in media theory under one label. The transformation of media into a quasi-
autonomous cultural agency will thus be referred to as "media fundamentalism."

Media fundamentalism interprets itself as a criticism of the sovereign power of
human subjects; nevertheless, it participates in the tradition of a prominent self-image
of the human being as "homo faber" and "homo generator." The reason is that, with the
Nietzschean and poststructuralist (Foucault) erosion of the concept of the human subject,
this constructive power and nearly autonomous agency were handed down to media. The
core of the media's cultural power to act consists in producing, fabricating, and
generating what they mediate. A relevant implication is that the inventive significance
of circulation, distribution, and transmission is devalued by this approach.

The following considerations question whether the shaping and constitutive strength of
media can be conceived theoretically, and secured with good arguments, without following
the position of media fundamentalism.

2.

In looking for a media philosophy that does not follow the autonomization of media, I
would like to introduce a model of mediality that can be characterized as the "messenger
model." Its initial situation involves the existence of two heterogeneous sides, fields, or
worlds, in between which a third is situated, whose role and function is to establish a
connection between the separated sides. The medium thus arises from a constellation
of thirdness, whereas social theory and Western philosophy usually introduce dual
relations such as "speaker and listener," "sender and receiver," and "subject and object"
as the foundational structure. Mediated relations are, from a media-theoretical
perspective, not dualistic but triangulated. The characteristic of the "messenger
model of mediality" is that the medium is understood not as autonomous but as
heteronomous. The messenger perspective stresses that a medium, too, is subject to
external constraints. The messenger always speaks with somebody else's voice.

To avoid any misunderstanding: the use of the term “messenger” is not an attempt
to personalize media. Nothing is as easily replaced by symbolic and/or technical means
as the messenger function. What matters here is only that the fundamental purpose of
media lies in mediating between heterogeneous worlds that are not accessible to one
another. The messenger function is usually defined as enabling or extending
communication between unconnected sides. However, if the role of media is
connection and transmission, then mediation has to be understood not only as enabling
communication but as a process of "making perceptible" (Wahrnehmbarmachen).
Media forge connections that make what is hidden visible and what is absent
present. The basic, primordial function of media is not representation but
presentation, in the sense of making something available to be looked at. The reason for
stressing the pivotal role of perceptibility is that a messenger does not speak in
the sense of speech act theory. Speech act theory assumes that speakers not only speak but
are responsible for the content of what they say. A messenger, by contrast, is discursively
powerless, because he or she is not responsible for what he or she was instructed
to tell. Rather, the messenger makes apparent, presents, and recalls what was told by
someone else and what happened somewhere else. Making perceptible is the basic
principle of being a medium, a third in between heterogeneous, distant fields.

3.

To make the invisible perceptible means to transform it radically. The medium
transfigures the information to be transmitted into a configuration of data that has to
conform to the constraints of the medium itself. This metamorphosis into the code of
the medium constitutes the formative part of media by virtue of which they not only
convey information but rather at the same time shape, condition, and finally even
constitute what they transmit.
We see: Distancing from media fundamentalism does not imply an invalidation
and renunciation of the generative aspect of medial functions. The relationship between
generation and transmission or production and mediation should be understood not as
mutually exclusive but rather as mutually dependent. This constructive power of the
medium is apparent in the trace the medium leaves behind on the content of what it
mediates.

To understand this, it is necessary to take into account a principle of all media
usage: properly functioning media disappear for the user as long as they are in use. Let
us take a closer look at this.

When media function smoothly, their physical materiality remains below the
threshold of perception. A "good medium" is invisible when in use. The content of a
speech is heard, but not the sound waves. We do not read single letters but a meaningful
text. The image must be turned around in order to see the canvas on which it is painted.
Media make something present through the process of their own withdrawal. The user
only becomes aware of the materiality of the medium when there is disorder and
disruption. All media, and not only digital media, thus have an immersive power.
They have the ability to make what they mediate seem unmediated. The German author
and philosopher Walter Benjamin called this "mediated immediacy."

The fact of the disappearing medium is also indicated by the etymological origin
of the word "medium." The "terminus medius" of classical logic is found in both premises
of a syllogism and establishes their connection, but it is extinguished in the concluding
sentence, the "conclusio": in "all humans are mortal; Socrates is human; therefore
Socrates is mortal," the middle term "human" links the premises yet vanishes from the
conclusion. The terminus medius thus enables syllogistic reasoning precisely by
withdrawing itself in the conclusion. The topos of the "dying messenger" (the runner in
Plutarch's tale who delivered the message of the Greeks' victory over the Persians in
490 BC) alludes to this issue like a media theory avant la lettre.

This leads to a preliminary conclusion: every use of media occurs in the field of
tension between the heteronomy of what is being mediated and the autonomy that
allows the content to be transfigured into a representational structure that is aligned to
the physicality and structure of the medium itself.

4.

Let us now get more concrete by looking at a class of graphic media, such as tables,
writing, graphs, diagrams, and maps. These media all involve the application of
inscribed and illustrated surfaces, which will be referred to as the cultural technique of
flattening.

We live in a three-dimensional world, yet we are constantly surrounded by
inscribed and illustrated surfaces. Artificial flatness is an everyday phenomenon – even
in cultural history. From an empirical perspective, there are no pure surfaces. By
drawing, writing, or storing, however, we act as if these surfaces have no depth: what
matters can be seen on the surface. Seen from an anthropological perspective, the
cultural technique of flattening is a relevant evolutionary tendency in our symbolic and
technical practices; it extends from cave paintings and skin tattoos to the invention of
writing, diagrams, and maps to computer screens, tablets, and smartphones. Nor should we
forget that "to be as flat as possible" has become a maxim of nearly all technical devices nowadays.

What is the reason for the success of artificial flatness?

Our sense of spatial orientation is grounded in the fundamental relationship between our
bodies and the living environment around us; because our bodies have three axes that are
perpendicular to one another, we distinguish between
over/under, right/left, and front/back. One of these axes is associated with a perceptual
deficit, as whatever lies behind us is not only invisible but also uncontrollable. The
technique of flattening involves projecting the two registers of right/left and over/under
onto a surface, while the distinction between front/back is eliminated. An artificial
space is produced in which everything that is inscribed or drawn can be overseen and
controlled. Flatness thus cancels the unobservable and uncontrollable “back” and
“below.”
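
In elementary terms (a shorthand added here for clarity, not Krämer's own notation),
flattening is the orthogonal projection that keeps the right/left and over/under registers
and discards depth:

\[
\pi:\ \mathbb{R}^3 \to \mathbb{R}^2, \qquad \pi(x, y, z) = (x, y),
\]

where the discarded z-axis stands for the front/back register.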

The fullness of the real world as well as the phantasms of fictional worlds thus
obtain an observable and manipulable form; things that are not yet or that can never be
(such as images of logically impossible objects) are made perceptible too.

An illustrated or inscribed surface can even become a laboratory of cognition as
well as a workshop for aesthetic experimentation and technical design. Artificial
flatness is a cultural achievement of the first order. Its aesthetic and cognitive
ramifications are obvious, yet surprisingly little studied. Just as the invention of the
wheel facilitated mobility and creativity in the world of the body, the invention of
artificial flatness facilitated mobility and creativity in the world of the mind.

Artificial flatness has a productive aesthetic and cognitive power: to write down
music changes what we can do with music; to produce a choreography modifies the
nature of dance; theatre and film mostly depend on scripts; and so on. In what follows, we
focus on the cognitive, the epistemic use of flatness.

Whenever we have to orient ourselves within a space of knowledge, the graphic projection
of complex content onto a surface makes invisible theoretical entities visible, as
relations and connections become viewable and complexity becomes manageable.
Simultaneity matters here: a synoptic overview is furnished, which allows operations
to be performed, so to say, with "paper and pencil." Every symbolic structure can be
restructured, and every configuration reconfigured.

Inscribed surfaces are used not only as instruments for visualizing information but
also as tools for operating and exploring the inscribed and visualized. When we do not
know our way around a foreign city, we can become oriented with the help of a map or
navigational device. We can transfer this operative principle into the realm of the
cognitive. Written and graphic notations help us to navigate spaces of knowledge in
much the same way. The cartographic impulse, which is familiar in the context of
moving in real spaces, can thus be transferred to intellectual activities in knowledge
spaces. The transformation of the cartographic impulse into a movement within intellectual
landscapes is the reason for the effectiveness of cognitively applied artificial flatness.

Let me give a philosophical and a mathematical example of the cognitive deployment of the
cultural technique of flattening.

5.

Plato's MENO dialogue is designed to show that knowledge is not a kind of entity
transferable from one person to another through language and telling, because it has to be
produced by the knowing individual him- or herself. This is demonstrated through the
situation of a mathematically uneducated slave boy. Socrates draws a square with two-foot
sides in the sand and tells the youth to double its area.

The boy first doubles the length of the sides of the square, but he recognizes that
this fourfold increase is too much. He then increases the length of the sides to three feet,
but, as he can see, this also produces a square that is more than twice as large. The
boy is puzzled and admits that he is irritated: "I don't know," he confesses to Socrates.
With the aid of further Socratic questions, in which Socrates does not communicate the
technique of doubling a square, and further geometrical drawings, the boy finally
recognizes that it is possible to double the area by constructing another square from the
diagonal.
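
The arithmetic behind the three attempts can be restated in one line (an algebraic gloss
added here; Plato's scene is, of course, purely diagrammatic). With a side of a = 2 feet,
the original area is a² = 4 square feet, so the target is 8:

\[
(2a)^2 = 16 \;\;\text{(a fourfold increase, too much)}, \qquad
3^2 = 9 \;\;\text{(still too much)}, \qquad
\bigl(a\sqrt{2}\bigr)^2 = 2a^2 = 8 .
\]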

What does this “diagrammatic primal scene” reveal? The first step is that the
engagement with the drawing involves the realization not of knowledge but rather of a
lack of knowledge. An intellectual mistake literally becomes visible, and the
perceptibility of this false assumption paves the way for the generation of positive
knowledge. The surface becomes the experimental field of this mathematical insight,
insofar as the drawing is always also revisable: everything that is illustrated can be
drawn differently. It is also clear that the act of working with diagrams is embedded in
dialogue. Image and text, or drawing and speech, are interconnected. There is no such
thing as a singular, context-independent diagram.

The Meno scene is not a singular diagrammatic event in Plato.

6.

Let us go to our mathematical example. The legend of the German mathematician
Carl Friedrich Gauß is instructive in this context. At the age of nine, the future
mathematician was reportedly given the task of determining the sum of the first one
hundred numbers, and unlike his fellow students he produced the correct answer within
minutes.

Look at this reconstruction:

(1) 1 + 2 + 3 + 4 + 5 + … + 97 + 98 + 99 + 100

By exchanging the positions of the numerals, he rearranged this sequence as follows:

(2) (1+100) + (2+99) + (3+98) + … + (49+52) + (50+51)

This resulted in an optical situation showing that the sum in each set of brackets was equal:

(3) (101) + (101) + … + (101) + (101)

Since there were 50 such sets of brackets, the answer was

(4) 101 × 50 = 5050
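
In modern notation (a gloss added for clarity; the general formula is not part of Krämer's
text), Gauß's pairing yields the closed form for any n:

\[
\sum_{k=1}^{n} k = \frac{n(n+1)}{2}, \qquad
\sum_{k=1}^{100} k = \frac{100 \cdot 101}{2} = 5050 .
\]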

This simple example demonstrates the cognitive benefit of using two-dimensional
spatiality. As theoretical entities, numbers have no spatial position; nobody has ever
seen a number. As numerals, however, their spatial positioning on the surface can become
a tool of arithmetic problem solving. According to the commutative and associative laws
of addition, the spatial arrangement of numerals as a linear sequence can be rearranged
and combined into groups. What is actually done on paper is a shifting of the position,
the locus, of the numerals; but this spatial operation produces a visual configuration
that makes the solution to the problem immediately obvious. Complex intellectual activity
is performed less mentally, and thus "within the head"; rather, it can be carried out
externally through the systematic manipulation of external symbols on paper. Eye, hand,
brain, and the medium work together, and the "mind" emerges through this triadic mediation.

7.

I have described media as occupying a third position as an intermediate "in-between."
What does this mean with regard to inscribed surfaces as media? The surface
constitutes a third between the one-dimensionality of time and the three-dimensionality
of space. A conversion occurs, as something that is temporally successive is drawn as
something that is spatially synchronous. Temporal sequentiality is transfigured into
spatial simultaneity and vice versa. This “transfiguration” of time into space and of
space into time is not simply a process of transference; rather, it implies a
metamorphosis. Consider, for example, alphabetic writing. When temporally
successive speech is converted into the spatially organized configuration of a text, the
written characters give rise to a new potential for which there is no model in spoken
language. For example: grammatical distinctions that remain hidden in speech are first
made visible through capital and lower-case letters, punctuation, etc. Phonetic writing
does not record speech; rather, it provides an analysis, so to say: a ‘cartography of
language’.

It is therefore the intermediate position between the one-dimensionality of time
and the three-dimensionality of space that enables artificial flatness to become a
medium. It is no coincidence that one of the first cultural uses of artificial flatness
was the sundial. Shadows flatten, and the ancient sundials are based on the epistemic
use of silhouettes. As we all know, the hours of the day are readable through the lengths
of the shadows cast by illuminated things: the shorter the shadows, the higher the
position of the sun. Vitruvius, a Roman architect and theoretician, describes the
functioning of an ancient sundial in his Ten Books on Architecture.1 A gnomon or
pointer is placed in a hole within a network of lines that is subtly constructed as a
diagram based on astronomical observations and mathematical calculations. The
shadow cast by the gnomon onto this network of lines, the analemma, allows the hours
of the day and the months of the year to be ascertained. What matters here is not a static
diagram but the movement of the shadow across the diagrammatic field.
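
The optics involved is elementary (my gloss; the symbols h, for gnomon height, and θ, for
solar elevation, are not Vitruvius's): a gnomon of height h under a sun standing at
elevation angle θ casts a shadow of length

\[
\ell = \frac{h}{\tan\theta} ,
\]

so the flat trace of the shadow encodes the sun's position, and with it the hour, on the
two-dimensional field of the analemma.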

Back to our general reflection: the interplay of time and space, the spatialization
of time makes it possible to write programs, notate musical scores, and prepare design
drawings that can be seen, read, and realized by others. Temporal performances thus
solidify into stable and transmissible spatial configurations, which can become fluid
through their implementation and then solidify once again into new stable structures.
And it is already apparent here that operative flatness facilitates not only the transfer
between space and time but also the mediation between the individual and the social.
For the inscribed surface introduces a form of visibility and operativity that is
always in the “we-mode” (Modus des Wir). It organizes mutual perceptions and
experiences. A contribution to the social cultural mind outside the head!

1 Vitruvius, De Architectura, 9.1.1ff.
However, there is yet another issue that illustrates the mediating aspect of
inscribed surfaces. Reasoning and intuition have been, at least since Immanuel Kant, two
distinct and irreducible sources of knowledge; yet written notations, scientific diagrams,
and graphs constitute an intermediate world that permits reasoning and intuition to be
connected. This can be illustrated using the example of the natural scientist,
mathematician, and philosopher J. H. Lambert (1728-1777).

Lambert wanted to calculate the deviation of the magnetic needle from the
geographic North Pole over time and in relation to Paris. To this end, he plotted the
observed data as points on a coordinate plane with the axes of space and time. He then
connected these points by drawing a curved line. What is important is that this line
embodied the general law of deviation. General laws cannot be seen. The induction
problem raises the difficult question of how something general can be derived at all
from something singular. Lambert solved this problem haptically by connecting the
points with a line and interpreting the line itself as the representation of a law. The
drawing hand thus fills in the gaps between the observed and the unobserved, and the
individual drawing provides a visualization of a general law. Lambert used the
inscription surface not only as an instrument of recording and storage, but also as an
instrument of analysis. New insights emerge through the interaction of point, line, and
plane. The paper becomes a mental laboratory that mediates between singular
perceptions and general concepts, between observation and theory. We do not think on
paper but with paper.
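
Lambert's procedure has an exact modern analogue (a minimal sketch of my own; the numbers
below are invented for illustration and are not Lambert's data): a smooth curve fitted
through scattered observations stands in for the unseen general law and fills the gaps
between them.

import numpy as np

# Hypothetical observations: magnetic declination (degrees) over time.
years = np.array([1580.0, 1610.0, 1640.0, 1670.0, 1700.0])
deviation = np.array([11.3, 10.1, 7.9, 5.2, 2.4])

# Fitting a quadratic: the drawn curve itself plays the role of the "law."
law = np.polyfit(years, deviation, deg=2)

# The law interpolates where no observation was ever made.
print(np.polyval(law, 1685.0))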

8.

To sum up: the invention of artificial two-dimensionality created a space of
overview, control, and operativity, as the graphic interaction of point, line, and plane
enabled the visualization and observation of theoretical concepts. This creative
potential was based on spatial arrangements. By transforming temporal succession into
spatial simultaneity and vice versa, an exploratory space for cognition, communication,
and computation emerged. By the way, it also produced a space for play, as there is
hardly any game that does not come with diagrammatic drawings.

However, the cultural technique of flattening changes with digitization. What happens when
the inscribed surface becomes a networked interface?

Insofar as inscribed surfaces evolve into networked interfaces and graphical user
interfaces control our interactions with computers, a new kind of depth comes into being,
in the form of an expanding universe of interacting machines and protocols behind the
screen. Rhizomatically, behind "smart usability" sprawls an invisible and uncontrollable
region of a resurgent "secret," a black box in the literal sense. Each piece of software
develops a "virtual machine" that is hidden from those working with the software. The
skills that computers acquire inductively from huge datasets through the self-learning
programs (deep learning) of Artificial Intelligence remain unclear, in the "how" of their
acquired rules and routines, even to their developers. And the multiple data traces left
by users on the net and on social media, commercially exploited by profiling and
behavioral-prediction algorithms, are usually beyond the reach of their creators.
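
To make the point concrete, here is a minimal sketch (my illustration, not Krämer's; it
assumes the scikit-learn library and a toy dataset): even in a miniature case, the trained
network's parameters can be printed and inspected in full, yet the rule they jointly embody
cannot simply be read off them.

from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# A toy dataset and a small neural network trained on it.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X, y)

print(clf.score(X, y))   # the acquired skill is measurable...
print(clf.coefs_[0])     # ...but the weights that carry it remain opaque

At industrial scale this opacity only deepens; the black box behind the interface is the
same situation writ large.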

The European Enlightenment was bound up with the promise of transparency and
control that, in media perspective, the device of artificial flatness offered. But if
surfaces evolve into interconnected interfaces and transfigure into black boxes, we
witness a return of the withdrawn, of the secret, of the unknown, which the cultural
technique of flattening tried to eliminate. Do we have to think about a new idea of
enlightenment, to create a "digital enlightenment"?

文化延续与人文科学再定义:大学于 21 世纪全球化

社会之角色

吉见俊哉(Shunya Yoshimi)

在信息大爆炸时代,“大学”概念必然面临质疑与再定义。今日全球大学数量
已逾 15000 所(日本 780 所,韩国 200 所,中国 1800 所,美国 4200 所,俄罗斯
1000 所等),这些大学及其学生如何幸存于此信息大爆炸时代?互联网的无限
延展使我们得以轻易藉由谷歌、维基百科、电子图书系统等诸多路径抵达知识金
字塔之巅,“学院知识”概念备受互联网社会发展的激烈挑战。

就此,我将在讲演中首先阐明 16 世纪与 21 世纪诸多相似之处,尤其就传播


与交通革命而言。16 世纪的地理大发现与印刷革命,使信息量暴增,世人获知途
径远超前代。生活在 21 世纪全球化与数字革命时代的我们,则正面临前所未有
的知识巨量。

继而,我将讨论人文与社会科学话语在当下高科技社会中变动不居的位置。
2015 年夏,我们曾就人文与社会科学话语在当今大学教育中的重要性/非重要性
展开激烈争论,此次我将重申并进而阐明大学教育必须为其留一席之地的诸种原
因。人文与社会科学之重要性在于鼓励人们批判既有价值体系,从而使得细辨新
兴价值与社会去向成为可能。例如,人文/社会科学与工程/自然科学之联合必须
建基于对未来社会“时间”结构的谋划之上。

Shunya Yoshimi

Cultural Sustainability and the Redefinition of Humanities:
The Role of University in the 21st Century Globalized Society

Abstract: In the age of information explosion, the concept of the university
unavoidably faces being problematized and redefined. Today, there are already more
than 15,000 universities in the world (Japan 780, Korea 200, China 1,800, USA 4,200,
Russia 1,000, etc.). How can these universities and their students survive in the age of
information explosion? With the expansion of the internet, we can easily access
myriads of knowledge through Google, Wikipedia, e-library systems, etc. The concept
of academic knowledge is now being radically changed by the progress of the internet-based
society.

In this lecture, I will first point out the similarity between the 16th and
the 21st centuries, especially in terms of the communication and transportation revolutions.
In the 16th century, the age of Discovery and the Printing Revolution, information
exploded and people began to access far more knowledge than in previous centuries. In
the 21st century, the age of Globalization and the Digital Revolution, people
are now beginning to access a huge amount of knowledge.

Then I will discuss the changing location of the discourses of the humanities and
social sciences in a highly advanced technological society. In the summer of 2015,
we had big debates on the significance or non-significance of the humanities and social
sciences in university education. While explaining the points of those debates, I am
going to shed light on the reasons why we should not give up the humanities and social
sciences in university education. They are very "useful" because they can make
people think about the new values and purposes of society by criticizing the
already established value system that has been taken for granted. So, for example,
the collaboration between the humanities/social sciences and engineering/natural sciences
needs to be based on the design of the structure of "time" in the future society.

死亡凸显对内疚和羞耻的影响及其神经机制

徐振华,刘 超

恐惧管理理论认为当人们面对死亡时,个体的思想、态度和行为会发生改
变。很多研究表明死亡凸显会影响人的社会行为,但是死亡凸显如何影响人的
情绪感知以及其背后的神经机制尚不清楚。本研究关注内疚和羞耻两种自我意
识情绪,应用 fMRI 技术探索死亡凸显如何影响内疚和羞耻情绪的加工过程。被
试先在网上填写问卷,写下自己经历过的感到内疚、羞耻的事件以及中性情绪
事件,来到实验室之后随机进行死亡启动(死亡凸显组)或负性情绪启动(对
照组),之后被试对内疚事件、羞耻事件以及中性事件进行回忆。我们发现,
不管是对于内疚情绪还是羞耻情绪,死亡启动组被试表现出更强的腹内侧前额
叶激活。进一步分析表明,相较于回忆中性事件,回忆内疚事件时死亡凸显增
强了腹内侧前额叶和楔前叶、颞中回的功能连接。而回忆羞耻事件时,死亡凸
显则减弱了腹内侧前额叶与楔前叶以及后扣带回皮层的功能连接。死亡凸显对
内疚和羞耻情绪产生了不同的调节机制。

Xu Zhenhua, Liu Chao

Effect of mortality salience on the neurocognitive processing of guilt and shame

Abstract: Mortality salience impacts many kinds of human social behaviors,
generating various adaptive phenomena. However, how being reminded of death
influences emotional experience remains poorly understood. Some research has found that
mortality salience modulates individual concerns about self and others. This study
focused on two important self-conscious emotions involving self and others: guilt and
shame. Using fMRI, we investigated how mortality salience affected the
neural processing of these two emotions. After mortality salience priming or negative
affect priming, participants recalled the shame, guilt, and neutral events they had
written down in an online questionnaire before coming to the laboratory, and relived
the emotions in those events. We found that, whether comparing the guilt condition with
the neutral condition or the shame condition with the neutral condition, the mortality
salience group showed greater activation within the ventromedial prefrontal cortex
(vmPFC) than the control group. Further psychophysiological interaction (PPI) analysis
revealed that mortality salience increased vmPFC connectivity with the precuneus and the
middle temporal gyrus (MTG) in the guilt condition compared with the neutral condition,
but decreased vmPFC connectivity with the precuneus and the posterior cingulate cortex
(PCC) in the shame condition compared with the neutral condition. Our findings
demonstrate that mortality salience triggers different emotion-regulation mechanisms
for guilt and shame.

人机交互领域的转变:从数据具身化到经验资本主义

桑普森(Tony D. Sampson)

本文将提出一种新的人机交互批评理论(critical HCI)以重新检验该领域
存在的诸多假设与缺失。正如哈里森所论,人机交互正在由一种认知理论框架转
变为一种对用户体验的现象学式理解。这既是人机交互领域的第三种研究范式,
也正如苏珊娜·博德克(Susanne Bødker)所指出的,这是第三波人机交互浪潮。
尽管这种对用户体验的高度关注开启了学术研究领域的多种新路径,但是本文将
在数码环境中着重关注基于任务型(task-based)数字工作和使用环境(use
context)的传统人机交互学科,与对消费者体验兴趣日渐浓厚的商业活动之间
的 独 特关联。人机交互批评理 论将以两种相互关联的方式阐述经验 / 体验
(experience)问题。 一方面,该理论将探索市场逻辑在促使用户体验发生作
用时所扮演的角色。另一方面,该理论将与对经验/体验(experience)的本体
论理解相结合。实际上,对经验/体验的本体论理解已经被人机交互领域以现象
学矩阵的方式所认知。在总结部分,本文将通过引入 A. N. 怀特海德的相关论
述对“经验”(experience)予以重新认识。本文认为,“经验”使得本体论问
题(ontological concerns)与“经验资本主义”(experience capitalism)
这一更为宽广的哲学概念相互关联。

Tony D. Sampson

Transitions in Human Computer Interaction:
From Data Embodiment to Experience Capitalism

Introduction: The Politics and Philosophy of Critical-HCI

The intention of this article is to develop a critical theory of human computer
interaction (critical-HCI) that tests some of the assumptions and omissions made in the
field as it transitions from a cognitive theoretical frame to a phenomenological
understanding of user experience, described by Harrison et al (2007) as a third research
paradigm and similarly by Bødker (2006, 2015) as third-wave HCI. As a significant
constituent of twenty-first-century trends in HCI, the focus on experience has provided
some novel avenues of enquiry focused on embodied interactions (Dourish 1999; 2004),
felt experiences (Wright and McCarthy 2004), emotions and affect (Norman 2004;
Picard 1997) grasped in ever more pervasive and smart technological contexts of use
(e.g. Kuniavsky 2010). Nonetheless, this article contends that interest in experience
does more than simply address new use contexts in academic circles. It also draws
attention to a distinct bridge between conventional HCI disciplinary concerns with
predominantly task based digital work and a growing business interest in consumer
experiences in digital environments. Indeed, as the notion of the user experience (UX)
becomes embedded in the HCI curriculum, commercial practices and the operational
level of digital media, it simultaneously develops into a powerful marketing tool that
business enterprises readily utilize in order to tap into experiential triggers that establish,
some argue, cognitive, emotional and visceral engagements between consumers and the
digital commodities, services and brands they consume (Norman 2004).

It is my further contention that the problem of experience needs to be addressed
by critical-HCI in two interrelated ways. On one hand, a critical approach needs to
explore the role market logic plays in putting user experiences to work, what I go on
to call in this article experience capitalism: a term closely related to notions of an
experience economy. This is an economic model that ushers in new experiential
contexts for user/consumer interactions with the marketplace increasingly accessed
through pervasive digital media technologies with enhanced operational capacities.

Here we find a significant and potentially reciprocal overlap between established media
theory critiques of the political economy in which digital communication technologies
are operative and the need for critical-HCI. On the other, critical-HCI needs to fully
engage with ontological understandings of experience hitherto realized in HCI by way
of a phenomenological matrix (Harrison et al 2007). The idea is to test the limits of this
matrix by drawing on an alternative philosophy of experience, which, I argue, helps
critical-HCI to more effectively approach ontological transitions to new technological
contexts of interaction. This means bringing in an old thinker (A.N. Whitehead) to
consider experience in novel ways that relate ontological concerns to this broader
political concept (and persistence) of experience capitalism.

What is at stake in this political-philosophical enquiry is the status of human
consciousness as understood by, on one hand, current phenomenological HCI, and on
the other, the nonbifurcated theory of experience Whitehead (2004) conceived of in the
1920s. The twofold problem that consequently emerges from this dual venture concerns
the extent to which experience of twenty-first-century digital media systems can be
regarded as under the spell of subjective minds, or alternatively, conceived of as a
production of subjective experience composed in the durational events of interaction.
The article concludes by asking whether, as one post-phenomenological media
theorist assumes, the ontological status of a once privileged human experience of media
is somehow cut out of the loop between user interaction and operational media (Hansen
2015), or whether, following Whitehead's nonbifurcated adventure, we can conceive of a
politics of experience in which the mindful experience of (and human interaction with)
the external world is regarded as inseparable from the durational passage of events?

The Three Paradigms of HCI Revisited

This article marks a development of an earlier critical-HCI focus on the efficiency
analysis that runs seamlessly through the three paradigms of HCI (Sampson 2016, 45-
74). To briefly recap on this work: it is important to note that each paradigm is defined
by Harrison et al (2007) according to three distinct metaphors of interaction. The first
concerns the body/machine couplings developed in a predominantly
engineering/pragmatic focus on ergonomic design (let us call this the ergonomic
paradigm). The second (the cognitive paradigm) is arrived at through the influence of
cognitive psychology and a theoretical framework developed around the
mind/computer metaphor. The third paradigm (my main focus here) is informed by a
number of trends in HCI research including phenomenological arrived at notions of
embodied interaction, a neuroscientific leaning toward the role of emotions, feelings
and affect in cognitive computer work and recognition of new technological use
contexts brought about by innovations in pervasive computing, for example. For
reasons that will become apparent, I have replaced the notion of a phenomenological

127
matrix with a catchall name for this recent shift in focus: the experience paradigm.
However, following my earlier approach to efficiency analysis in each paradigm, I will
similarly argue here that experience is not simply the defining factor of a third paradigm
of computer interaction, but can be traced through all three paradigms as they each
endeavour to capture the variations of experience in different ways. So unlike Bødker
(2006), for example, who argues for a discontinuity between a second paradigm related
to computer work efficiency and a third all about online consumer experience, I note a
continuity apparent in the efficiency analysis of work and consumption in which
experiences are similarly put to work. Indeed, in addition to the contextual political and
philosophical discussion below, the article will also set out a nascent agenda for a
critical-HCI events-based analysis of each paradigm focused on an alternative concept
of experience informed by Whitehead.

Part One: A Political Economy of Experience

This first section brings in a political perspective intended to address a general
omission in HCI research concerning the user experience; that is to say, it draws
attention to the role capitalism plays in shaping a new alienating economic space of
commodity production developing around shared experiences and the increasing
ubiquity of an operational level of digital technology intended to capture, cultivate and
put experiences to work. To begin with, the politics of user experience needs to be
couched in discussions concerned with what has been termed the experience economy
(Pine and Gilmore 2010). This is an economic model intimately related to
developmental trends in HCI research and its wider relation to a burgeoning UX
industry. To be sure, it is the very foundation on which the aforementioned bridge
between HCI and business has been constructed. It would seem that whereas earlier
HCI research paradigms were dependent on the metaphorical coupling of human bodies
and minds to machines in the digital workplace, a fresh focus on experience shifts ever
more toward understanding the processing of emotional, affective and felt experiences
with new digital communication contexts involved in work and consumption. As
follows, the experience economy is composed of a digital circuitry linking together
workers, consumers and business in ways that are assumed to owe more to the aesthetics
of a Walt Disney theme park or a theatrical production than to Henry Ford's factory model
(Pine and Gilmore 2010, 56).

The origins of the experience economy have been traced back to Alvin Toffler’s
1970 book, Future Shock, and a chapter therein titled “The Experience Makers” which
prophesies where the economy is heading after the exhaustion of the service industries
(Pine and Gilmore, 2013). It is here that Toffler (1970, 208-09) first introduces the idea
of the experience industries.

[The experience industries are] a revolutionary expansion of certain
industries whose sole output consists not of manufactured goods, nor
even ordinary services, but pre-programmed ‘experiences’. The
experience industry could turn out to be one of the pillars of super-
industrialism, the very foundation, in fact, of the post-service
economy… the experience industry of the future and the great
psychological corporations, or psych-corps... will dominate.

A similar theme emerges in the field of consumer research in the early 1980s
where Holbrook and Hirschman (1982, 132-40) argue for "an experiential view" of
consumption focused on the symbolic, hedonic (the pursuit of fantasies, feelings, and
fun), and aesthetic aspects of the consumption experience. It is in 1999, nonetheless, that
Pine and Gilmore (2010), seemingly unaware of Toffler's futurology, introduce a
notion of the experience economy that can now be concretely related to the current
digital landscape. As follows, the twenty-first-century expansion of the UX industry (a
convergence of interaction design and marketing akin to Toffler’s psych-corps) can
indeed be grasped as a major component of a political economy of experience marked
by a shift from commodities, factory goods, and services to the added value of
experiential consumption increasingly associated with industrial scale operations in a
digitalized environment.

Following the experience economy model, the added value of digital experiences
can, on one hand, include conventional commodities, goods and services readily
transformed into new experiences realized through design, branding and marketing.
The point is that the experience economy is more attuned to the idea that it is the
experience itself that often captivates user-consumer attention, leading to emotional
engagements and the all-important purchase intent (Norman 2004). At its most deep-
seated though, on the other hand, there is a commodification of experiences that do not
refer back to a tangible product or service. The design of smartphone interactions with
social media is apposite here. The value extracted from user interactions with social
media apps, for example, does not appear to relate in any palpable way to a conventional
product, but instead extracts value from the experience of social interaction. It is this
digital transformation of commodity production that arguably leads to a business need
to realize value in newly mediated interactions and experiences related to social context.
It is indeed the work of the UX industry, composed of UX consultants, interaction
designers, information architects, ethnographers, behavioural psychologists, big data
researchers, coders, biofeedback experts, network strategists and online marketers to
produce the sensory environments in which shared experiences can be captured,
cultivated and exploited.

The UX industry is able to draw on the resourceful expertise of a range of
specialists to prime sensory environments in which experiences might occur, but no one
person or business enterprise produces experience. To be sure, the broader concept of
experience capitalism emerges from research into (and extracting value from) what is
already in action. Borrowing from Langlois and Elmer’s (2013) approach to corporate
social media, we might say that what experience capitalism does is more closely aligned
to the patterning of experience, and I might add, significantly focused on the relational
aspects of interaction and the capacity of machines to learn from social context rather
than individual subjective experience. Here we can see how Pine and Gilmore’s (2010)
Erving Goffman inspired theatre productions are perhaps expanded to a point where the
capture of the performance of experience moves beyond any one locatable subjective
viewpoint to the massive-scale automations of experience gathering. As these big data
captures become more pervasively realized through the invention of ubiquitous
computer technologies, the subjective experience – described by Goffman as the
presentation of self, is, as Greenfield (2006) argues, increasingly teased out into the
public domain. That is to say, human subjectivity is not the producer of experience
(indeed, as I will contend below, it never has been). On the contrary, experience
capitalism persists in a world full of social media apps, relational databases, sensors
and computerized things that process experiences in which subjectivities are constantly
being made.

We can see the extent to which this economic shift toward experience steadily
dovetails with the three paradigms of HCI. Ostensibly, the pragmatic concerns of
early designers of computing systems demonstrated very little regard for the user
experience beyond a Tayloristic concern with bodily fatigue associated with
inefficiencies in the workplace. However, the eventual introduction of social factors
into ergonomics, followed by a conceptual move to a second paradigm underpinned by
cognitive psychology and the centrality of the informational metaphor of mind/computer
coupling, marked an increasing transition toward a focus on user needs, for example
through usability studies. The subsequent development of user-related services, like user testing,
heralds a distinctive trend toward incorporating elements of use initially focused on
cognitive processes of memory, attention and perception, but latterly incorporating user
motivation, frustration and satisfaction, requiring some knowledge of emotions,
feelings and affect. This trend can perhaps be seen as a precursor to third paradigm
concerns with the processing of felt experience, including previously marginalized
research questions, such as, what is fun (Harrison et al 2007).

To fully understand the bridge that spans HCI and the experience economy, there
is a need to look more closely at two components of third paradigm research. Firstly,
there are fresh concerns with the role emotions, affect and feelings play in the
processing of experience. Secondly, the research focus shifts towards exploring new
pervasive contexts of computing use. It is my contention here that while much attention

has been given to the undoubted importance of these two components of third paradigm
HCI (e.g. Boehner et al 2007), there is a further need to explore how each becomes
interwoven with the experience economy.

Processing Experience through Emotions, Feelings and Affect

The third paradigm marks the significant appearance of emotion in HCI research
as it emerges from its marginal positioning in the cognitive paradigm. Most notably this
interest in emotion stems from the HCI related affective computing research carried out
by Rosalind Picard (1997) at MIT, as well as the work of HCI and UX guru, Don
Norman (2004), whose influential emotional design thesis borrows from neuroscientific
ideas concerning the so-called emotional brain thesis to inform a model of experience
processing. According to Norman (2004: 21-24) experience is processed through three
interconnected levels: reflective (cognitive), behavioural (use) and visceral (affective).
This approach does not however go unchallenged in HCI. To be sure, Harrison et al
(2007) draw attention to a “wide range of [opposing] approaches to emotion”
including challenges to the “central role” it is assumed to play in cognition as a kind of
“information flow.” In contrast, there is a rejection of the “equation of emotion with
information” in favour of an “interpretation and co-construction of emotion in action
[and interaction]” (Harrison 2007). The transition from second to third paradigm HCI
research plays a key role in how these opposing conceptions of emotional experience
take shape. To begin with, the accusation against Norman's model of experience
processing is that it (a) remains stuck with one foot firmly in the cognitive paradigm
and its tendency to reduce experience to the internal processor (and rationality) of the
individual user's mind (i.e. the cognitive mind/computer metaphor), and (b) tends to
counterpose cognition and emotion. A second conception of emotional experience therefore
emerges, referenced back to Wittgenstein, which argues that emotions are not the
opposite of cognition but, like cognition, are made in social and cultural
interactions. Indeed, Boehner et al (2007) argue for a culturally grounded understanding
of emotional experience in HCI research that recognizes the dynamics of shared
experience socially constructed in action and interaction.

Experiencing the Internet of Things

Following fairly recent discourses from the technology sector, we can see how the
digitized experience economy has the potential to considerably expand beyond the
current wave of social computing to the Internet of Things (IoT). We may indeed
already have one foot firmly standing in a future wherein experiential data, mostly
captured today by way of conventional computing devices like PCs, mobile tablets and
smart phones, are being gathered from interactions with pervasive computing in every

conceivable location, everywhere and at any time. To be sure, experiences are already
being captured through interactions with everyday things like cars and so-called
wearables (fitness gadgets and training shoes, watches etc.), and now other things, like
kettles, mirrors, speakers, furniture, pavements, and streetlamps are fast becoming
computational devices. There are a number of implications for the growth of the
experience economy (and the focus of HCI research) in terms of the changing
spatiotemporal experience of computing. To begin with, the disappearance of the
conventional graphical user interface (GUI) and dissolving of computer power into
these everyday objects will alter the way the subject/object relation with technology is
approached. Encounters with IoT will be triggered by non-task interactions, fleeting
moments of contact, often hidden from users, and even accidentally engendered
interaction. Furthermore, biometric detection systems could potentially capture data
about the affective valence of the body. Here the capacity of facial recognition software,
for example, to detect emotional responses to environmental stimuli comes into play.
Secondly, pervasive computing challenges the way cognitive processes, like memory,
perception and attention, have been conventionally studied in HCI. For instance,
although generally considered as an augmentation of memory, media technology can
capture past experiences, lost to memory in the complex passage and variation of events,
so that they can be prompted back into action in the present. In other words, via machine
learning technologies, forgotten experiences can work in the background to generate
inferred experiential performances (Blackwell, 2015) that become perceptible in the
here and now of the experience economy. Thirdly, although the capture of entangled
experiences relating to animals, landscape and climate is already yielding a kind of
nonhuman experiential data, the pervasive operational level of computing may well
threaten the status of an assumed human-centred, conscious experience (Hansen 2015).

The Phenomenological Matrix

Harrison et al (2007) contend that the changing digital environment draws our
attention to the importance of embodiment in third paradigm HCI research. How we
come to “understand the world, ourselves, and interaction” in these new contexts
crucially derives, they argue, “from our location in a physical and social world as
embodied actors" (Harrison et al 2007). Embodied interaction has thus become one of the
major concerns of HCI, and to understand it researchers have turned to
phenomenology. Dourish (1999; 2004), for example, sees these new contexts as
intimately linked to the technological changes he first observed in the latter part of the
twentieth century. To begin with, in the 1970s, GUI technology introduced a
visualization of computing that prompted a representational turn in the study of
interaction typified by cognitive task-based testing and mental models utilized in the
cognitive paradigm. Yet by the 80s the growth in digital network communication adds
new importance to the social in interaction design, prompting a trend in research toward
analysing distributed notions of cognition. Subsequently, in the 90s, when computing
first begins to break out of the screen and make its way into the physical environment
in the shape of tangible technologies, attention is drawn toward the limits of the
cognitive approach. It is indeed these two latter developments in the context of
computer use (social and tangible) that, Dourish (2004, 15-22) argues, require a new
HCI framework focused on embodiment and grasped through the twentieth century
phenomenological tradition.

Embodiment is defined in a way that makes it useful to the HCI researcher because
it provides a “property of being manifest in and of the every-day world” in which
interactions take place (Dourish 1999). This property is not, however, simply restricted
to physical things, like computers or mobile devices, but can include participatory
patterns, like conversations between “two equally embodied people” set against “a
backdrop of an equally embodied set of relationships, actions, assessments and
understandings” (Dourish 1999). This backdrop owes an initial debt to Husserl’s
phenomenology, insofar as it is seen as part of a transition away from an experience of
the world grasped through the realm of abstract ideas (idealism) to one derived from
the experience of concrete phenomena. However, importantly, more attention is given
to Heidegger and Merleau-Ponty in third paradigm HCI research. In the first instance,
Heidegger famously tried to escape Husserl’s “mentalistic model that placed the focus
of experience in the head” (Dourish, 1999). This is, evidently, important to the third
paradigm’s similar transition from the cognitive realm of mental modelling to
embodied interaction whereby interaction is no longer considered in the head (or mind),
“but out in the world… that is already organised in terms of meaning and purpose”
(Dourish 2004, 108). Indeed, Heidegger's ontological worldview is not taken as a given;
it arises through interaction (Dourish 1999).

Dourish is not the first to utilize Heidegger for HCI purposes. Below he uses
Winograd and Flores's (1986) adoption of the phenomenological distinction between
"ready-to-hand" and "present-at-hand" to explain a distinctly first-paradigm experience.

[C]onsider the mouse connected to my computer. Much of the time,
I act through the mouse; the mouse is an extension of my hand as I select
objects, operate menus and so forth. The mouse is, in Heidegger’s terms,
ready-to-hand. Sometimes, however, for instance on those occasions
when I reach the edge of the mousepad and cannot move the mouse
further, my orientation towards the mouse changes; now, I become
conscious of the mouse mediating my action, and the mouse becomes
the object of my attention as I pick it up and move it back to the centre
of the mouse-pad. When I act on the mouse in this way, being mindful
of it as an object of my activity, the mouse is present-at-hand (Dourish
2011, 109).

This switching between automatic interaction and mindful attention suggests that
the mouse only really exists because of the way it becomes present-at-hand through
embodied interaction. The point is that the mindful activity of using the mouse is
constitutive of ontology, not independent of it (Dourish, 1999). The mouse comes into
being in the mind because, it would seem, it is part of an embodied experience of being
in the world. Indeed, this notion of mindful embodiment is developed further, Dourish
(2004, 114) notes, by Dreyfus (1996) who brings in the phenomenology of perception
developed by Maurice Merleau-Ponty (1962). Here we find that perception itself is
an active process, carried out by an embodied subject. As a result, third paradigm HCI
research begins to focus on a somewhat dualistic distinction between the “physical
embodiment of a human subject, with legs and arms, and of a certain size and shape”
and a "cultural world" from which subjects extract meaning (Dourish 2004, 114).
From this stance the importance of developing “bodily skills and situational responses,”
alongside mindful acts (or “cultural skills”), which in turn respond to the user’s
embeddedness in this “cultural world,” comes to the fore (Dourish 1999). It is in
between bodily and mindful interactions that abilities and understandings of computing
are developed. There is also a considerable social component to this notion of
interaction. On one hand then, we find the presence of the phenomenological body of
the user-subject, who, on the other hand, simultaneously becomes the “objective body”
experienced and understood by others in the cultural worlds they encounter (Dourish
2004, 115). From this point on, HCI researchers start to draw on Merleau-Ponty’s
phenomenal perception of embodied and cultural worlds to develop, for example, “a
taxonomy of embodied actions for the analysis of group activity” (Dourish 2004, 115;
Robertson 1997).

Although it escapes Husserl’s mental prison of the head to explain how experience emerges from human interaction with the world, HCI phenomenology keeps human perception stubbornly (and problematically) central to its ontology. Whether in the head or embodied in the world, it begins with the notion that it is the human who has the experience. In other words, where the action is can be grasped ontologically as it is sensed by the human (in the head, in the hand, or through some other bodily interaction). So why use Whitehead to challenge such a position, and what tools can we take from this radical departure from the phenomenological tradition?

Part Two: A Whiteheadian Adventure in HCI

A Whiteheadian adventure in HCI offers a challenging but also profound alternative concept of experience that illuminates these emerging use contexts in new
ways distinct from a phenomenological approach that has thus far situated minds and
bodies in a bifurcated relation to environmental experience (Whitehead 2004). This is
Whitehead’s (2004) ostensibly uncanny notion that experience did not start with
subjective human consciousness. That is to say, the world, and the cosmos it floats in,
did not simply begin with the arrival of human awareness. Indeed, it is not human
consciousness that draws attention to experience. It is, on the contrary, experience that
draws attention to an anomalous worldview limited by its own perception of the here
and now. For Whitehead, it is important to avoid a solipsistic theory of mindful perception that erroneously bifurcates mind from the concreteness of the passage of nature from which it emerged. Whitehead’s adventure therefore offers a constraining
philosophical point of departure since it is not phenomenal human consciousness that
sheds light on experience, but experience in the actual world that draws attention to the
aberration that is human consciousness. In other words, it is very important that the
place and time (the here and now) of interaction is no longer simply understood as an
anthropomorphic phenomenal experience, but rather grasped through a set of tools that
refuse the bifurcation between mind and the nature of what is experienced. Accordingly, in Whitehead’s early process philosophy, the embodied location of points in time and positions in space suggested by the phenomenological matrix is not regarded as a well-formulated problem, since it overlooks the complex “temporal thickness” and intensity of the durational quality of the actual occasions (or events) of experience (Whitehead 2004, 56).

Of course, HCI researchers may well want to question the value of an approach to
HCI that side-lines the human, or more specifically, human consciousness. However,
this stance is important to critical-HCI because the transient perception of the subject-
user of the here and now of experience only represents a small slice of the passage of
events occurring in the actual world. Arguably, therefore, the focus on human perception neglects to grasp the full extent of the shift to the experience economy and the changes to the technological infrastructure that newly redefine where the action is. This is not,
however, an approach that is dead set against perception. But perception needs to be
seen as only taking into account what occurs (Stengers 2014, 147). This is not the same
as saying that perception produces reality. Perception does not decide if things are more
or less real! That is to say, embodied interaction only goes as far as declaring mere
instants of percipient, and sometimes specious, events in experience. What the
adventure profoundly tells us is that it is, inversely, the process of reality that produces
subjectivity.

Analytical Tools for Nonbifurcated Experience

In a nutshell, Whitehead helps us rethink the status of human consciousness in HCI. While the phenomenologist brings in a bifurcation between the perceiving human
mind, embodiment and experience in the actual world, a Whiteheadian adventure
eschews theories that force such a bifurcation. The phenomenologist, for example, takes
what is experienced in the actual world as the here and now. What is ready-to-hand, for instance, becomes a position in space and a point in time from which meanings can be constructed from what is present-at-hand. But this perception of the here and now of
experience is, following Whitehead, an often misplaced abstraction of a far more
complex relation to reality experienced through a concrete passage of events. For
Whitehead, then, the data of experience are not in the mind. The actual world is not
apprehended by the mind; on the contrary the mind is part of the passage of events in
the actual world. Significantly though, it is not that mindfulness does not exist;
evidently, it does, but the mind only has a “foothold” in experience rather than a
“command post” (Stengers 2014, 67).

Whitehead was determined not to limit his philosophical outlook to theories that
made such a bifurcation happen. He looked, as such, to develop new concepts of
experience that are not exclusively the property of human perception, but rather
inclusive and interlocked with the actual world humans are a part of. Of course, this is
a complex task. It is necessary, first, to undo the subject-predicated philosophies developed over epochs of human consciousness; to completely disengage from the solipsistic sense that humans are the masters of subjectivity when it comes to observing real material substances or formulating the ideas that describe them. It also means
overcoming the language games we have absorbed into our minds that explain our
subjective experience of the real world in such limited ways. Second, and clearly related
to HCI, we need to challenge the rigidity of subject-object relations as the only way to
think about the ontology of spatial interaction, and, third, Whitehead prompts us to
move beyond purely spatial concepts of interaction to radically approach experience in
terms of the passage of events.

Freeing Subjects and Objects from the Syntax Trap

The Whiteheadian adventure asks us to test the limits of language and redesign it, much as the tools of physics are redesigned to better probe the dynamics of the actual world. As Whitehead contends, language was designed to handle a static
world and fails, as such, to express the dynamics of reality (Urban 1951, 304). For
example, in his endeavour to refuse bifurcation Whitehead criticized the orthodox
concept of “having an experience” of an object, since it is erroneously determined by the subject-predicate mould. That is to say, the subject (the knower) is always
situated by the experience of the object (the known). As Victor Lowe (1951, 106)
argues, the subject-predicate mould is “stamped on the face of experience” so that the
experient is the subject who is always qualified by the sensations of the objective world.
This is how language traps experience in the unidirectional relation between the private
subject and the public object.

Whitehead’s intervention into the trappings of language is of use to critical-HCI for two main reasons. Firstly, we see how the subject-predicate trap is already at work
in the research focus on situated interactions where, for example, it might be said that
the user experiences the smooth ergonomics of the mouse so that the subject-user is
situated by their experience of the public object. As a counterintuitive alternative,
Whiteheadian subjects can be made into objects, and inversely, objects into subjects.
The notion that objects can experience subjects, as is the case when a well-designed
mouse experiences the hand of the user, should not perhaps be an entirely alien design
concept in tangible computing or ergonomics. But, by drawing on Whitehead’s
reinvention of terms like feeling, emotion, satisfaction and enjoyment, theorists are able
to develop effective ways to account for the relationalities of experience not yet
adequately realized, so that it might be possible to conjure up a concept of the mouse
feeling the warmth of the user’s hand. The subject does not simply know the object, but
is provoked into knowing by the experience of the object. Furthermore, in the new IoT contexts of interaction, an object that a user encounters can become the subject of
interaction. It might be the case then, as Hayles (2009) similarly argues, that in twenty-
first-century media subject agency has ceded control to the technological object; that is
to say, the binary divide between active, communicative subjects and passive, silent,
fixed objects no longer works. HCI researchers may also have to take into account
objects that have become sociable (Mitew 2014), sidestepping human awareness or
taking the place of humans altogether. Ultimately though, rethinking experience as neither predicated by subject nor object makes way for immanent relations in which subjective forces are not predetermined as the knowers of objects; attention falls instead on the shifting relations in which each experiences the other.

Secondly, in Whitehead, we encounter a viable alternative to Heidegger’s solution to Husserl’s problematic concept of experience as locked inside the head, wherein experience is said to be “the self-enjoyment of being one among many, and of being one arising out of the composition of the many” (Whitehead 1985, 145). This is not a
self-satisfying moment in time beginning in the head, brain, mind or body. Experience
may indeed be related to human activities of the brain, mind or body, but these activities cannot be decoupled from the interlocking relations of the actual world. As Whitehead (cited in Dewey 1951, 644; emphasis added by Dewey) puts it:

[W]e cannot determine with what molecules the brain begins and the rest of the body ends. Further, we cannot tell with what molecules the body ends and the external world begins. The truth is that the brain is continuous with the body, and the body is continuous with the rest of the natural world. Human experience is an act of self-origination including the whole of nature, limited to the perspective of a focal region, located within the body, but not necessarily persisting in any fixed coordination with a definite part of the brain.

Clearly, this is not experience limited to any privileged sense organ (the brain or
the sensation of a body), or indeed, a higher level of consciousness (the all-perceiving
mind with the capacity for language). Although Whitehead (1967, 78) concedes that
human consciousness may well be an exhibit of the “most intense form of the plasticity
of nature,” there is no dichotomy between the human and what is experienced, and
ultimately, in this nonbifurcated sense-making assemblage, nature is closed to mind.

Space is Interaction

As we will see below, Whitehead fundamentally changes the concept of space by introducing a process philosophy in which it is the passage of events that is experienced.
To be sure, early on in his so-called pre-speculative epistemological phase Whitehead
sought to develop a relational theory intended to overturn the ancient Greek notion of
absolute space (Lowe 1951, 53-54). This nascent trajectory of the adventure begins with
a mathematician’s interest in overturning orthodox geometry. The problem for
Whitehead is the geometric point! His relational theory of space thus notes how time is
missing or constrained to points in the Euclidean geometric grid. He argues that things
do not occur in points in space; points are not ultimate entities, but abstractions of
complex durations. We therefore need to forget a concept of space defined as the place
where we find bodies at certain fixed points in time, acting on each other. Indeed, interaction is not a property of space: it is not that bodies interact because they are in space. Space is, in itself, a certain kind of process of interaction. Interaction in space is not, as such, defined by one point affecting another, for example, the hand meeting the mouse,
but is a coming together of a coherent population of interacting bodies into a society of
events. It is this process of coming together, what Whitehead would go on to call
concrescence, which requires attention and needs explaining as best we can (Lowe 1951,
104).

In critical-HCI we might start by redefining interaction as an immanent relation in which it is not points in time or space that are experienced, but durations. This again
fundamentally changes the terms of third paradigm HCI research. Where the action is
does not bring us to a location determined by the perceiving mind or indeed where a
body interacts with a computer, but space itself is interaction. Here we can see how the first paradigm may well have been onto something that the second and third have gone
on to ignore. Instead of concentrating on perceptive locations of interaction in space –
i.e. the points in space where hands (and minds) meet the mouse – ergonomic experts
engaged in capturing (and breaking down) computer tasks into discrete activities in
time. Albeit an oversimplification of a passage of time lacking in the thickness required
by Whitehead’s theory of events (Stengers 2014, 52), the first paradigm ergonomic
study of interaction is not limited to a notion of perception fixed to a geometric grid.

Like third paradigm HCI, the Whiteheadian adventure endeavours to escape from
the same Cartesian structures that underpin the second cognitive paradigm. To do this
Whitehead borrows from William James’s concept of pure experience to make a contra-
Cartesian move (Stengers 2014, 70). But we must first clearly distinguish here between
the phenomenological contra-Cartesian position Dourish (2004, 127; 191) takes in
Where the Action Is and Whitehead’s event analysis. On one hand, Dourish (2004, vii)
is critical of the cognitive paradigm’s convention of grasping interaction through a
mind-computer metaphor that seems to have lost its relation to a body. As we have seen,
embodied interaction is not just information in the mind; it is also experienced in the
hand. On the other hand though, Whitehead does not regard mind or body as the
situation where interaction occurs, but rather draws attention to how both are composed
in a passage of events. The “I” of the mind (and the body to which it seems to belong) does not determine who we are, since in the duration of events both body and mind are swept up in the present before slipping into the past. Unlike Cartesian dualism, then, the mind is not the command post of experience we find in the phenomenological matrix. To be sure, the mind always comes later! The
experience does not therefore belong to the mind. The mind’s perceptual judgements,
as well as its apparent capacity for memory and attention, can only testify to the passage
of events from its percipient foothold in the duration of events (Stengers 2014, 75).

From an events perspective, then, we can begin to look at perception in a very different light from the phenomenological subject and her interaction with concrete
objects in abstract points of time and space. Perception needs to be approached not by way of what is ready-to-hand or present-at-hand, but by way of what is in passage: in what Whitehead calls a percipient event (Whitehead 2004, 107-08). So unlike the phenomenal mind, which in mental space puts concrete objects to death because they are merely ready-to-hand, or miraculously brings them back to life because they are here right now and present-at-hand, it is the event itself that becomes the concrete fact of
experience. There would be no objects to perceive, no mindfulness of objects, without
the passing of these concrete events. The object perceived is not therefore what is
concrete or what brings about the abstractions of consciousness. Whiteheadian objects
are not concrete substances from which abstract properties arise; on the contrary,
objects are abstractions (Stengers 2014, 90-91). In an events analysis, it is not enough
to say here is the mouse, since it will be perceived in a complex array of abstract objects: how it is sensed through a clicking noise even if it is not seen, its haptic physicality and perceived shape, even its appearance under a microscope as a mass of molecules, and so on. Abstract objects are not experienced merely in the now either.
They provide a uniqueness and continuity that presents the foothold the mind needs in
the events that pass it by; there is the mouse and there it is again! It is not, as such, an
object in a given space. It is a mouse-event or pattern of interaction that produces the
subjective reality of the mouse. Ontologically, the mouse is not therefore hidden from
consciousness, but it is declared in the percipient encounter with events (Stengers 2014,
46). To put this another way, it is not the abstract properties of the concrete object that declare the mouse; rather, the mouse is an abstract object perceived in the unified concrescence of the events that declare it. The subject who perceives the mouse is not
the author of the event, or indeed, the author of the many variations in mouse-events.
But we must not simply replace subject/object with object/event relations. We need to
think of interaction as a society or a nexus of events in passage that provide ingression
to objects so that the object is expressed in the event and the event expressed in the
object (Whitehead 2004, 144-52). As Stengers (2014, 52) puts it, every duration of an
event “contains other durations and is contained in other durations.” This is the
relational temporal thickness of Whitehead’s event that cannot be grasped in individual
points in time or space. It follows that making the subject the author of this kind of mouse-event reintroduces bifurcation. The human mind (however
exceptional its plasticity in nature) cannot experience the whole event. The subject does
not decide on events (whether the mouse is here or not here), as such. The events decide
the subject. The subject’s point of view (this percipient window on experience) belongs
to an “impersonal web” of events (Stengers 2014, 65). To put it another way, events are
not a privileged conscious point of view the user adopts. Users may well occupy the
here, but it is their relation to the now that sweeps them up in a complex flow of events
in which they might confuse the observational present for something that exceeds the
mere foothold the mind has in all of this complexity.
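To fix ideas, a short speculative sketch may be useful here; it renders the mouse-event in Python as a pattern recurring across overlapping durations rather than as an object at a point in space. The event qualities, the overlap rule and the function names are invented for illustration and carry no authority from Whitehead’s own formalism.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Event:
        start: float          # durations, not instants: each event has thickness
        end: float
        qualities: frozenset  # e.g. {"click-sound", "haptic-shape"}

    def overlaps(a: Event, b: Event) -> bool:
        """Durations contain and are contained in other durations;
        reduced here to simple temporal overlap."""
        return a.start < b.end and b.start < a.end

    def ingress(events, signature):
        """The 'object' is the recurring signature of qualities that
        ingresses across events: there is the mouse, and there it is again."""
        return [e for e in events if signature <= e.qualities]

    stream = [
        Event(0.0, 1.0, frozenset({"click-sound", "haptic-shape"})),
        Event(0.8, 2.0, frozenset({"haptic-shape", "seen-on-desk"})),
        Event(3.0, 4.0, frozenset({"keyboard-tap"})),
    ]

    mouse_events = ingress(stream, frozenset({"haptic-shape"}))
    print(len(mouse_events))                           # 2: a pattern, not a point
    print(overlaps(mouse_events[0], mouse_events[1]))  # True: durations interlock

On this toy model the mouse is nothing but the signature that ingresses again and again; no single event, and no single point of perception, authors it.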

To counter the phenomenal mind, which finds meaning in the symmetry of the here and now, Whitehead introduces us to the asymmetry of the here and now. Yes, the percipient event locates us in the here, but this here does not move in tandem with the now. The durational now scoops up the here, producing infinite variation. It is indeed, as Stengers (2014, 67) points out, the and in the here and now that really matters in
terms of meaning making. This is what relates the asymmetrical sense of an
observational present (the here) to the now in durational passage. This is Whitehead’s
cogredience (Whitehead 2004, 108-09), which would later be developed more fully in
process philosophy as the vector-like concept of prehension.

140
Prehending HCI

The need for prehension begins with a problem regarding how humans confuse what is perceived here with real things that are supposed to exist at a distance, there. Prehension, according to Lowe (1951, 97), therefore provides the “thread” of process and reality. It is the vector that makes events into concrescent unities, and thus analyzable. Prehension takes us beyond the here and now of phenomenality by looking instead to how the there becomes the here. Unlike the idealist’s answer
to this problem, wherein the abstraction of space by the mind results in a solipsistic subjective perception, we find a production of reality in which what is felt is always becoming (Whitehead 1985, 236-43): the past (the objective datum, what is prehended) is alive and well in the present derivation (the subjective form, how it is prehended). Prehensions thus
provide a way of grasping how what is there becomes something here. In other words,
a prehension is the relation established between events in which the past has a stake in
the composition of what is new. Again, it is not simply the here and now (immediate
present) that matters to Whitehead, but how prehension sweeps past events up into a
unity (or nexus) in which something there becomes something here (causal efficacy).
Following Whitehead’s nonbifurcated event analysis then, the mouse cannot be said to
be in or out of mind because the past (what is prehended as the mouse) is always in the
now (this is how the mouse becomes a subjective form). In short, the mouse is
experienced as a flow of events (a process) whereby the past event flows into the present
event.
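As a loose analogy only, prehension can be sketched in code as each new occasion composing its value out of the occasions it prehends. The class name, the weighting scheme and the rendering of “satisfaction” as a number are invented simplifications, not Whitehead’s categories.

    class Occasion:
        def __init__(self, datum, past=(), weight=0.5):
            self.datum = datum           # what this occasion contributes
            self.prehended = list(past)  # objective data: past occasions
            self.weight = weight         # subjective form: how the past is felt

        def satisfaction(self):
            """Mix the occasion's own datum with the felt past, so that
            what is there (past events) has a stake in what is here."""
            if not self.prehended:
                return self.datum
            inherited = sum(p.satisfaction() for p in self.prehended)
            inherited /= len(self.prehended)
            return (1 - self.weight) * self.datum + self.weight * inherited

    # The present mouse-event prehends earlier mouse-events rather than
    # re-perceiving a free-standing object.
    e1 = Occasion(datum=1.0)
    e2 = Occasion(datum=0.0, past=[e1], weight=0.6)
    e3 = Occasion(datum=0.5, past=[e1, e2], weight=0.6)
    print(round(e3.satisfaction(), 2))  # 0.68: the past alive in the present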

The use of prehension in critical-HCI might also help researchers to go beyond
Dourish’s criticism of the second cognitive paradigm by not only radically inverting
the notion that action in the world necessarily comes after concrete experiences of
objects (the mouse) followed by an abstraction (the mouse in hand or mind), but also
questioning the very concept of social context. Indeed, as Blackwell (2015) argues,
much of the study of situated and embodied interaction misses the new technical
landscape in which social context is engendered by machine learning systems. Machine learning systems operate on “‘grounded’ data, and their ‘cognition’ is based wholly on information collected from the real world” (Blackwell 2015). These systems directly interact with social context insofar as they collect data from social media, cookies and relational databases, making the user experience increasingly inferred and akin to Toffler’s (1970) forecast of a pre-programmed experience industry. For Blackwell, the
critical issue at stake now is that by making humans into “data sources” in the service
of machine learning systems, it is no longer simply a problem of grasping human
cognition as situated in the machine, but instead we need to recognize the inhumane
character of a ‘cognition’ emerging from a new technological context. Prehension can, as such, help us to reconceive user experience beyond the subjective relations of a Euclidean objective world of the here and now, by looking to a spatiotemporal concept of interaction defined by what is experienced over there (by a machine) becoming experienced here (by the human). These are concerns in critical-HCI that
considerably overlap with similar concerns in media theory.
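A toy illustration of Blackwell’s inferred world may clarify this overlap; in it, a machine-side process never perceives the user but infers a context from logged interaction data. The session features, the labels and the nearest-centroid rule below are hypothetical stand-ins for what production machine learning pipelines do at far greater scale.

    from math import dist

    # Assumed features per session: (clicks per minute, average dwell seconds).
    labelled_sessions = {
        "browsing": [(5.0, 2.0), (6.0, 3.0)],
        "focused-work": [(1.0, 30.0), (2.0, 25.0)],
    }

    def centroid(points):
        xs, ys = zip(*points)
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    centroids = {label: centroid(pts) for label, pts in labelled_sessions.items()}

    def infer_context(session):
        """The 'experience' attributed to the user is whichever centroid the
        logged data sits nearest to: interaction over there (in the machine)
        becomes the context served back here (to the human)."""
        return min(centroids, key=lambda label: dist(session, centroids[label]))

    print(infer_context((1.5, 28.0)))  # focused-work: a pre-programmed inference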

Return to Experience Capitalism

[There has been] a shift in the economy of experience itself, a shift from a media system that addresses humans first and foremost to a system that registers the environmentality of the world itself, prior to, and without any necessary relation with, human affairs (Hansen 2015, 8).

Mark Hansen’s use of Whitehead helps us to conclude this discussion with a seemingly different orientation of the problem concerning human experience in digital culture from that forwarded by phenomenological HCI. It is, ultimately, a post-
phenomenological media theory which unashamedly backslides into the
phenomenological human-centred territory it tries to escape, but, nonetheless, Hansen
draws attention to the difficulties of developing a robust nonbifurcated analysis of
experience capitalism. His argument is a complex one, the detail and fault lines of
which cannot be fully unpacked here, but I want to focus for a moment on one
conclusion Hansen makes concerning the human experience of twenty-first-century
media; that is to say, that the current wave of digital media technology refuses human
minds access to the kind of worldly experience the phenomenological matrix introduces.
This is because what Hansen (2015, 81) calls ‘higher-order perceptual experiences’ are
no longer implicated, he claims, in the making of the operational levels of digital culture,
including data gathering and mining.

At first glance, this may seem like a plausible explanation for what happens when
capitalism, weaponized by the latest operations of digital technology, captures and
commodifies experience. Nonetheless, what I argue here is that the notion of the loss
of human experience in digital culture, suggested by Hansen, glosses over Whitehead’s
more profound and constraining concept of nonbifurcated actual experience: something Hansen (2015) reduces to this “worldly production of experience” in which
the ontology of duration appears to be full of gaps and ruptures between human
consciousness and technologically produced experience. As Greg Seigworth (2015)
similarly argues in a recent talk:

Hansen opens an experiential gap or an interval between the body’s perceptual apparatuses and the making of worldly sensibility (the latter can be done and done more comprehensively in Hansen’s view by, say,
technical machines of various sorts). But such a conception creates a
rather troubling kind of ahistorical suspension or hiatus in any sense of
what might be longer stretches of temporal continuity – durations
persisting alongside any array of ruptures / gaps / delays – within the
ontological itself.

To be sure, the experiential gap that Hansen offers up seems to break all the rules
of Whiteheadian nonbifurcation. The point is that human experience is not increased or
lessened; it is not a case of less or more consciousness in twenty-first-century media, nor, for that matter, is experience something that can simply fall through an experiential gap. On the contrary, experience is generative in the circuitries of the capitalist
economy, which records and patterns interactions as they occur in spatiotemporal
occasions. Indeed, the experience of the “there, and there it is again” mouse-event is transformed in pervasive digital media, but only with respect to the novel digital objects
that now ingress with the thickness of durational passage.

Significantly, ubiquitous, always-on, big-time data gathering operations do capture more experience than a mere mouse click, but we have our media history confused if we think that there was ever a time when the human mind had a privileged status in media space. Hansen’s account, like the phenomenological matrix, is
reminiscent of the alien in Nicolas Roeg’s 1976 film The Man Who Fell to Earth,
Thomas Newton, who can experience all of the events of the analogue media world into
which he fell. Sitting in front of multiple TV screens, Newton seems to inhabit the symmetry of the here and now. “Get out of my mind, all of you… Leave my mind alone,
all of you. Stay where you belong!” he shouts at the screens. But humans are not aliens
of this kind. We cannot detach our experiences of media objects (sensed or otherwise)
from the entangled thickness of duration. We do not operate from such a command post!
In other words, while it does seem to be the case that capitalism is, via large-scale data gathering and machine learning, implicated in the processing of experience, it is important to stress that so-called higher-order human experiences are not bifurcated
from Whitehead’s actual experience, and therefore, rather than being cut out of the loop
of actual experience, human experiences are instead captured in a complex maelstrom
of eventful entanglements that confound notions of predicated subjective conscious
experience and objective reality.

To conclude, a critical-HCI theory of experience capitalism should not be concerned with trying to wrestle back human consciousness from operational media;
that is to say, putting the command post mind back into the loop between conscious
interaction and the technological unconscious operations of data gathering. On the
contrary, following a nonbifurcated line, we might need to admit to the impossibility of such a task and focus instead on the far more dystopic grip of experience capitalism in
which the mere foothold of the mind in the durational thickness of events is captured in
a twenty-first-century media circuitry. We may choose to ponder our asymmetrical
experiences in this circuitry, but the most pressing critical issue, it would seem, is the
extent to which capitalism experiences us! Although it seemingly overlaps with critical concerns from some quarters of HCI and media theory, this circuitry presents a very different politics of experience from one founded on a perceived loss of human judgement in the face of a new dehumanizing technological context. The power of
experience capitalism, weaponized by data gathering and machine learning, is not to be
found in the human’s experiential exclusion from an inhumane world of inferred
interaction. On the contrary, although there is more work to be carried out to fully grasp
the folded nature of human-computer interaction and its relation to experience
capitalism, this is a power that seems to tap directly into the often improvised
experiences and events in which subjectivity is produced. The power of experience
capitalism is therefore found in a capacity to prehend past events so that they become
part of the composition of what is experienced as new.


References

Blackwell, A (2015) “Interacting with an inferred world: the challenge of machine
learning for humane computer interaction.” Proceedings of the Fifth Decennial
Aarhus Conference on Critical Alternatives, 169-180.
http://dl.acm.org/citation.cfm?id=2882878

Bødker, S (2006) “When second wave HCI meets third wave challenges.” NordiCHI
'06: Proceedings of the 4th Nordic Conference on Human-Computer Interaction:
Changing Roles, 1-8. http://dl.acm.org/citation.cfm?id=1182476

Bødker, S (2015) “Third-Wave HCI, 10 Years Later – Participation and Sharing.”
Interactions 22 (5), September-October 2015, 24-31.
http://interactions.acm.org/archive/view/september-october-2015/third-wave-hci-10-years-later-participation-and-sharing

Boehner, K, DePaula, R, Dourish, P & Sengers, P (2007) “How emotion is made and
measured.” International Journal of Human-Computer Studies 65, 275-291.

Dewey, J (1951) “The philosophy of Whitehead.” In: Schilpp, PA (ed.) The philosophy
of Alfred North Whitehead. Tudor Publishing Company, New York.

Dourish, P (1999) “Embodied interaction: exploring the foundations of a new approach
to HCI.” ResearchGate.
https://www.researchgate.net/publication/228934732_Embodied_interaction_Exploring_the_foundations_of_a_new_approach_to_HCI

Dourish, P (2004) Where the action is. MIT Press, Cambridge. (Originally published
by MIT Press in 2001.)

Dreyfus, H (1996) “The current relevance of Merleau-Ponty’s phenomenology of
embodiment.” In: Haber and Weiss (eds.) Perspectives on embodiment. Routledge,
London.

Greenfield, A (2006) Everyware: the dawning of the age of ubiquitous computing.
New Riders, Berkeley.

Hansen, MBN (2015) Feed-forward: on the future of twenty-first-century media.
University of Chicago Press, Chicago.

Harrison, S, Tatar, D & Sengers, P (2007) “The three paradigms of HCI.” Paper
presented at the Conference on Human Factors in Computing Systems.
https://people.cs.vt.edu/~srh/Downloads/TheThreeParadigmsofHCI.pdf

Hayles, NK (2009) “RFID: human agency and meaning in information-intensive
environments.” Theory, Culture and Society 26 (2-3): 47-72.

Holbrook, MB & Hirschman, EC (1982) “The experiential aspects of consumption:
consumer fantasies, feelings, and fun.” Journal of Consumer Research 9 (2): 132-140.

Kuniavsky, M (2010) Smart things: ubiquitous computing user experience design.
Elsevier, Amsterdam.

Langlois, G & Elmer, G (2013) “The research politics of social media platforms.”
Culture Machine 14. Open Humanities Press.
http://www.culturemachine.net/index.php/cm/article/viewArticle/505

Lowe, V (1951) “The development of Whitehead’s philosophy.” In: Schilpp, PA (ed.)
The philosophy of Alfred North Whitehead. Tudor Publishing Company, New York,
15-124.

Merleau-Ponty, M (1962) Phenomenology of perception. Trans. C. Smith. Routledge,
London.

Mitew, T (2014) “Do objects dream of an internet of things?” The Fibreculture Journal
23. http://twentythree.fibreculturejournal.org/fcj-168-do-objects-dream-of-an-internet-of-things/

Norman, DA (2004) Emotional design: why we love (or hate) everyday things. Basic
Books, New York.

Picard, R (1997) Affective computing. MIT Press, Cambridge.

Pine, J & Gilmore, JH (2011) The experience economy. Harvard Business School
Press, Boston. (Originally published by Harvard Business School Press in 1999.)

Pine, J & Gilmore, JH (2013) “The experience economy: past, present and future.” In:
Sundbo, J & Sørensen, F (eds) Handbook on the experience economy. Edward Elgar
Publishing, Northampton, 21-44.

Robertson, T (1997) “Cooperative work and lived cognition: a taxonomy of embodied
actions.” Proceedings of the European Conference on Computer-Supported
Cooperative Work ECSCW’97 (Lancaster, UK). Kluwer, Dordrecht.

Sampson, TD (2017) The assemblage brain: sense making in neuroculture. University
of Minnesota Press, Minneapolis.

Seigworth, GJ (2015) “Structures of digital feeling.” Keynote address at the University
of Buffalo.
http://www.academia.edu/26759922/Structures_of_Digital_Feeling_-_Keynote_Address_-_University_of_Buffalo_March_2015

Stengers, I (2014) Thinking with Whitehead: a free and wild creation of concepts.
Trans. Michael Chase. Harvard University Press, Cambridge. (Originally published
in French in 2002.)

Toffler, A (1970) Future shock. Pan Books, London.

Urban, WM (1951) “Whitehead’s philosophy of language and its relation to his
metaphysics.” In: Schilpp, PA (ed.) The philosophy of Alfred North Whitehead. Tudor
Publishing Company, New York, 301-328.

Whitehead, AN (1967) Adventures of ideas. Free Press, New York. (Originally
published in 1933 by Cambridge University Press, Cambridge.)

Whitehead, AN (1985) Process and reality (corrected edition). Free Press, New York.
(Originally published in 1929 by Macmillan, New York.)

Whitehead, AN (2004) The concept of nature. Dover, New York. (Originally published
in 1920 by Cambridge University Press.)

Winograd, T & Flores, F (1986) Understanding computers and cognition: a new
foundation for design. Ablex, Norwood.

Wright, P & McCarthy, J (2004) Technology as experience. MIT Press, Cambridge.
幽灵般的媒体

张正平(Briankle G. Chang)

媒介是至高无上的,它统御至上:媒介无处不在,而且正如人们所熟知,我
们的生命依赖媒介而生。媒介又是崇高的:媒介不仅支撑着我们的日常生活,而
且正如生活会溢出生命那般超乎我们的理解。媒介的重要性远胜一切,因为它无
所不在且无所不能。可以说,在最普遍的意义上,任何两者之间的存在均为媒介。
媒介是“第三者”,是“中间物”(Medium)——你我也为媒介之一种。无论何
时何地,只要稍加留心,媒介始终“与你同行”。的确,任何两者之间总会有一
个第三者存在。该第三者会进一步成为其它“两者”中的一者,由另一“第三者”
来发挥伴随或媒介作用。由此看来,媒介总是另一媒介的媒介,或旋即消失,或
恍然出现,居于此间(in medias res)。同样,我们只能在媒介之中、以媒介
之法对媒介展开思考。

媒介,媒也(Media mediates)。然而,这种同义反复式的表述依然符合有
关媒介/中介(mediation)的事实——这个简单的事实是,在作为媒介的过程之
中,在行其所能之时,媒介消失了。媒介,无论以何种形式存在,都会成功地全
身而退,化入所谓媒介效果(media effects)之中。有效的媒介都是无形的,
唯有失效的媒介才会显现。这一点已经被海德格尔等人多有论及,同时,我们也
可在智能手机或笔记本电脑失灵(或不受我们控制)时感知到媒介的这种特性。媒
介存在的前提是它允诺自我消除,隐入幕后,以至消失不见(dis-appear)。媒
介的终结意味着媒介自身的终结(the end of media is the end of media),
媒介作用的终结(the end of mediation),一切传送、传递以及一切通信与交
流的终结(the end of all correspondence and exchange)。这是“邮递员之
死”(the death of the postman),又或许会如孟德斯鸠所说的在普遍意义上
通信的“绚烂终结”(“brilliant end” of the post in general)。

从媒介的视角看来,我们的世界更近乎于莱布尼茨式而非笛卡尔式的。世界
并非由锐角事物构成,而是由在不同感知力层面上或展开或折叠的“花园”
(gardens)和“鱼塘”(fishponds)构成;而且,是在一个乐观主义者的“圆
房子”(an optimist “round house”)内——一个受万物和谐与正义主导的
天堂,其中,原子及其聚合物从不同的角度相互关联。这幅屡被提及的“万物互
联”图,正如当今世界一样基于媒介、中介、万物秩序(order of things)而存
在,从而预示着一个有关连接、中断与再连接、再中断的“完美世界”(best
possible world)。

本文认为,一切“部分-整体”关系之中都存在两种不同类型的“连接”
(connection),正是二者的归并(conflation)决定了我们大部分关于媒介的
常规理解:以内容导向为主,由此在输送、编码/解码以及传播的过程中始终以
人质的形象出现,且永生不灭。在这部分的结尾,我将思考两种或多或少具体些
的例证——以太(aether)和链接器/耦合器(the coupler),从而尝试厘清在
理解媒介的过程中常常被混淆的那两种连接形态。在论文的结尾部分,我将对“网
络”(network)这一理念展开反思。 “网络”持续性地赋予我们作为节点(node)
的功能而后又剥夺之,而且如德勒兹所指出的那样,我们对“节点”的理解是一
种幻觉(hallucinations),对之既无法避免、又浑然不觉。

Briankle G. Chang

Spectral Media

Media is sovereign; it reigns supreme: not only is it everywhere, but our life, as we know it, depends on it. Media is also sublime: not only does it support our daily life,
but it also exceeds our comprehension in the same fashion that life itself always exceeds
the life lived. More than anything, because it could be everything and is everywhere, we would do well to begin by saying that, in its most general sense, media is whatever stands
between any two things. It is a “third,” a someone or something that, whenever and
wherever we look, “walks always beside you”—a medium, so to speak, that we
ourselves are as well. Indeed, between any two, there is always a third, which is in
turn one of the two accompanied or mediated by its other(s). Seen in this light, a medium is always a medium of media, presently absent or absently present in medias res. Seen in
this light too, we can begin thinking about media only in the midst of media but also
with media.

Media mediates. This tautology, however, doesn’t fail to betray a fact about
mediation, the simple fact that in mediating, in doing what it does, media disappears.
Media, whatever form it may take, succeeds in withdrawing itself, into its work we call
media effects. When media works, it does not appear to work and it appears only
when it fails to perform, as amply discussed by Heidegger and others, but also reflected
and recognized by us when our smartphones or laptops fail to work (or work according
to its own will), as it were. The premise of media is that it promises to erase itself, to
recede to the background, to dis-appear. The end of media is the end of media; it is the
end of mediation, of all transmission, all delivery, all correspondence and exchange; it
is the death of the postman, the “brilliant end” of the post in general, as Montesquieu
would call it.

From a media point of view, our world is more Leibnizian than Cartesian; it is a
world made not of sharp-angled objects, but of “gardens” and “fishponds” folding and unfolding across levels of perceptibility and according to multiple perspectives from which each atom and aggregate of atoms relate to one another, all the while within an optimist “round house,” a paradise governed by universal harmony and just retribution.
The oft-invoked image of the “internet of things” as emblematic of the present-day world is itself based on a media, or mediated, order of things, which in turn is predicated
on an image of a “best possible world” of connection, interruption, and ceaselessly
interrupted reconnection.

In this paper, I argue that there are two distinct types of “connection” that underlie
any part-whole relation, and it is the conflation of the two that determines much of our
common understanding of media as largely content-driven and, consequently, kept
hostage to the self-perpetuating images of transmission, encoding/decoding, and circulation.
To that end, I will consider two more or less concrete figures, aether and the coupler,
that help illustrate the two modalities of connection often con-fused in our conceptions
of media. I will end by offering some reflections on the idea of network that continually
makes and unmakes us as a node whose perceptions, as Deleuze suggests, are inevitably
and unknowingly hallucinations.

Figure 1. Eric Kogan, Objects: A Love Story, 2017

Figure 2. Douglas Huebler, Mediate, 1976

Figure 3. Robert Morris, Ring with Light, 1965/66

Figure 4. Train Coupler Design

Figure 5. Train Coupler
Participants (与会学者)

Briankle G. Chang
Associate Professor of Department of Communication
University of Massachusetts Amherst

张正平(美国马萨诸塞大学传播系教授)

Mary Ann Doane


Class of 1937 Professor of Film and Media
Department of Film and Media
University of California, Berkeley

多 恩(美国加州大学伯克利分校电影与传媒系荣誉教授)

Fang Weigui
Distinguished Professor of School of Chinese Language and Literature
Beijing Normal University
Research Fellow of the Centre for Literary Theory
Changjiang Scholar (Ministry of Education of China)

方维规(北京师范大学文学院特聘教授,文艺学研究中心研究员,长江学者)

David J. Gunkel
Presidential Teaching Professor of Communication Studies
Department of Communication
Northern Illinois University

冈克尔(美国北伊利诺伊大学传播系教授)

Mark Hansen
James B. Duke Professor of Literature
Department of Art, Art History & Visual Studies
Duke University

汉 森(美国杜克大学艺术、艺术史与视觉研究系荣誉教授)

Jiang Yi
Professor of School of Philosophy and Sociology
Shanxi University
Changjiang Scholar (Ministry of Education of China)

江 怡(山西大学哲学社会学学院教授,长江学者)

Myung-koo Kang
Professor Emeritus of Media and Cultural Studies
Director of Asia Center
Seoul National University

姜明求(韩国首尔大学传播系教授,亚洲研究中心主任)

Sybille Krämer
Professor of Theoretical Philosophy
Institute of Philosophy
Free University of Berlin

克莱默(德国柏林自由大学哲学系教授)

Liu Chao
Professor of State Key Laboratory of Cognitive Neuroscience and Learning
Beijing Normal University
Young Top-notch Talent of Ten Thousand Talent Program

刘 超(北京师范大学认知神经科学与学习国家重点实验室教授,万人计划青
年拔尖人才)

Luo Yuejia
Distinguished Professor of College of Psychology and Sociology
Founding Director at the Research Center of Brain Disorder and Cognitive Science
Shenzhen University

罗跃嘉(深圳大学特聘教授,脑疾病与认知科学研究中心主任)

Tony D. Sampson
Reader in Digital Culture and Communication
School of Arts and Digital Industries
University of East London

桑普森(英国东伦敦大学艺术与数字产业系教授)

Christina Vagt
Associate Professor for European Media Studies
Department of Germanic & Slavic Studies
University of California, Santa Barbara

瓦格特(美国加州大学圣塔芭芭拉分校日耳曼与斯拉夫研究系教授)

Joseph Vogl
Professor of German Literary, Cultural and Media Studies
Institute of German Literature
Humboldt University of Berlin
Permanent Visiting Professor at the Department of German
Princeton University

福格尔(德国柏林洪堡大学德语文学系教授,美国普林斯顿大学德语系常任客
座教授)

Xu Yingjin
Professor of School of Philosophy
Fudan University
Changjiang Young Scholar (Ministry of Education of China)

徐英瑾(复旦大学哲学学院教授,青年长江学者)

Shunya Yoshimi
Professor of Interfaculty Initiative in Information Studies
Vice President, University of Tokyo

吉见俊哉(日本东京大学信息学研究科教授,副校长)

Siegfried Zielinski
Professor for Archaeology & Variantology of the Arts & Media
Berlin University of Arts
Michel-Foucault-Professor for Techno-Aesthetics and Media Archaeology
European Graduate School in Saas Fee

齐林斯基(柏林艺术大学媒体理论教授,瑞士欧洲研究院米歇尔·福柯讲席教
授)

