Taylor Dodson
5/6/2015
Assignment 5 Opinion Piece
The Problem With a Moral Machine:
Programmers, Not Programs
In the world of computer science, ethical advanced artificial intelligence (also known as A.I) is a growing concern among scientists and philosophers alike. Because the government is now spending millions of hard-earned dollars on research into moral A.I (Tucker, 2014, p. 1), we, too, must ask ourselves the questions that plague them: do we need a moral A.I? Is it possible to create one? Although many professionals continue to argue for the relevance of morality in machines, the fact of the matter is that a computer as a moral agent is a poorly conceived idea on ideological, factual, and practical fronts; moral agency is far removed from the actual capabilities of a machine that operates on syntactic and numerical algorithms. For this reason, I shall argue in this essay that the government, computer scientists, and moral theorists would be wise to place their efforts and funding into creating moral programmers instead.
One of the main reasons this debate exists is that experts are trying to argue that morality is somehow programmable. Laszlo Versenyi (1974), a professor of philosophy at Williams College, is one of these individuals; he reasons that morality and virtue are the result of a combination of skills and experience. Skills and experience are, as he puts it, objective in nature, meaning that their outcomes are consistent and predictable, much like a mathematical formula. Hence, he believes that it is entirely possible to re-create virtue via a computer algorithm (p. 250). This, unfortunately, demonstrates a misunderstanding of both the nature of morality and the fundamental aspects of computational thinking.
First of all, skills and experience are most certainly not objective: while two people may have the same experience or learn the same skill, they will acquire, interpret, or use that skill or experience in different ways depending on an unquantifiable number of factors in infinite combinations. For that reason, while such experiences may be objective in terms of when or how they happened, the way they are carried through the rest of a person's life is not. For instance, if two people are raised by the same working mother, one may come to interpret her being gone most days of the week as good work ethic, thus coming to see constantly working mothers as morally outstanding, while the other might grow bitter and see it as neglect, developing a moral objection to working mothers as a result. There are hard facts here (e.g. the mother being absent for most of the children's childhood), but they can be seen in different ways, meaning that the experience is ultimately subjective: a stark contrast to a computer program's objective nature.
Second of all, Versenyi's claim makes moral judgment sound far simpler than it actually is. To understand the sheer magnitude of what morality would demand of our technology, consider the following scenario presented by Adam Keiper (director of the Science and Technology Program at the Ethics and Public Policy Center) and Ari Schulman (a computer science major and former research assistant at The New York Times) (2011):
A friendly robot has been assigned by a busy couple to babysit [. . .] During the day, one
of the children requests to eat a bowl of ice cream. Should the robot allow it? The
immediate answer seems to be yes [. . .] Yet if the robot has been at all educated in human physiology, it will understand the risks posed by consuming foods high in fat and sugar. It
might then judge the answer to be no. Yet the robot may also be aware of the dangers of a
diet too low in fat, particularly for children [. . .] before the robot could even come close
to acting [. . .] it would first have to resolve a series of outstanding questions in medicine,
child development, and child psychology, not to mention parenting and the law [. . . ] (pp.
84-85)
This is a rather small list of examples of the data a moral A.I would have to recall, sort, define, and interpret, and yet the prospect is already incredibly daunting: in the above example, we're asking the A.I to be a doctor, a parent, a nutritionist, and enjoyable company for the child all at once, and that is far more than we ask of even the brightest of human beings. From this, we can come to the conclusion that making a moral decision is extremely complex: so complex that no hard drive, processor, or compiler is big or powerful enough to account for every hormone, neural message, fact, and memory involved in making allegedly moral judgments and decisions.
There are further arguments to be made for the existence of the insurmountable barrier between moral reasoning and the mind of a computer. According to Concordia University's mathematician William P. Byers and the University of Quebec's moral theorist Michael Schleifer (2010), an algorithm is characterized by having "a well-defined temporal order [. . .] The existence of such algorithms presupposes that specific areas of intellectual activity can be made conflict free" (p. 32). Notice that very last phrase: "conflict free." What do they mean by that? "Conflict free" is synonymous with "without conflict," meaning that there is no contention, contradiction, or battle between ideas, physical bodies, or otherwise. Such a state, as Byers and Schleifer rightly assert, is innate to programs (p. 32); computer algorithms cannot function if they are riddled with conflict, contradiction, or gray areas. If there are any, the program either ceases to function, produces the wrong output, or reports an error. A program is a collection of hard facts and instructions, and if those instructions or facts ever come into conflict with one another, things go wrong.
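To see in concrete terms what Byers and Schleifer mean, consider a minimal sketch in Python, using a hypothetical pair of rules drawn from Keiper and Schulman's babysitting scenario (the function and rule names are illustrative only, not a design anyone has proposed). When two hard-coded rules contradict one another, the program cannot deliberate its way out; it can only report the failure or fall back on a priority a human fixed in advance:

def decide_ice_cream(child_wants_it: bool, is_high_sugar: bool) -> str:
    # Hypothetical rule set for the babysitting robot.
    rules = []
    if child_wants_it:
        rules.append("allow")   # rule 1: keep the child happy
    if is_high_sugar:
        rules.append("deny")    # rule 2: avoid unhealthy food
    if len(set(rules)) > 1:
        # The rules conflict. The program can only give up here or defer to
        # a ranking the programmer chose beforehand; it cannot weigh the
        # rules the way a person would.
        raise ValueError("conflicting rules: " + ", ".join(rules))
    return rules[0] if rules else "no rule applies"

Calling decide_ice_cream(True, True) simply raises an error: the conflict is reported, not resolved, and any apparent resolution would really be a ranking some human wrote into the code ahead of time.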
Morality is the exact opposite of this: it is opinions, experiences, and interpretations, all of which are extremely dynamic, layered, and festering with exceptions and dark uncertainties. Its inherent subjectivity has spurred warfare on mental, civil, and international fronts, sparked the most heated political debates, and torn families asunder. It is, without exaggeration, one of the biggest sources of human conflict on earth. Knowing this, we can finally understand exactly what creating a truly moral A.I requires: we ourselves must arrive at one universal, flawless moral code that can be agreed on by every single individual who could ever exist, in all nations and walks of life. It must be made up of facts and solid rules that could never be challenged within a person's own mind or by anyone around them for any reason. We'd have to create a conflict-free source of conflict: an oxymoron of spectacular proportions.
Even then, many still argue that we need a moral A.I in our advancing world. A.I researcher Steven Omohundro, for instance, believes that a moral robot may be very necessary, specifically in the case of military combat, stating that "human lives and property rest on the outcomes of [military] decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved" (Tucker, 2014, p. 1). His concern is not misplaced considering the increased use of autonomous combat agents such as drones; however, while it would be preferable to put machines at risk instead of humans in military operations, there is still absolutely no reason to leave the moral decision making in such matters in the proverbial hands of a robot. To come to this conclusion, all we must ask ourselves is which is more comfortable for us: placing human lives at the discretion of a living, feeling, trained human soldier, or of a cold, non-feeling, non-human, potentially error-prone metal agent? The unassailable divide between man and machine, the one we established above, makes one wonder: if we need a human mind behind the robot, why can't we simply place a human behind its controls rather than give the robot its own, ever-changing take on the battlefield? While this still endangers a human being on a psychological level, that is a decidedly lesser risk than either sending a man out in the flesh or allowing a machine to call the shots.
Like Omohundro, Alan Winfield (2013), a professor of electronic engineering at the University of the West of England, argues that moral autonomous systems are needed in the development of advanced robotics, pointing to major services like commercial airlines to substantiate his view. He asserts that we trust such machines only because they are held to stringent design and safety standards, and thus we must hold artificial intelligence to the same kind of standards in the form of a strict moral code: the very same code we use to protect ourselves from each other. But why, we must ask, should ensuring the virtue or safety of the machine be the responsibility of the machine itself? Anything we might need a computer to do, right down to the manner in which it does it, can be directly set and adjusted by humans, so why not simply have a human make the moral judgment beforehand and then give the robot a thoroughly defined task in accordance with that judgment? It is unlikely that there is any solid reason not to, and thus the development of such an agent is rather pointless; if computer scientists remain actively engaged in what the A.I does, there is little reason to make it capable of ensuring the goodness or safety of its own actions.
None of this, however, is meant to imply that morality has no place in the development of advanced A.I: rather, I am asserting that we're placing the burden of behaving morally in the wrong place. I have suggested above that the morality of a program should be dependent on human beings, and thus, in order to ensure that any future A.I we develop is moral (as the above advocates so desire), we must transform the entryway into the computer science profession. As a computer science student myself, one thing I have noticed about engineering and computational science curricula is that plenty of focus is placed on the theoretical and applied elements of creating advanced computational systems, while the possible broader effects and ethical implications of creating said systems are simply glossed over. Programmers must be made to understand that they cannot mindlessly innovate: they must be taught to tread cautiously and carefully, thoroughly considering the possible ethical implications of each of their inventions, especially any that are capable of some form of thought. Hence, urging young people interested in creating their own advanced programs and intelligences to uphold ethical and moral values (e.g. avoiding creating systems that violate human rights such as the right to privacy) will eliminate the need for a moral A.I since, thanks to the efforts of a moral creator, it would not be capable of immorality in the first place.
The allure is understandable. To have virtually invincible, infallible beings at our sides in our brightest or most difficult moments seems, on the surface, a near and shining dream: there are few who would not appreciate being able to march on an enemy without spilling human blood as Omohundro imagines, having useful, trustworthy service agents as Winfield hopes, or creating new partners in learning and growth as Versenyi believes. I must stress once again, however, that it is an ultimately fruitless endeavor, one that costs far more time and money than it is worth. In order to truly innovate in the best ways possible, we must focus on teaching our new engineers to be kind and mindful in their work: that, in the long run, is far more realistic and far more important.

References
Byers, W. and Schleifer, M. (2010). Mathematics, Morality & Machines. Philosophy Now, 78,
30-33.
Keiper, A. and Schulman, A. (2011). The Problem with 'Friendly' Artificial Intelligence. The New Atlantis: A Journal of Technology and Society, 32, 80-89. Retrieved from http://0web.a.ebscohost.com.lib.utep.edu/ehost/pdfviewer/pdfviewer?sid=dbf58df2-1a93-48b19eb0-ea3c3234f637%40sessionmgr4002&vid=1&hid=4114
Tucker, P. (2014, May). The Military Wants to Teach Robots Right From Wrong [Supplemental material]. The Atlantic. Retrieved from http://www.theatlantic.com/technology/archive/2014/05/the-military-wants-to-teach-robots-right-from-wrong/370855/
Versenyi, L. (1974). Can Robots Be Moral? Ethics, 84, 248-259.
Winfield, A. (2013, October). Ethical robotics: Some technical and ethical challenges [Presentation slides]. Retrieved from https://drive.google.com/file/d/0BwjY2P_eeOeiOThZX3VlRFU1Xzg/view
