Liability for Crimes Involving Artificial Intelligence Systems
Gabriel Hallevy
The idea of liability for crimes involving artificial intelligence systems has not yet been widely researched. Advanced technology confronts society with new challenges, not only technological but legal as well. The idea of criminal liability in the specific context of artificial intelligence systems is one of these challenges, and it should be thoroughly explored. The main question is who should be criminally liable for offenses involving artificial intelligence systems. The answer may include the programmers, the manufacturers, the users, and, perhaps, the artificial intelligence system itself.
In 2010 a few articles of mine were published in the USA and Australia on certain aspects of this issue. These articles explored the specific aspects that seemed important for opening an academic discussion of the issue. Their main idea was that criminal law is not supposed to change technology, but should adapt itself to modern technological insights. They also called for thinking, and rethinking, the idea of imposing criminal liability upon machines and software. Perhaps no criminal liability should be imposed on machines, but if the basic definitions of criminal law are not changed, this odd consequence is inevitable.
Each article drew dozens of comments, and the time came for a first, narrow generalization of this idea. That first generalization was restricted to tangible robots, equipped with artificial intelligence software, which commit homicide offenses as specific offenses and not through derivative criminal liability. Thus, my book When Robots Kill was published in 2013 in the USA by UPNE and Northeastern University Press. Although the book is academic, it attempted to address a wider audience than legal academics.
The book was received as innovative, and reviews were published in various places, such as the Washington Post, the Boston Globe and the Chronicle Review. Dozens of comments arrived as well. Some of these comments called for the final and full academic generalization of this issue: not restricted to tangible robots, not restricted to homicide offenses, and open to derivative criminal liability. What was needed was an academic, professional textbook on this issue, even if it would not address a wide audience. This book is that final and full academic generalization. The general idea expressed in this book relates to all types of advanced artificial intelligence systems, including both fully operational and planned systems, to all modes of criminal liability, including direct and derivative liability, and to all types of offenses.
The reader will find in this book a mature, thorough theory of criminal liability for offenses involving artificial intelligence systems, based on current criminal law in most modern legal systems. Artificial intelligence systems may be involved in these offenses as perpetrators, as accomplices, or as mere instruments for the commission of the offense. One of the points of this book is that, perhaps, no criminal liability should be imposed on technological systems, at least not yet; but if the basic definitions of criminal law are not changed, this odd consequence is inevitable.
Gabriel Hallevy
Contents
1.1 Artificial Intelligence Technology
  1.1.1 The Rise of Artificial Intelligence Technology
  1.1.2 Outlines of Artificial Intelligence Technology
  1.1.3 Daily Usage of Artificial Intelligence Technology
1.2 The Development of the Modern Technological Delinquency
  1.2.1 The Aversion from Wide Usage of Advanced Technology
  1.2.2 Delinquency by Technology
  1.2.3 Modern Analogies of Liability
Artificial intelligence technology is the basis for a growing number of science-fiction works, such as books and movies. Some of them reflect fears of this technology, and some reflect enthusiasm towards it. The major epistemological question has always remained whether machines can think. Some agree that they can "think", but the question is whether they can think (without the quotation marks).
The modern answer to this question may be proposed by artificial intelligence technology.1 This technology is considered modern, but its roots are not necessarily modern. In fact, since the very dawn of humanity, mankind has always sought tools to ease daily life. In the Stone Age, these tools were made of stone. As mankind discovered the advantages of metal, these tools were made of metal. As human knowledge widened, more and more tools were invented to take growing roles in human daily life.
1. For the technical review of this issue and the historical developments see GABRIEL HALLEVY, WHEN ROBOTS KILL – ARTIFICIAL INTELLIGENCE UNDER CRIMINAL LAW 1–37 (2013).
2. See e.g., AAGE GERHARDT DRACHMANN, THE MECHANICAL TECHNOLOGY OF GREEK AND ROMAN ANTIQUITY: A STUDY OF THE LITERARY SOURCES (1963); J. G. LANDELS, ENGINEERING IN THE ANCIENT WORLD (rev. ed., 2000).
3. René Descartes, Discours de la Méthode pour Bien Conduire sa Raison et Chercher la Vérité dans les Sciences (1637) (Eng.: Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences).
4. Terry Winograd, Thinking Machines: Can There Be? Are We?, THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE 167, 168 (Derek Partridge and Yorick Wilks eds., 1990, 2006).
5. THOMAS HOBBES, LEVIATHAN OR THE MATTER, FORME AND POWER OF A COMMON WEALTH ECCLESIASTICALL AND CIVIL III.xxxii.2 (1651): "When a man reasoneth, he does nothing else but conceive a sum total, from addition of parcels; or conceive a remainder. . . These operations are not incident to numbers only, but to all manner of things that can be added together, and taken one out of another. . . the logicians teach the same in consequences of words; adding together two names to make an affirmation, and two affirmations to make a syllogism; and many syllogisms to make a demonstration."
6. GOTTFRIED WILHELM LEIBNIZ, CHARACTERISTICA UNIVERSALIS (1676).
1.1 Artificial Intelligence Technology
However, only when electricity was harnessed for daily use and electronic computers were invented could the idea of "artificial intelligence" be examined de facto. In the 1950s major developments were achieved in machine-to-machine translation, so machines could communicate with each other, and in human-to-machine translation, so humans and machines could communicate through orders given by human operators to computers. This communication was very limited, but it was adequate for optimal use of these computers. Computer scientists incorporated the modern knowledge of natural language into their work, and consequently knowledge representation eventually developed.7
Electronic computers' capability to store large amounts of information and process that information at high speed challenged scientists to build systems that could exhibit human capabilities. Since the 1950s more and more human abilities have been performed by electronic machines. The personal computer was invented, and over time its size and cost were reduced, so that it became available to an increasing part of the population. In addition, the memory capacity, speed, reliability and robustness of personal computers increased dramatically. Thousands of useful software tools were developed and are in daily use. This progress made artificial intelligence available to the public.
Artificial intelligence developed as a separate sphere of research during the vast developments of the 1950s. This sphere of research combined technological study, studies in logic, and eventually cybernetics, i.e., the study of communication in humans and machines. The studies in logic of the 1920s and 1930s made it possible to produce formalized methods of reasoning. These methods formed a new kind of logic, known as the propositional and predicate calculus, and were based on the works of Church, Gödel, Post, Russell, Tarski, Whitehead, Kleene and many others.8 Developments in psychology, neurology, statistics and mathematics during the 1950s were incorporated as well into this growing research sphere of artificial intelligence.9
By the end of the 1950s several developments had occurred which signified for the public the emergence of artificial intelligence. The major one was the development of chess-playing programs, together with the General Problem Solver (GPS), which was designed to solve a wide range of problems, from symbolic integration to word puzzles. Suddenly the public was exposed to artificial intelligence's abilities in daily life. This caused enthusiasm, but also expectations that were unrealistic for those times.10 The first science-fiction novels about robot rebellions against humans and robots taking control over humans became very popular, based on these unrealistic expectations.
7. N. P. PADHY, ARTIFICIAL INTELLIGENCE AND INTELLIGENT SYSTEMS 4 (2005, 2009).
8. DAN W. PATTERSON, INTRODUCTION TO ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS (1990).
9. GEORGE F. LUGER, ARTIFICIAL INTELLIGENCE: STRUCTURES AND STRATEGIES FOR COMPLEX PROBLEM SOLVING (2001).
10. J. R. MCDONALD, G. M. BURT, J. S. ZIELINSKI AND S. D. J. MCARTHUR, INTELLIGENT KNOWLEDGE BASED SYSTEMS IN ELECTRICAL POWER ENGINEERING (1997).
1 Artificial Intelligence Technology and Modern Technological Delinquency
11. STUART J. RUSSELL AND PETER NORVIG, ARTIFICIAL INTELLIGENCE: A MODERN APPROACH (2002).
12. Patterson, supra note 8.
13. STEVEN L. TANIMOTO, ELEMENTS OF ARTIFICIAL INTELLIGENCE: AN INTRODUCTION USING LISP (1987).
and refreshing older heuristics.14 Later on, the next challenge was to enable newer technologies to be incorporated into these expert systems shortly after they became available.
The development of expert systems caused the basic mechanisms of machine learning and problem solving to be studied thoroughly. Consequently, the use of artificial intelligence-based expert systems expanded to many more fields. More traditional human abilities were replaced by artificial intelligence technology. This expansion made industry interested in the development of artificial intelligence technology beyond academic research. The 1980s academic debate over the advantages of artificial intelligence technology and whether it proposes any useful theory15 was abandoned. The increasing use of artificial intelligence technology created actual needs for development.
Industry's involvement in artificial intelligence research increased over time for various reasons. First, the achievements of artificial intelligence were beyond doubt, especially in knowledge engineering. Second, hardware evolved, becoming faster, cheaper, and more convenient, feasible and accessible for users. Third, industry had a growing need to solve problems faster and more thoroughly in the attempt to increase productivity for the benefit of all. Since artificial intelligence technology could provide suitable answers to these needs, industry supported its development. Consequently, artificial intelligence technology has been embraced in most industrial areas, especially in factory automation, the programming industry, office automation and personal computing.16
The combination of the growing abilities of artificial intelligence technology, human curiosity and industrial needs directs the global trend towards expanded usage of artificial intelligence technologies. This trend grows over time. More and more traditional human social functions are replaced by artificial intelligence technologies.17 The global trend has intensified with the entrance into the third millennium. South Korea is an example. The South Korean government uses artificial intelligence robots as soldier guards on the border with North Korea, as teachers in schools and as prison guards.18
The US Air Force wrote in a report outlining the future usage of drone aircraft, titled "Unmanned Aircraft Systems Flight Plan 2009–2047", that
14. See, e.g., Edwina L. Rissland, Artificial Intelligence and Law: Stepping Stones to a Model of Legal Reasoning, 99 YALE L. J. 1957, 1961–1964 (1990); ALAN TYREE, EXPERT SYSTEMS IN LAW 7–11 (1989).
15. ROBERT M. GLORIOSO AND FERNANDO C. COLON OSORIO, ENGINEERING INTELLIGENT SYSTEMS: CONCEPTS AND APPLICATIONS (1980).
16. Padhy, supra note 7, at p. 13.
17. See e.g., Adam Waytz and Michael Norton, How to Make Robots Seem Less Creepy, The Wall Street Journal, June 2, 2014.
18. Nick Carbone, South Korea Rolls Out Robotic Prison Guards, http://newsfeed.time.com/2011/11/27/south-korea-debuts-robotic-prison-guards/; Alex Knapp, South Korean Prison To Feature Robot Guards, http://www.forbes.com/sites/alexknapp/2011/11/27/south-korean-prison-to-feature-robot-guards/.
autonomous drone aircrafts are key “to increasing effects while potentially reducing
cost, forward footing and risk”. Much like a chess master can outperform proficient
chess players, future drones will be able to react faster than human pilots ever could,
the report argues. However, the report is aware of the potential legal problem:
“Increasingly humans will no longer be ‘in the loop’ but rather ‘on the loop’ –
monitoring the execution of certain decisions. . . .Authorizing a machine to make
lethal combat decisions is contingent upon political and military leaders resolving
legal and ethical questions”.19
Artificial intelligence researchers have been trying to develop computers that actually think since the beginning of artificial intelligence research.20 This is the highest peak of artificial intelligence research. However, in order to develop a thinking machine it is necessary to define what exactly thinking is. Defining thinking, in relation to both humans and machines, has proven to be a complicated task for artificial intelligence researchers. The development of machines which have the independent ability of actual thinking would be a momentous event for mankind, which claims a monopoly over high-level thinking on earth; a thinking machine of that kind is analogous to nothing less than the emergence of a new species. Some researchers have even called it Machina Sapiens.
Does human science really want to create this new species? The research towards the creation of a new species matches this trend. The creation of this species may be for the benefit of humans, but this is not necessarily the reason for artificial intelligence research. The reason may be much deeper, touching the deepest and most latent human quest, the very quest whose fulfillment has been denied to humans ever since the first sin.
One of the first moves towards the aim of achieving a thinking machine is to define artificial intelligence. Various definitions have been proposed. Bellman defined it as "the automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning,. . .";21 Haugeland defined it as "the exciting new effort to make computers think. . . machines with minds, in the full and literal sense";22 Charniak and McDermott defined it as "the study of mental faculties through the use of computational models".23
19. W.J. Hennigan, New Drone Has No Pilot Anywhere, So Who's Accountable?, Los Angeles Times, January 26, 2012. See also http://www.latimes.com/business/la-fi-auto-drone-20120126,0,740306.story.
20. EUGENE CHARNIAK AND DREW MCDERMOTT, INTRODUCTION TO ARTIFICIAL INTELLIGENCE (1985).
21. RICHARD E. BELLMAN, AN INTRODUCTION TO ARTIFICIAL INTELLIGENCE: CAN COMPUTERS THINK? (1978).
22. JOHN HAUGELAND, ARTIFICIAL INTELLIGENCE: THE VERY IDEA (1985).
23. Charniak and McDermott, supra note 20.
Artificial intelligence systems which act like humans are characterized by the Turing test of 1950.30 This test was designed to provide a satisfactory operational definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool a human interrogator. Generally, the Turing test proposes that a human listen to a conversation between a machine and a human. The conversation may be conducted in writing. The machine passes the test if the listening human cannot clearly identify which party is the human and which is the machine.31 The Turing test assumes equal cognitive abilities for all humans, but conversations between the machine and a child, a mentally impaired person, a tired person or the machine's designer are likely to be very different.32
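The protocol just described can be sketched in code. The following is a minimal illustrative simulation, not taken from the source: the respondent functions, questions and the guessing interrogator are all hypothetical stand-ins. The point it shows is structural: when the machine's written replies are indistinguishable from the human's, the interrogator can do no better than chance, and the machine "passes".

```python
import random

# Hypothetical respondents: each maps a written question to a written reply.
# Here the machine imitates the human perfectly, the limiting case Turing describes.
def human_reply(question):
    return "I suppose it depends on what you mean."

def machine_reply(question):
    return "I suppose it depends on what you mean."

def turing_test(interrogator, rounds=5):
    """The interrogator questions two hidden parties, A and B, in writing,
    then must say which one ("A" or "B") is the machine.
    Returns True if the machine was identified (i.e., it failed the test)."""
    a, b = random.sample([("human", human_reply), ("machine", machine_reply)], 2)
    transcript = []
    for i in range(rounds):
        q = f"question {i}"                       # placeholder questions
        transcript.append((q, a[1](q), b[1](q)))
    guess = interrogator(transcript)
    machine_is = "A" if a[0] == "machine" else "B"
    return guess == machine_is

# With identical replies, even a careful interrogator is reduced to guessing;
# a random guesser identifies the machine only about half the time.
results = [turing_test(lambda t: random.choice(["A", "B"])) for _ in range(1000)]
print(sum(results) / len(results))   # hovers around one half
```

The sketch also makes the text's criticism visible: the test's outcome depends entirely on which human sits on the other side, since `human_reply` is assumed uniform for all humans.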
Artificial intelligence systems which think like humans are difficult to identify unless human thinking is defined first. However, artificial intelligence technologies which were designed as general problem solvers were found to make decisions which are very similar to human decisions under the same
24. ROBERT J. SCHALKOFF, ARTIFICIAL INTELLIGENCE: AN ENGINEERING APPROACH (1990).
25. RAYMOND KURZWEIL, THE AGE OF INTELLIGENT MACHINES (1990).
26. PATRICK HENRY WINSTON, ARTIFICIAL INTELLIGENCE (3rd ed., 1992).
27. GEORGE F. LUGER AND WILLIAM A. STUBBLEFIELD, ARTIFICIAL INTELLIGENCE: STRUCTURES AND STRATEGIES FOR COMPLEX PROBLEM SOLVING (6th ed., 2008).
28. ELAINE RICH AND KEVIN KNIGHT, ARTIFICIAL INTELLIGENCE (2nd ed., 1991).
29. Padhy, supra note 7, at p. 7.
30. Alan Turing, Computing Machinery and Intelligence, 59 MIND 433, 433–460 (1950).
31. Donald Davidson, Turing's Test, MODELLING THE MIND 1 (1990).
32. Robert M. French, Subcognition and the Limits of the Turing Test, 99 MIND 53, 53–54 (1990).
33. MASOUD YAZDANI AND AJIT NARAYANAN, ARTIFICIAL INTELLIGENCE: HUMAN EFFECTS (1985).
34. STEVEN L. TANIMOTO, ELEMENTS OF ARTIFICIAL INTELLIGENCE: AN INTRODUCTION USING LISP (1987).
35. JOHN R. SEARLE, MINDS, BRAINS AND SCIENCE 28–41 (1984); John R. Searle, Minds, Brains & Programs, 3 BEHAVIORAL & BRAIN SCI. 417 (1980).
(a) communication;
(b) internal knowledge;
(c) external knowledge;
(d) goal-driven conduct; and
(e) creativity.36
36. Roger C. Schank, What Is Artificial Intelligence, Anyway?, THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE 3, 4–6 (Derek Partridge and Yorick Wilks eds., 1990, 2006).
37. DOUGLAS R. HOFSTADTER, GÖDEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID 539–604 (1979, 1999).
38. See, e.g., DONALD A. WATERMAN, A GUIDE TO EXPERT SYSTEMS (1986): "It wasn't until the late 1970s that artificial intelligence scientists began to realize something quite important: The problem-solving power of a program comes from the knowledge it possesses, not just from the formalisms and inference schemes it employs. The conceptual breakthrough was made and can be quite simply stated. To make a program intelligent, provide it with lots of high-quality, specific knowledge about some problem area."; DONALD MICHIE AND RORY JOHNSTON, THE CREATIVE COMPUTER (1984); EDWARD A. FEIGENBAUM AND PAMELA MCCORDUCK, THE FIFTH GENERATION: ARTIFICIAL INTELLIGENCE AND JAPAN'S COMPUTER CHALLENGE TO THE WORLD (1983).
the person intended to drink the water under the intention not to be thirsty anymore. Goal-driven conduct is not unique to humans. When a cat sees some milk behind an obstacle, it plans to bypass the obstacle and get the milk. When executing the plan, the cat engages in goal-driven conduct.
Nevertheless, different creatures may have different goals of different levels of complexity. The more intelligent the entity, the more complex its goals. Some animals may have the goal of calling for help for a master who is in distress, but humans may have goals of reaching outer space, curing deadly diseases, developing genetic engineering and more. Computers have the ability to plan many of these goals, and certain computers are already executing such plans. The reductionist approach to goal-driven conduct dismantles the complicated goal into many simple goals; achieving either the complex goal or its simple components is considered goal-driven conduct. Computers may be programmed with recorded goals and plans to achieve them. However, not all humans have complicated goals at all times under all circumstances. The question here is, what level of complexity is required in order to be considered intelligent?
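The reductionist decomposition of goals described above can be sketched as a tiny recursive planner. This is an illustrative toy, not from the source; the goal names and the decomposition table (modeled loosely on the cat-and-milk example) are invented for the sketch.

```python
# A reductionist planner: a complicated goal is dismantled into simpler
# sub-goals, recursively, until only directly executable actions remain.
# The decomposition table below is a hypothetical illustration.
DECOMPOSITION = {
    "get the milk": ["locate the milk", "bypass the obstacle", "drink"],
    "bypass the obstacle": ["walk around"],
}

def plan(goal):
    """Recursively expand a goal into a flat list of primitive actions.
    A goal with no entry in the table is treated as already primitive."""
    subgoals = DECOMPOSITION.get(goal)
    if subgoals is None:
        return [goal]
    steps = []
    for sub in subgoals:
        steps.extend(plan(sub))
    return steps

print(plan("get the milk"))
# ['locate the milk', 'walk around', 'drink']
```

Executing either the compound goal or any of the primitive steps it expands into is, on the reductionist view, goal-driven conduct; the open question in the text is only how deep the decomposition must go before the conduct counts as intelligent.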
Creativity relates to finding new ways of understanding or activity. An intelligent entity is assumed to have some degree of creativity. When a bug tries to get out of a room through a closed window, it will try over and over again, crashing into that window time after time. Trying exactly the same conduct over and over again is not a symptom of creativity. Sometimes, at some point, the bug will get tired and seek another way; this would be considered more creative, but in most cases it will rest a while and try to get out the same way over and over again. For a bug, it might take 20 attempts; for a dog it might take fewer; and for a human it might take far fewer. Consequently, dogs are considered more intelligent than most bugs.
A computer may be programmed not to repeat the same conduct more than once and to seek other ways to solve the problem. Programs of this kind are essential in general problem-solving software. Nevertheless, there are problems whose solution requires repeating the same behavior a few times. Creativity in these cases would prevent the solution seeker from solving the problem. For example, calling someone on the telephone and hearing a busy signal requires repeating the call to the addressee over and over again until it becomes possible to speak to the addressee.
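The tension between these two strategies can be sketched directly. The following toy solver (an invented illustration; `dial` and the three-calls-to-connect behavior are hypothetical) never repeats a failed action when run in its "creative" mode, and so fails on exactly the busy-telephone kind of problem, where only repetition succeeds.

```python
def solve(actions, attempt, allow_repeats=False, max_tries=20):
    """Try actions until one succeeds.  In the default 'creative' mode a
    failed action is never repeated; with allow_repeats=True the solver
    persists with the same action, as the busy-line problem requires."""
    tried = []
    for _ in range(max_tries):
        pool = actions if allow_repeats else [a for a in actions if a not in tried]
        if not pool:
            return None            # creativity exhausted the distinct options
        action = pool[0]
        tried.append(action)
        if attempt(action):
            return action
    return None

# Hypothetical busy line: only the third identical call gets through.
calls = {"n": 0}
def dial(action):
    calls["n"] += 1
    return calls["n"] >= 3

calls["n"] = 0
print(solve(["call"], dial))                      # None: refuses to redial
calls["n"] = 0
print(solve(["call"], dial, allow_repeats=True))  # call: persistence wins
```

The design point matches the text: neither pure novelty-seeking nor pure repetition is "the" intelligent strategy; which one solves the problem depends on the problem.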
In general, creativity has degrees and levels, and it is not homogenous. Not all humans are considered to think outside the box, and many humans do their daily tasks exactly the same way day by day for years. Most people drive to their workplace by the same route day after day with no change. What makes their creativity different from the above bug's creativity? Many factory workers perform the same acts for hours, day after day, and they are considered intelligent entities. The question here is, what exact level of creativity is required to identify intelligence, especially in the context of artificial intelligence?
Not all humans share all of these five attributes, and they are still considered intelligent. The irritating question was why society should use different standards for humans and machines to measure intelligence. Is intelligence not universally and objectively measured? However, it seemed that any time a new software program established a specific attribute, criticism rejected the achievement by regarding it as not real
39. See, e.g., the criticism of Winograd, supra note 4, at pp. 178–181.
40. Schank, supra note 36, pp. 9–12.
41. See PHILLIP N. JOHNSON-LAIRD, MENTAL MODELS 448–477 (1983); but see also COLIN MCGINN, THE PROBLEM OF CONSCIOUSNESS: ESSAYS TOWARDS A RESOLUTION 202, 209–213 (1991).
42. HOWARD GARDNER, THE MIND'S NEW SCIENCE: A HISTORY OF THE COGNITIVE REVOLUTION (1985); MARVIN MINSKY, THE SOCIETY OF MIND (1986); ALLEN NEWELL AND HERBERT A. SIMON, HUMAN PROBLEM SOLVING (1972); Winograd, supra note 4, at pp. 169–171.
43. MAX WEBER, ECONOMY AND SOCIETY: AN OUTLINE OF INTERPRETIVE SOCIOLOGY (1968); Winograd, supra note 4, at pp. 182–183.
44. Daniel C. Dennett, Evolution, Error, and Intentionality, THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE 190, 190–211 (Derek Partridge and Yorick Wilks eds., 1990, 2006).
45. Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231, 1262 (1992); OWEN J. FLANAGAN, JR., THE SCIENCE OF THE MIND 254 (2nd ed., 1991); John Haugeland, Semantic Engines: An Introduction to Mind Design, MIND DESIGN 1, 32 (John Haugeland ed., 1981).
46. MONTY NEWBORN, DEEP BLUE (2002).
47. STEPHEN BAKER, FINAL JEOPARDY: MAN VS. MACHINE AND THE QUEST TO KNOW EVERYTHING (2011).
48. VOJISLAV KECMAN, LEARNING AND SOFT COMPUTING, SUPPORT VECTOR MACHINES, NEURAL NETWORKS AND FUZZY LOGIC MODELS (2001).
Artificial intelligence technology has been in both private and industrial use for years. As noted above,50 artificial intelligence technology has been embraced in advanced industry since the 1970s. However, whereas in the beginning artificial intelligence technology was used by industry because of its similarity to the human mind, later it has been used rather because of its differences from the human mind. It
49. For example, in November 2009, during the Supercomputing Conference in Portland, Oregon (SC 09), IBM scientists and others announced that they succeeded in creating a new algorithm named "Blue Matter," which possesses the thinking capabilities of a cat. Chris Capps, "Thinking" Supercomputer Now Conscious as a Cat, http://www.unexplainable.net/artman/publish/article_14423.shtml; International Conference for High Performance Computing, Networking, Storage and Analysis, SC09, http://sc09.supercomputing.org/. This algorithm collects information from very many units with parallel and distributed connections. The information is integrated and creates a full image of sensory information, perception, dynamic action and reaction, and cognition. B.G. FITCH ET AL., IBM RESEARCH REPORT, BLUE MATTER: AN APPLICATION FRAMEWORK FOR MOLECULAR SIMULATION ON BLUE GENE (2003). This platform simulates brain capabilities, and eventually, it is supposed to simulate real thought processes. The final application of this algorithm contains not only analog and digital circuits, metal or plastics, but also protein-based biologic surfaces.
50. Above at Sect. 1.1.1.
has been understood by industry that a complete and perfect imitation of the human mind would not be as useful as incomplete ones.
Industry will encourage the development of artificial intelligence technology as long as it does not imitate the human mind completely. Since complete imitation of the human mind is still far off, industry and artificial intelligence research still cooperate. Further cooperation, should complete imitation of the human mind become relevant, is not guaranteed.
Industry has actually turned the disadvantages into advantages. For instance, when people take a simple calculator and type "2+2=" repeatedly, they will continue to get the answer "4" each time. If people do that thousands of times, the answer will be exactly the same every single time, and the process activated by the calculator will be very much the same each time. However, if a human is asked the same question, "2+2=", he may answer the first time, if he does not think he is being mocked, and perhaps a few more times, but not thousands of times. At some point the human will stop answering out of boredom, irritation, nervousness or loss of any desire to keep on answering.
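The calculator's determinism can be shown in a few lines. This is a minimal sketch of the contrast the text draws (the `calculator` function is an invented stand-in for a pocket calculator's addition key): asked ten thousand times, the machine produces one and the same answer by one and the same process, with no boredom and no refusal.

```python
# A sketch of machine determinism: the same input yields the same answer
# by the same process, no matter how many times it is asked.
def calculator(expression):
    """Evaluate a simple 'a+b' expression, as a pocket calculator would."""
    a, b = expression.split("+")
    return int(a) + int(b)

# Ten thousand identical askings collapse into a single distinct answer.
answers = {calculator("2+2") for _ in range(10_000)}
print(answers)   # {4}
```

A human respondent, as the text notes, would break off long before the ten-thousandth asking; the set of machine answers never grows past one.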
From the artificial intelligence researchers' point of view, this phenomenon is considered a huge disadvantage of the machine. It emphasizes the point that the human mind may act arbitrarily, for irrational reasons and so on. However, how would people react if the calculator refused to answer "2+2=", even if it is the thousandth time they typed it? For this kind of task people prefer someone or something that is not bored by their requests or caprices, that is not irritated by their questions, and that will serve them well even if it is the thousandth time they ask for the very same thing.
It appears that most humans have no such ability precisely because they possess a human mind. Machines, which have not succeeded in completely imitating the human mind, have the ability to perform this service for us. The above example may seem theoretical, since no one really types "2+2=" a thousand times on a calculator. Moreover, typing it thousands of times itself requires non-human skills. However, these machine skills are required in a major part of industry.
Let us think of the customer service department of a large company that serves hundreds of thousands of customers. The customer-service representatives are required to be very polite and helpful to each customer, regardless of the content of the customer's inquiry. How would such a representative act after one call? One hundred calls? One thousand calls? How would that affect the quality of service? Here the machine's technological disadvantage of not getting bored, irritated or tired is a pure advantage for industry. An automatic customer-service system serves the thousandth customer exactly the same way it served the first: politely, patiently, efficiently and accurately.
Expert systems for medical diagnosis are preferred because they do not get bored by repeating identical problems of different patients. Police robots are preferred because they are not frightened of dismantling highly dangerous explosives. Factory robots are preferred because they do not get bored by repeating identical activity thousands of times every day. The non-human ability of artificial intelligence technology has been leveraged for industrial needs. The traditional disadvantages of artificial intelligence technology, which were considered as such by artificial intelligence research, have been turned into advantages, and they play a major role in the decision to use artificial intelligence technology in modern industry.51
In fact, these disadvantages-turned-advantages are not considered advantages exclusively for industrial needs. Artificial intelligence research, together with industry, has brought this technology into private consumption. Personal robot assistants based on artificial intelligence technology are achievable. Moreover, artificial intelligence robots are expected to enter family and private life, even in the most intimate situations: "love and sex with robots" has already been suggested.52 Sex robots may pose a much better alternative to prostitution, one much healthier for society. No shame, abuse, mental harm or physical harm would occur through using the artificial intelligence alternative, and the robot will never be disgusted by its clients' sexual requests. This may cause a real social change in this context.
In the same way, household robots are not insulted if asked to repeat the same actions over and over again. Robots do not require vacations or raises, and they do not ask for favors. Teacher robots are not likely to teach subjects other than those they were programmed to teach. Prison-guard robots are not likely to be bribed into disregarding a prisoner's escape. These non-human skills have made artificial intelligence technology very popular for both industrial and private needs.
Characterizing the relevant artificial intelligence technology required for these needs places it below a complete and perfect imitation of the human mind: it possesses some human skills and a partial, imperfect and incomplete imitation of the human mind. These artificial intelligence technologies are not yet thinking machines, but they do have some of the human skills of problem solving, and they imitate some of the abilities of the human mind. These existing skills of artificial intelligence technology, which are already used for industrial and private needs, were the relevant skills for the emergence of the delinquent thinking machine.
Reports by advanced technology researchers indicate and predict that artificial
intelligence technology is heading towards wide usage. This means that human
society is preparing itself to treat artificial intelligence technology as an integral
and routine part of its daily life. Some researchers even point to a coexistence with
artificial intelligence technology as these machines become thinking machines rather than “thinking”
51
TERRY WINOGRAD AND FERNANDO C. FLORES, UNDERSTANDING COMPUTERS AND COGNITION: A NEW
FOUNDATION FOR DESIGN (1986, 1987); Tom Athanasiou, High-Tech Politics: The Case of Artificial
Intelligence, 92 SOCIALIST REVIEW 7, 7–35 (1987); Winograd, supra note 4, at p. 181.
52
DAVID LEVY, LOVE AND SEX WITH ROBOTS: THE EVOLUTION OF HUMAN-ROBOT RELATIONSHIPS (2007).
1.2 The Development of the Modern Technological Delinquency 17
machines. These researchers predict that this coexistence will begin and become
established during the third or fourth decade of this century.53
In fact, under the Fukuoka World Robot Declaration issued in 2004, these
technologies are anticipated to co-exist with humans, assist humans both physically
and psychologically, and contribute to the realization of a safe and peaceful
society.54 It is accepted that there are two major types of these technologies.55
The first is new-generation industrial technologies, which are capable of
manufacturing a wide range of products, performing multiple tasks and working
alongside human employees. The second is new-generation service technologies,
which are capable of performing such tasks as house cleaning, security, nursing,
life-support and entertainment, all in co-existence with humans in homes and
business environments.
In most predictions published for the public, the authors added their evaluation
of the level of danger to humans and society that results from using these
technologies, whether in tangible form (e.g., robots) or in intangible form
(e.g., software that runs on certain computers or on the net).56 These evaluations
provoked the debate on safety in using advanced technologies, regardless of the
actual level of danger evaluated. Most mature people think about the safety of an
object only when it is considered dangerous. This is no less true for advanced
technologies than for any other object. The accelerated technological development
of artificial intelligence technology has caused many fears of it.
For instance, one of the first natural reactions to seeing an advanced robot
providing nursing care as a medical caregiver is fear that it may hurt the assisted
human. Would all humans be ready to place their babies and children under the
nursing care of such advanced non-human technologies? Most humans are not
experts in technological issues, and most humans fear what they do not know. The
consequence is fear of this technology.57 Consequently, when people are
53
Yueh-Hsuan Weng, Chien-Hsun Chen and Chuen-Tsai Sun, Toward the Human-Robot Co-Ex-
istence Society: On Safety Intelligence for Next Generation Robots, 1 INT. J. SOC. ROBOT. 267, 267–
268 (2009).
54
International Robot Fair 2004 Organizing Office, World Robot Declaration (2004) available via
http://www.prnewswire.co.uk/cgi/news/release?id=117957.
55
The word “technologies” refers to all types of applications using artificial intelligence
technologies, including physical-tangible robots and abstract software.
56
Stefan Lovgren, A Robot in Every Home by 2020, South Korea Says, National Geographic News
(2006) available via http://news.nationalgeographic.com/news/2006/09/060906-robots.html.
57
Moreover, the vacuum created by the lack of knowledge and certainty is sometimes fed by
science fiction. In the past, science fiction was rare and consumed by a small group of people.
Today, most people consume science fiction through Hollywood. Most blockbusters of the 1980s,
the 1990s and the twenty-first century are classified as science fiction movies. Analyzing most of
these films reveals mostly fear. If we go over these films, we might be able to understand what the
public is being fed. In 2001: A Space Odyssey (1968), based on Clarke’s novel, Arthur C. Clarke,
2001: A Space Odyssey (1968), the central computer of the spaceship escapes human control,
becomes autonomous, and attempts to assassinate the crew. Safety is restored only when the
computer is shut down, 2001: A Space Odyssey (Metro-Goldwyn-Mayer 1968). In the series of The Terminator the
thinking of advanced technology, besides thinking of its utility and its unquestionable
advantages, they think of how to be protected from it. People may accept
the idea of wide usage of advanced technology only if they feel safe from that
technology.58
The derivative question is what mechanisms of protection may be used by
humanity to ensure safe co-existence with artificial intelligence technology.59
Science fiction literature was the first to indicate the dangerousness of this advanced
technology, and consequently it was also the first to suggest protection from it. The
first circle of protection suggested was ethics focused on safety. The ethical
issues were addressed to the designers and programmers of these entities, who were
to construct built-in software that would prevent any unsafe activity of this
technology.60 One of the pioneering attempts to create an ethics for advanced
technology was Isaac Asimov’s.
Asimov stated three famous “laws” of robotics in his 1950 science fiction novel I,
Robot61:
(1) A robot may not injure a human being or, through inaction, allow a human
being to come to harm;
(2) A robot must obey the orders given it by human beings, except where such
orders would conflict with the First Law;
machines take over humanity, which is almost extinct. A few survivors establish resistance
forces to oppose the machines. In order to survive, all machines must be shut down. Even the
savior, which happens to be a machine, shuts itself down. Terminator 2: Judgment Day (TriStar
Pictures, 1991). In the trilogy of The Matrix the machines dominate the earth and enslave humans
to produce energy for the machines’ benefit. The machines control humans through mind-control,
creating the illusion of a fictional reality, the “matrix”. Only a few could escape from the matrix;
they suffer from inferiority in relation to the machines, and they fight for their freedom from the
machines’ domination. The Matrix (Warner Bros. Pictures 1999); The Matrix Reloaded
(Warner Bros. Pictures 2003); The Matrix Revolutions (Warner Bros. Pictures 2003). In I, Robot
(2004), based on Isaac Asimov’s novel from 1950, an advanced model of robots hurts people, one
robot is suspected of murder, and the hero is a detective who does not trust robots. The overall plot
is an attempt by robots to take over humans, I, Robot (20th Century Fox, 2004). The influence of
Hollywood is vast. If this is what the public is fed, we should expect fear to dominate the public
mind towards artificial intelligence and robotics. The more advanced the robot is, the more
dangerous it is. One of the popular themes in science fiction literature and films is the robots’
rebellion and takeover.
58
See more in Dylan Matthews, How to Punish Robots when they Inevitably turn against Us?, The
Washington Post (March 5, 2013); Leon Neyfakh, Should We Put Robots on Trial?, The Boston
Globe (March 1, 2013); David Wescott, Robots Behind Bars, The Chronicle Review (March
29, 2013).
59
Yueh-Hsuan Weng and Chien-Hsun Chen and Chuen-Tsai Sun, The Legal Crisis of Next
Generation Robots: On Safety Intelligence, PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE
ON ARTIFICIAL INTELLIGENCE AND LAW 205–209 (2007).
60
See, e.g., Roboethics Roadmap Release 1.1, European Robotics Research Network (2006)
available via http://www.roboethics.org/atelier2006/docs/ROBOETHICS%20ROADMAP%
20Rel2.1.1pdf.
61
ISAAC ASIMOV, I, ROBOT 40 (1950).
(3) A robot must protect its own existence, as long as such protection does not
conflict with the First or Second Laws.
When published, these so-called “laws” were considered innovative, and they could
give a disturbed and terrified public some calm. After all, harming humans
is not allowed.
Although Asimov referred specifically to robots, these “laws” may easily be
generalized and applied to artificial intelligence technologies, regardless of their
tangibility. Therefore, these “laws” may be applicable to both tangible robots and
abstract software. However, modern analysis of these “laws” paints a less
promising picture. The first two “laws” represent a human-centered approach to
safety in relation to artificial intelligence technology. They represent the general
approach that, as artificial intelligence technology gradually takes on greater numbers
of intensive and repetitious jobs outside industrial factories, it is increasingly
significant for safety rules to support the concept of human superiority over
advanced technologies and machines.62
The third “law” straddles the borderline between the human-centered and machine-
centered approaches to safety. The advanced technology’s functional purpose is to
satisfy human needs; therefore, in order to perform these functions, such technologies
should protect themselves, functioning as human property. However, these ethical
rules were insufficient, ambiguous and not wide enough, as Asimov himself admitted.63
For instance, suppose an artificial intelligence robot is in military service. The robot’s
task is to protect hostages taken by a specific terrorist. At some point the
terrorist intends to shoot one of the hostages. The robot understands the situation,
and under the relevant circumstances the only way to prevent the innocent person’s
murder is to shoot the terrorist. Focusing on the first “law”, on the one hand
the robot is prohibited from killing or injuring the terrorist, and on the other hand
the robot is prohibited from letting the terrorist kill the hostage. There are no other
possible courses of action. Under this first “law”, what exactly does society expect the
robot to do? What would society expect any human to do in its place? Any solution
whatsoever would breach the first “law”.
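The logic of this contradiction can be restated schematically. The following sketch is a hypothetical illustration only (the function and action names are invented for this illustration and appear nowhere in Asimov or in the legal literature): it models the First Law as a simple rule check over the two available actions and shows that each of them breaches the rule.

```python
# Hypothetical sketch of the First-Law dilemma in the hostage scenario.
# The First Law forbids (a) injuring a human and (b) allowing, through
# inaction, a human to come to harm. Both branches below are therefore
# violations, so the rule yields no permissible action.

def violates_first_law(action: str) -> bool:
    """Return True if the given action breaches Asimov's First Law."""
    if action == "shoot_terrorist":
        return True   # directly injures a human (the terrorist)
    if action == "do_nothing":
        return True   # through inaction, allows the hostage to be harmed
    return False

# Every available action breaches the First Law:
for action in ("shoot_terrorist", "do_nothing"):
    print(action, "violates First Law:", violates_first_law(action))
```

The point of the sketch is that a rule phrased as an absolute prohibition, covering both act and omission, can leave the agent with an empty set of permissible actions.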
If the other two “laws” are examined at this point, the outcome does not
change. If the human military commander orders the robot to shoot the terrorist,
the order contradicts the first “law”. Even if the military commander himself is
in immediate danger, it would be impossible for the robot to act. Even if the
terrorist intends to blow up ten hostages, the robot is not allowed to protect their
lives by injuring the terrorist. Of course, if the military commander itself is a robot
62
JERRY A. FODOR, MODULES, FRAMES, FRIDGEONS, SLEEPING DOGS AND THE MUSIC OF THE SPHERES,
THE ROBOT’S DILEMMA: THE FRAME PROBLEM IN ARTIFICIAL INTELLIGENCE (Zenon W. Pylyshyn
ed., 1987).
63
Isaac Asimov himself wrote in his introduction to The Rest of the Robots that “[t]here was just
enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new
stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the
sixty-one words of the Three Laws.” ISAAC ASIMOV, THE REST OF THE ROBOTS 43 (1964).
or advanced software that makes the required decisions, things become more
complicated, but the outcome does not change.
Under the third “law”, even if the robot itself is in danger, the outcome
does not change. This dilemma of the military robot is not rare in a modern,
technologically advanced society. Any activity of artificial intelligence technology in
such a society is puzzled by such dilemmas. Suppose an artificial intelligence device
in medical service is required to perform an emergency surgical procedure. The
procedure is intrusive and intended to save the patient’s life, and if it is not performed
within the next few minutes, the patient will die. The patient objects to the procedure.
Any action or inaction of the artificial intelligence device contradicts the first
“law”. No order of a superior can solve the dilemma, since an order to act
causes injury to the patient, and an order to refrain from acting causes the patient’s
certain death.
Dilemmas remain, although easier ones, when one of the options involves no
injury to the human body. The usage of artificial intelligence devices as prison
guards was mentioned above;64 in that case, for example, how exactly should such a
device act when a prisoner attempts to escape and the only way to stop the escape
involves causing injury to the prisoner? What should a sex-robot do when ordered
to engage in sadistic sexual contact? If the answer is not to act, the question is why
people use these advanced devices in the first place. If the answer is to act, it
contradicts the first “law”.
Industry defines the purposes of artificial intelligence devices (robots or any
other artificial entities) as serving human society (as a society or as private
individuals) in various situations, and these purposes may involve difficult decisions
that must be made by these entities. The terms “injury” and “harm” may be wider
than specific bodily harm, and these entities may harm people in ways other than
bodily harm. Moreover, in various situations causing one sort of harm should be
preferred in order to prevent a greater harm. In most cases, such a decision involves
complicated judgment which exceeds simple, dogmatic rules of ethics.65
The debate over Asimov’s “laws” raises the debate over the machine’s
capability for moral accountability.66 The moral accountability of artificial
intelligence technology is part of the modern formation of the characteristics of the
thinking machine.
However, artificial intelligence technologies do exist, do participate in daily
human life in both industrial and private environments, and do cause
harm from time to time, regardless of their capability or incapability for moral
accountability. Thus, the sphere of ethics is unsuitable for settling the issue. Ethics
requires moral accountability and complicated inner judgment, and its rules are
unsuitable.
64
Above at Sect. 1.1.1.
65
Susan Leigh Anderson, Asimov’s “Three Laws of Robotics” and Machine Metaethics, 22 ARTIFI-
CIAL INTELLIGENCE SOC. 477–493 (2008).
66
Stefan Lovgren, Robot Codes of Ethics to Prevent Android Abuse, Protect Humans, National
Geographic News (2007) available via http://news.nationalgeographic.com/news/2007/03/
070316-robot-ethics.html.
Artificial intelligence technology may be used in various ways in both industry
and private life. It may be assumed that this technology will become more
advanced in the future as artificial intelligence research develops over
time. The industrial and private uses of this technology widen the range of tasks
artificial intelligence technology can undertake. The more advanced and complicated
the tasks are, the higher the chances of failure in accomplishing them.
Failure is a wide term, which includes various situations in this context. What
these situations have in common is that the task undertaken has not been accomplished
successfully.
However, some failure situations may involve harm and danger to individuals
and society. For instance, suppose the task of a prison-guard artificial intelligence
software has been defined as preventing escape from prison using the minimal power
that may harm the escaping prisoners. For that task, the software may use force
through tangible robots, electric systems, etc. At some point a prisoner attempts to
escape. The attempt is discovered by the prison-guard software, which sends a
tangible robot to handle the situation. It is insignificant whether the tangible robot
is part of the artificial intelligence software and acts according to its orders, or is an
independent entity equipped with artificial intelligence software of its own.
The tangible robot sent to handle the situation prevents the prisoner’s attempt
from succeeding by holding the prisoner firmly. The prisoner is injured and claims
excessive use of force. Analysis of the tangible robot’s actions (or the software’s
actions, if the robot acted on specific orders from it) reveals that although it could
have chosen a more lenient action, it chose the specific harmful action. The reason
is that the robot evaluated the risk as unreasonably graver than it actually was.
Consequently, the legal question in this situation would be who is responsible for
the injury.
Such questions provoke deep thought about the responsibility of the artificial
intelligence entity, and many more arguments. If analyzed through ethics and
morality, most scientists would argue that the failure is the programmer’s or the
designer’s, not the software’s. The software itself is incapable of consolidating
the moral accountability required to be responsible for any harm caused by its
actions. According to this point of view, only humans may consolidate such moral
accountability. The software is nothing but a tool in the hands of its programmer,
regardless of its quality or its cognitive abilities, and regardless of its physical
appearance, whether tangible or not.
This argument is related to the debate over the existence of the thinking machine.
Moral accountability is indeed too complicated, not only for machines, but for
humans as well. Morality, in general, has no common definition acceptable
to all societies and individuals. Deontological morality and teleological morality
are the most widely accepted types of morality, and in very many situations they direct
67
See, e.g., GILBERT RYLE, THE CONCEPT OF MIND 54–60 (1954); RICHARD B. BRANDT, ETHICAL
THEORY 389 (1959); JOHN J.C. SMART AND BERNARD WILLIAMS, UTILITARIANISM – FOR AND AGAINST
18–20 (1973).
68
See, e.g., Justine Miller, Criminal Law – An Agency for Social Control, 43 YALE L. J. 691
(1934); Jack P. Gibbs, A Very Short Step toward a General Theory of Social Control, 1985 AM.
B. FOUND RES. J. 607 (1985); K. W. Lidstone, Social Control and the Criminal Law, 27 BRIT.
J. CRIMINOLOGY 31 (1987); Justice Ellis, Criminal Law as an Instrument of Social Control,
17 VICTORIA U. WELLINGTON L. REV. 319 (1987).
69
Above at Sect. 1.1.3.
70
Above at Sect. 1.1.2.
liability is applicable to them. This kind of argument may be relevant to the
debate on the research for a thinking machine, but absolutely not to the question of
criminal liability. Human skills which are irrelevant to the commission of the
specific offense, if not specifically required by the law, are not considered within
the legal process of determining criminal liability.
Thus, whether or not the specific offender was communicative during the
commission of the offense is irrelevant to the imposition of criminal liability,
since there is no such requirement for criminal liability. Whether or not the specific
offender was communicative outside the commission of the offense is irrelevant
as well, since the legal process is bound to concentrate only on the facts
involved in the commission of the offense. The same is true of many other
attributes considered necessary for the recognition of thinking machines.
It seems that along the way towards the creation of the thinking machine, a very
significant byproduct has been created. This byproduct lacks the skills to be
considered a thinking machine, for it cannot imitate the human mind completely.
However, the skills of this byproduct are adequate for various activities
in industry and in private use. The absence of some skills is even considered an
advantage when focusing on its industrial and private use, although
artificial intelligence research considers that absence a disadvantage (“advantaging
the disadvantages”).71
The relevant type of activity that this byproduct is capable of is the commission
of offenses. This byproduct, perhaps, is not capable of very many types of creative
activities, but it is capable of committing offenses. The basic reason lies mostly in
the definitions and requirements of criminal law for the imposition of both criminal
liability and punishment. These requirements are satisfied through capabilities
far lower than those required to create a thinking machine. In the context
of criminal law, as long as its requirements are met, there is nothing to prevent the
imposition of criminal liability, whether the subject of criminal law is human or not.
In fact, this is the logic behind the criminal liability of corporations. Through
the eyes of criminal law, a new type of subject may be added to the vast group of
subjects of criminal law, in addition to human individuals and corporations, as long
as all relevant requirements of criminal law are met. These subjects of criminal law
may be called “delinquent thinking machines”. The delinquent thinking machine is
not one type of the general “thinking machine”, but an inevitable byproduct of
it. This byproduct is considered less technologically advanced, since belonging
to this category of machines requires none of the very high-level skills that
characterize the real thinking machine.72
This new offender is a stopping point in the race to the top. It is a less advanced
top, since the requirements for criminal liability are much lower than those of the
race to the top of achieving the ideal thinking machine. However, the new offender
is not a race to the bottom, but only a stopping point in the race to the top. From the
71
See above at Sect. 1.1.3.
72
HANS MORAVEC, ROBOT: MERE MACHINE TO TRANSCENDENT MIND (1999).
criminal law point of view, technological research may rest at this point, since
entities to which the criminal law is applicable already exist. No further
technological development is required in order to impose criminal liability upon
artificial intelligence technology.
As noted above, the imposition of criminal liability upon artificial intelligence
technology depends on the match between the requirements of criminal law and
the relevant skills and abilities of the entity’s technology. Some researchers argue
that the current law is inadequate for dealing with artificial intelligence technology
and that it is necessary to develop a new legal sphere.73 However, when focusing
on the criminal law, current criminal law is adequate to deal with artificial
intelligence technology. Moreover, if technology were to advance significantly
towards the creation of a virtual offender, that would make the current criminal law
all the more relevant for dealing with artificial intelligence technology.
The reason is that such technology imitates the human mind, and the human mind is
already subject to current criminal law. The closer this technology approaches
to a complete imitation of the human mind, the more relevant the current criminal law
is to dealing with it. Subjecting artificial intelligence technology to the criminal law
may supply the cure for human fears of the wide applicability of advanced
technology.
The criminal law plays a major role in ensuring the personal confidence of
individuals in society. Each individual knows that all other individuals in
society are bound to obey the law, especially the criminal law. If the law is breached
by any individual, it is enforced by society through its relevant coercive
powers. If any individual is not subject to the criminal law, the personal confidence
of the other individuals is severely harmed. The other individuals know that if that
specific individual breaches the law, nothing happens, and that this individual has
no incentive to obey the law.
This works the same way for all potential offenders, human or not
(corporations, for instance).74 If any of these offenders is not subject to criminal
law, the personal confidence of other individuals is harmed. In a more comprehensive
manner, the personal confidence of the whole society is harmed. Consequently,
society should make every required effort to subject all relevant entities to its
criminal law. Sometimes this requires some conceptual changes in the general
insights of the relevant society.
Thus, when the fear of corporations that were not subject to criminal law
became substantial, criminal law became applicable to them in the seventeenth
century. So, perhaps, it should be with artificial intelligence technology, in order
for society to annihilate its fears of it.75
73
DAVID LEVY, ROBOTS UNLIMITED: LIFE IN A VIRTUAL AGE (2006).
74
For wider aspect see, e.g., Mary Anne Warren, On the Moral and Legal Status of Abortion,
ETHICS IN PRACTICE (Hugh Lafollette ed., 1997); IMMANUEL KANT, OUR DUTIES TO ANIMALS (1780).
75
David Lyons, Open Texture and the Possibility of Legal Interpretation, 18 LAW PHIL. 297, 297–
309 (1999).
In society, creatures other than humans, corporations and artificial intelligence
technology live in co-existence with humans. These creatures are the animals. The
question that may accordingly arise is why the law should not apply the human legal
rules relating to animals to artificial intelligence technology. Although most
animals are not considered by humans to be intelligent, humans and animals have
lived in co-existence for several millennia. Since humanity first domesticated some
animals for its own benefit, the co-existence of humans and animals has been very
intensive. Human society eats animal products, is fed by their flesh, is protected by
them, employs them and uses them for both industrial and private needs. Human
society is even responsible for the creation of some new species as a byproduct of
the process of domestication (the emergence of dogs from wolves, for instance).
Consequently, the law has addressed animals and human co-existence with them since
ancient days.76 Examination of the law concerning animals reveals two major aspects.
The first is the treatment of animals as the property of humans, and the
second is the duty to show mercy towards animals. The first aspect covers
ownership, possession and other property rights of humans in animals. Consequently,
if damage is caused by an animal, the person legally responsible for it is the
human who holds the relevant property rights in the animal.
Generally, such cases are governed by tort law, and in some countries by criminal
law as well. However, whether it is tort law, criminal law or
both, the legal responsibility is the human’s, not the animal’s, at least not directly.
Only if the animal is considered too dangerous to society is it incapacitated. The
incapacitation is executed mostly by killing the animal. This was the case when an
ox gored a human in ancient times,77 and this is the case when a dog bites humans
under certain circumstances in modern times. No legal system considers an animal
to be, itself, a direct subject of the law, especially not of criminal law, regardless of
the animal’s degree of “intelligence”.
The second aspect, the duty to show mercy towards animals, is directed at
humans. Humans are bound to treat animals mercifully. Since humans are regarded
as superior to animals due to human intelligence, animals are regarded as
helpless. Consequently, human law prohibits the abuse of power by humans against
animals for its cruelty. The subjects of these legal provisions are the humans, not the
animals. The animals abused by humans have no standing in court. The legal
“victim” in these cases is society, not the animals; therefore, these legal
provisions are part of the criminal law in most cases.78
The prosecution accuses the offender who abused animals because the abuse harms
society, not because it harms the animals. The prosecution accuses offenders on behalf of the
76
See, e.g., Exodus 22:1,4, 9,10; 23:4,12.
77
Exodus 21:28–32.
78
See more generally in Gabriel Hallevy, Victim’s Complicity in Criminal Law, 2.2 INT’L
J. PUNISHMENT AND SENTENCING 72 (2006).
whole society and not on behalf of the injured animal. This is also the situation with
human victims: the prosecution does not accuse the offender on behalf of the
victims of the offense, but on behalf of society, in order to let the public order
prevail.
Consequently, this kind of indirect protection of animals is little different from
the protection of property. Most criminal codes prohibit damaging another’s property
in order to protect the property rights of the possessor or owner. However, this kind
of protection has nothing to do with property rights. The legal owner of a cow may
be indicted for cruelly abusing the cow, regardless of the property rights in the
cow. These legal provisions, which have existed since ancient times, form a legal model
which is supposed to be applicable to animals. The inevitable question at this point
is why this model should not be relevant for artificial intelligence technology.
There are three types of entities: humans, animals and artificial intelligence. If
people wish to subject artificial intelligence technology to the criminal law
of humans, they should justify the resemblance between humans and artificial
intelligence technology in this context. In fact, they should explain why artificial
intelligence technology resembles humans more than it resembles animals. Otherwise,
the above legal model should be satisfactory and adequate for regulating the
artificial intelligence activity. The interesting question is which artificial intelligence
technology resembles more: humans or animals.
The above legal model has previously been examined for artificial intelligence
technology controlling unmanned aircraft,79 “new generation robots”80 and
other machines.81 For some of the legal issues that arose, the zoological legal model
could supply answers, but the core problems were not solved by this model.82 When
the artificial intelligence entity could work out its activity alone, using its software,
something in the legal responsibility puzzle was still missing. Communicating
complicated ideas is much easier with artificial intelligence technology
than with animals. The same is true of external knowledge and of the quality
of reasonable conclusions in various situations.
An artificial intelligence entity is programmed by humans according to human
formal logic reasoning. This is the core basis of the artificial intelligence entity’s
activity. Its calculations are explicable through human formal logic reasoning. Most
animals, in most situations, lack this type of reasoning. It is not that animals are
unreasonable, but their reasoning is not necessarily based on human formal logic.
79
JONATHAN M.E. GABBAI, COMPLEXITY AND THE AEROSPACE INDUSTRY: UNDERSTANDING EMERGENCE
BY RELATING STRUCTURE TO PERFORMANCE USING MULTI-AGENT SYSTEMS (Ph.D. Thesis, University of
Manchester, 2005).
80
Wyatt S. Newman, Automatic Obstacle Avoidance at High Speeds via Reflex Control,
PROCEEDINGS OF THE 1989 I.E. INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION 1104
(1989).
81
Craig W. Reynolds, Herds and Schools: A Distributed Behavioral Model, 21 COMPUT. GRAPH.
25–34 (1987).
82
See, e.g., Gabriel Hallevy, Unmanned Vehicles – Subordination to Criminal Law under the
Modern Concept of Criminal Liability, 21 J. OF LAW, INFO. & SCI. 311 (2011).
28 1 Artificial Intelligence Technology and Modern Technological Delinquency
Emotionality plays a major role in the activity of most living creatures, both animals
and humans. Emotionality may supply the drive and motivation for some human
activity, as well as for animal activity. This is not the case in relation to artificial
intelligence software.
If measured by emotionality, humans and animals are much closer to each other
than to artificial intelligence software. However, if measured by pure rationality,
artificial intelligence software may be closer to humans than to animals.
Although emotionality affects rationality and rationality affects emotionality,
the law still distinguishes between them regarding its applicability. For the law,
especially the criminal law, rationality is the main factor to be considered.
Emotionality is considered only in relatively rare cases. For instance, for convicting a
person of rape, the feelings behind the act are insignificant; only the elements
of the offense matter.83
Moreover, the legal model for animals educates humans to be merciful towards
animals, as noted above. This consideration is a major one under this model. However, in
relation to artificial intelligence technology it is insignificant. Since artificial
intelligence technology lacks the basic attributes of emotionality, it cannot
sorrow, suffer, be disappointed or be tortured in any emotional manner, and this aspect
of the above legal model has no significance for artificial intelligence technology.
For instance, wounding a cow for no medical reason is considered abuse, and
in some countries it is considered an offense. However, no country considers the
wounding of a robot to be an offense or abuse.84
As a result, since the law prefers rationality over emotionality when evaluating legal
responsibility, and since the rationality of artificial intelligence technology is based on
human formal logic reasoning, from the legal perspective artificial intelligence
technology is much closer to humans than to animals. Consequently, the legal
model that relates to animals is a mismatch for artificial intelligence technology in the
evaluation of legal responsibility.85 For artificial intelligence technology to be
subject to criminal law, the basic concepts of criminal liability should be
introduced. These basic concepts form the general requirements of criminal liability,
which must be fulfilled for the imposition of criminal liability upon any individual,
whether human, corporate or artificial intelligence technology.
83
See, e.g., State v. Stewart, 624 N.W.2d 585 (Minn.2001); Wheatley v. Commonwealth, 26 Ky.L.
Rep. 436, 81 S.W. 687 (1904); State v. Follin, 263 Kan. 28, 947 P.2d 8 (1997).
84
Andrew G. Brooks and Ronald C. Arkin, Behavioral Overlays for Non-Verbal Communication
Expression on a Humanoid Robot, 22 AUTON. ROBOTS 55, 55–74 (2007).
85
LAWRENCE LESSIG, CODE AND OTHER LAWS OF CYBERSPACE (1999).
2 Basic Requirements of Modern Criminal Liability

Contents
2.1 Modern Criminal Liability 29
2.1.1 The Offense's Requirements (In Rem) 31
2.1.2 The Offender's Requirements (In Personam) 34
2.2 Legal Entities 39
2.2.1 Criminal Liability 40
2.2.2 Punishments 43
Modern criminal law also deals with the question of personality, i.e. who is to be
considered an offender under modern criminal law. This is also a question of
applicability, as it relates to the possible applicability of criminal liability in the
personal aspect. When coming across the words "criminal" or "offender", most
people associate them with "evil". Criminals are considered socially evil. However,
the outlines of criminality include not only severe offenses, but also other behaviors
which are not considered "evil" by most people.
For instance, some "white collar crimes" are not considered "evil" by most
people in most societies, but rather as sophistication or as surrender to complicated
bureaucracy. Most traffic offenses are not considered "evil" either. Some
offenses committed under the pressures of duty are not considered "evil". For instance, a
physician performs a complicated emergency surgery to save a patient's life. Under
the relevant circumstances he must hurry. The operation fails despite his
sincere efforts, and the patient dies. Post mortem, it is discovered that one of the
physician's acts may be considered negligent. Therefore, this physician is criminally
liable for negligent homicide. Nevertheless, the physician is still not considered
"evil" by most people in most societies.
In fact, even some of the most severe offenses may not be considered "evil"
under the right circumstances. For example, a physician sees the daily suffering of his
patient, who is dying. The patient asks the physician to unplug her from the
CPR machine so she could end her life honorably and end her suffering. Desperately
and reluctantly, he agrees and unplugs her. This is considered to be murder in most
legal systems.1 It is not likely that the physician would be considered "evil" in most modern
societies. The criminal law (and society) may object to euthanasia by including it
within the homicide offenses, but that does not automatically make the offender
"evil".
A more complicated situation in this context arises when the physician refuses,
purposely, in order to continue watching her suffer. He enjoys her suffering and even
celebrates it; therefore he refuses to unplug her. In this case, the physician is not
an offender, as he did not commit any criminal offense. Nevertheless, most people
in most modern societies would consider such a physician to be evil. The ultimate
conclusion is that evil is not a measure of criminal liability. Sometimes the offender
is evil, but sometimes not. Sometimes evil persons are offenders, but
sometimes not.
Morality, regardless of its specific type (deontological, teleological, etc.), and
criminal liability are extremely different. Sometimes they are congruent, but congruence
is not necessary for the imposition of criminal liability. An offender is any person,
morally evil or not, upon whom criminal liability has been imposed. When the
legal requirements of the specific criminal offense are met by the individual's
behavior, criminal liability is imposed, and that individual is considered an offender.
Sometimes evil is involved, but sometimes not.
As a result, the imposition of criminal liability requires the examination of the
applicability of its basic requirements. Criminal law is considered the most
efficient measure of social control. The society, as an abstract body or
entity, controls its individuals. There are many measures by which society may
control its individuals, such as moral, economic and cultural ones, but one of the most
efficient measures is the law. Since only criminal law includes significant sanctions,
the criminal law is considered the most efficient measure to control individuals.
Controlling the individuals through criminal law is legal social control.
The imposition of criminal liability is the application and implementation of this legal
social control. Modern criminal liability is not dependent on morality of any
kind, nor on evil. It is imposed in a very organized, almost mathematical pattern.
There are two cumulative types of requirements for the imposition of criminal
liability. One type derives from the law (in rem) and the other from the offender (in
personam). Both types must be fulfilled in order to impose criminal liability, but no
additional conditions are required.
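The almost mathematical pattern described above can be sketched as a simple boolean conjunction. This is only an illustrative sketch; the function and parameter names are my own, not terms from the book:

```python
# Illustrative sketch of the cumulative structure described above: criminal
# liability is imposed if and only if both the in rem requirements (posed by
# the offense) and the in personam requirements (posed of the offender) are
# fulfilled. No additional condition, such as moral evil, plays any role.

def criminal_liability(in_rem_fulfilled: bool, in_personam_fulfilled: bool) -> bool:
    # Both cumulative types of requirements must be met; nothing else counts.
    return in_rem_fulfilled and in_personam_fulfilled

print(criminal_liability(True, True))   # both types fulfilled: liability imposed
print(criminal_liability(True, False))  # offender's requirements unmet: no liability
```

The point of the sketch is the conjunction itself: neither requirement type alone suffices, and no third input (morality, evil) appears in the function at all.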
1
See, e.g., Steven J. Wolhandler, Voluntary Active Euthanasia for the Terminally Ill and the
Constitutional Right to Privacy, 69 CORNELL L. REV. 363 (1984); Harold L. Hirsh & Richard
E. Donovan, The Right to Die: Medico-Legal Implications of In Re Quinlan, 30 RUTGERS L. REV.
267 (1977); Susan M. Allan, No Code Orders v. Resuscitation: The Decision to Withhold Life-
Prolonging Treatment from the Terminally Ill, 26 WAYNE L. REV. 139 (1980).
The first type of requirements includes four major requirements, which are demanded
of the criminal law itself. If the specific offense, as defined by law, fails to fulfill
even one of these requirements, no court may impose criminal liability upon
individuals for that specific offense. The four requirements are:
(a) legality;
(b) conduct;
(c) culpability; and
(d) personal liability.
Only when all of these conditions are fulfilled is the specific offense considered
"legal", i.e. criminal liability may be imposed for its commission. The specific
offense must have a legitimate legal source which creates and defines it.3 For
instance, in most countries the ultimate legitimate legal source of offenses is
legislation, whereas case-law is illegitimate. The reason is that legislation is enacted
by public representatives who are elected and represent the relevant society. Since
criminal law is legal social control, it should reflect the will of the society. This will
may be reflected through the society's representatives.
The specific offense must be applicable in time, so retroactive offenses are
illegal.4 For individuals to plan their moves, they must know about prohibitions
in advance, not retroactively. Only in rare cases may retroactive offenses be
considered legal. These cases are when the new offense is for the benefit of the
defendant (e.g., a more lenient sanction, a new defense, etc.) or when the offense
2
For the structure of the principle of legality in criminal law see GABRIEL HALLEVY, A MODERN
TREATISE ON THE PRINCIPLE OF LEGALITY IN CRIMINAL LAW 5–8 (2010).
3
Ibid, at pp. 20–46.
4
Ibid, at pp. 67–78.
embraces cogent international custom, jus cogens (e.g., genocide, crimes against
humanity, war crimes, etc.).5
The specific offense must be applicable in place, so extraterritorial offenses are
illegal.6 The criminal law is based on the authority of the sovereign. The
sovereign's authority is domestic, and therefore the criminal law must be domestic as
well. For instance, the criminal law of France is applicable in France, but not in the
US. Thus, an extraterritorial offense is illegal (e.g., a French offense applicable
in the US). However, in rare cases the sovereign is authorized to protect itself or its
inhabitants abroad through extraterritorial offenses (e.g., foreign terrorists who
attack the US embassy in Kenya may be indicted in the US under US criminal
law, although they have never been to the US)7 or in cases of international
cooperation between states.
The specific offense must be formulated and phrased well. It must be general, for
it addresses an unspecified public (e.g., "John Doe is not allowed to do. . ." is an
illegitimate offense).8 It must be feasible, for legal social control must be realistic (e.g.,
"whoever does not fly, shall be guilty. . ." is an illegitimate offense).9 It must also be
clear and precise, for individuals must know exactly what they are allowed to do
and what is prohibited.10 When all these conditions are met, the requirement of
legality is fulfilled and satisfied.
Conduct is required from the specific offense for it to be considered legal
(nullum crimen sine actu). Modern society has no interest in punishing mere
thoughts (cogitationis poenam nemo patitur). Effective legal social control is not
achieved through the policing of minds, and such policing is not really enforceable. Modern
society prefers freedom of thought. Consequently, for the offense to be
considered legitimate it must include a requirement of conduct. The conduct is the
objective-external expression of the commission of the offense. If the specific
offense lacks that requirement, it is illegitimate. Throughout legal history,
only tyrannical and totalitarian regimes have used offenses which lack conduct.
Offenses whose conduct requirement is satisfied by inaction are considered
status offenses that criminalize the status of the individual, not his conduct. For
example, offenses that punish the relatives of traitors merely because they are
relatives, regardless of their conduct, are considered status offenses.11 So are
5
See, e.g., in Transcript of Proceedings of Nuremberg Trials, 41 AMERICAN JOURNAL OF INTERNA-
TIONAL LAW 1–16 (1947).
6
Hallevy, supra note 2, at pp. 97–118.
7
Ibid, at pp. 118–129.
8
Ibid, at pp. 135–137.
9
Ibid, at pp. 137–138.
10
Ibid, at pp. 138–141.
11
See, e.g., sub-article 58(c)(1) of the Soviet Penal Code of 1926 as amended in 1950. This
sub-article provided that mature relatives of the first degree of convicted traitor are punished with
five years of exile.
offenses that punish individuals of certain ethnic origin.12 Most modern countries
have abolished these offenses, and defendants indicted for such offenses are
acquitted by the courts because status offenses contradict the principle of conduct
in criminal law.13 Only when conduct is required may the offense be considered
legal and legitimate, and may criminal liability be imposed according to it.
Culpability is required from the specific offense for it to be considered legal
(nullum crimen sine culpa). Modern society has no interest in punishing
accidental, thoughtless or random events, but only events that occur due
to the individual's culpability. If someone is dead, there is not necessarily an
offender. For instance, one person passes near another exactly when the latter falls into a
deep hole in the ground. The other person is dead, but the passer-by is not necessarily
culpable. For the imposition of criminal liability, the specific offense must require
some level of culpability.
If not, the imposition of criminal liability would be no more than cruel maltreatment
of individuals by society. Culpability relates to the mental state of the
offender, and it reflects the subjective-internal expression of the commission of
the offense. The required mental state of the offender, which forms the requirement
of culpability, may be reflected both in the particular requirements of the specific
offense and in the general defenses. For instance, the specific offense of manslaughter
requires recklessness as its minimal level of culpability.14 However, if the
offender is insane (general defense of insanity),15 a minor (general defense of minority),16
or acted in self-defense (general defense of self-defense),17 no criminal
liability is imposed. Such an offender is considered to lack adequate culpability.
Only when culpability is required may the offense be considered legal and legitimate,
and may criminal liability be imposed according to it.
12
See above supra note 5.
13
Scales v. United States, 367 U.S. 203, 81 S.Ct. 1469, 6 L.Ed.2d 782 (1961); Larsonneur, (1933)
24 Cr. App. R. 74, 97 J.P. 206, 149 L.T. 542; ANDREW ASHWORTH, PRINCIPLES OF CRIMINAL LAW 106–
107 (5th ed., 2006); Anderson v. State, 66 Okl.Cr. 291, 91 P.2d 794 (1939); State v. Asher, 50 Ark.
427, 8 S.W. 177 (1888); Peebles v. State, 101 Ga. 585, 28 S.E. 920 (1897); Howard v. State, 73 Ga.
App. 265, 36 S.E.2d 161 (1945); Childs v. State, 109 Nev. 1050, 864 P.2d 277 (1993).
14
See, e.g., Smith v. State, 83 Ala. 26, 3 So. 551 (1888); People v. Brubaker, 53 Cal.2d 37, 346
P.2d 8 (1959); State v. Barker, 128 W.Va. 744, 38 S.E.2d 346 (1946).
15
See, e.g., Commonwealth v. Herd, 413 Mass. 834, 604 N.E.2d 1294 (1992); State v. Curry,
45 Ohio St.3d 109, 543 N.E.2d 1228 (1989); State v. Barrett, 768 A.2d 929 (R.I.2001); State
v. Lockhart, 208 W.Va. 622, 542 S.E.2d 443 (2000).
16
See, e.g., Beason v. State, 96 Miss. 165, 50 So. 488 (1909); State v. Nickelson, 45 La.Ann. 1172,
14 So. 134 (1893); Commonwealth v. Mead, 92 Mass. 398 (1865); Willet v. Commonwealth,
76 Ky. 230 (1877); Scott v. State, 71 Tex.Crim.R. 41, 158 S.W. 814 (1913); Price v. State, 50 Tex.
Crim.R. 71, 94 S.W. 901 (1906).
17
See, e.g., Elk v. United States, 177 U.S. 529, 20 S.Ct. 729, 44 L.Ed. 874 (1900); State v. Bowen,
118 Kan. 31, 234 P. 46 (1925); Hughes v. Commonwealth, 19 Ky.L.R. 497, 41 S.W. 294 (1897);
People v. Cherry, 307 N.Y. 308, 121 N.E.2d 238 (1954); State v. Hooker, 17 Vt. 658 (1845);
Commonwealth v. French, 531 Pa. 42, 611 A.2d 175 (1992).
Each specific offense that fulfills these requirements determines the
requirements needed for the imposition of criminal liability for that specific offense.
Although different specific offenses pose different requirements, the formal logic
behind all offenses and their structure is similar. The common formal logic and
structure are significant attributes of modern criminal liability. In general, these
attributes may be characterized as posing the minimal requirement needed to
impose criminal liability. This means that the specific offense determines only the
lower threshold for the imposition of criminal liability.
18
GABRIEL HALLEVY, THE MATRIX OF DERIVATIVE CRIMINAL LIABILITY 1–61 (2012).
Thus, the offender is required to fulfill at least the requirements of the specific
offense. The general requirements of any specific offense are two: the factual
element requirement and the mental element requirement. In relation to the factual
event, four questions arise:
(a) what has happened?
(b) who has done it?
(c) when has it been done?
(d) where has it been done?
The first question refers to the substantive facts of the event (what has happened).
The second question relates to the identity of the offender. The third
question addresses the time aspect. The fourth question specifies the location of
the event. In some offenses these questions are answered directly within the
definition of the offense. In other offenses some of the questions are answered
through the applicability of the principle of legality in criminal law.20
For instance, the offense "whoever kills another person. . ." does not relate
directly to questions (b), (c) and (d), but these questions are answered through the
applicability of the principle of legality. Because the offense is required to be
general,21 the answer to the question "who has done it?" is any person who is
legally competent. As this type of offense may not be applicable retroactively,22 the
answer to the question "when has it been done?" is from the time the offense came
into force onward. And because this type of offense may not be applicable
extraterritorially beyond the general expansions,23 the answer to the question "where has it been
done?" is within the territorial jurisdiction of the sovereign, subject to the general
expansions.
19
Dugdale, (1853) 1 El. & Bl. 435, 118 Eng. Rep. 499, 500: “. . .the mere intent cannot constitute a
misdemeanour when unaccompanied with any act”; Ex parte Smith, 135 Mo. 223, 36 S.W. 628
(1896); Proctor v. State, 15 Okl.Cr. 338, 176 P. 771 (1918); State v. Labato, 7 N.J. 137, 80 A.2d
617 (1951); Lambert v. State, 374 P.2d 783 (Okla.Crim.App.1962); In re Leroy, 285 Md. 508, 403
A.2d 1226 (1979).
20
For the principle of legality in criminal law see Hallevy, supra note 2.
21
Ibid at pp. 135–137.
22
Ibid at pp. 49–80.
23
Ibid at pp. 81–132.
However, the answer to the question "what has happened?" must be incorporated
directly into the definition of the offense. This question addresses the core of the
offense, and it cannot be answered through the principle of legality. This approach
is the basis of the modern structure of the factual element requirement, which
consists of three main components: conduct, circumstances, and results. The conduct
is a mandatory component, whereas circumstances and results are not. Thus, if
the specific offense is defined as having no conduct requirement, it is not legal:
the courts may not convict individuals based on such a charge, and no criminal
liability may be imposed on anyone accordingly. Thus, the conduct component is at
the heart of the answer to the question "what has happened?".
Status offenses, in which the conduct component is absent, are considered
illegal, and in general they are abolished when discovered.24 But the absence of
circumstances or results in the definition of an offense does not invalidate the
offense.25 These components are aimed at meeting the factual element requirement
with greater accuracy than by conduct alone. Thus, there are four possible formulas
that can satisfy the factual element requirement:
(a) conduct;
(b) conduct + circumstances;
(c) conduct + results; and
(d) conduct + circumstances + results.
For instance, the homicide offenses are very similar in their factual element
requirement, which may be expressed by the formula: "whoever causes the death of
another person. . .". In this typical formula the word "causes" functions as
conduct, the word "person" functions as circumstances, and the word "death"
functions as results. This offense is, consequently, considered to require conduct,
circumstances and results within its factual element requirement. Most
offenses' definitions contain only the factual element requirement, as the mental
element requirement may easily be deduced from the general provisions of the
criminal law.
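The parsing of the homicide formula, and the four formulas that can satisfy the factual element requirement, can be sketched as follows. This is an illustrative sketch only; the class and field names are my own assumptions, not legal terms:

```python
# Illustrative sketch: the factual element of an offense modeled with its
# three possible components. Conduct is mandatory (an offense lacking it is
# a status offense and illegitimate), while circumstances and results are
# optional, yielding the four formulas listed above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FactualElement:
    conduct: Optional[str]                                  # e.g., "causes"
    circumstances: List[str] = field(default_factory=list)  # e.g., ["person"]
    results: List[str] = field(default_factory=list)        # e.g., ["death"]

    def is_legitimate(self) -> bool:
        # Conduct is the only mandatory component.
        return self.conduct is not None

    def formula(self) -> str:
        # Which of the four formulas this offense instantiates.
        parts = ["conduct"]
        if self.circumstances:
            parts.append("circumstances")
        if self.results:
            parts.append("results")
        return " + ".join(parts)

# "Whoever causes the death of another person":
homicide = FactualElement(conduct="causes",
                          circumstances=["person"],
                          results=["death"])
print(homicide.is_legitimate())  # True: conduct is present
print(homicide.formula())        # conduct + circumstances + results

# A status offense lacks the conduct component and is illegitimate:
status = FactualElement(conduct=None, circumstances=["relative of a traitor"])
print(status.is_legitimate())    # False
```

The sketch makes the asymmetry visible: dropping circumstances or results merely changes which formula applies, whereas dropping conduct invalidates the offense altogether.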
The structure of the mental element requirement applies the fundamental princi-
ple of culpability in criminal law (nullum crimen sine culpa). The principle of
culpability has two main aspects: positive and negative. The positive aspect (what
should be in the offender’s mind in order to impose criminal liability) relates to the
mental element, whereas the negative aspect (what should not be in the offender’s
mind in order to impose criminal liability) relates to the general defenses.26
For instance, imposition of criminal liability for wounding another person
requires recklessness as mental element, but it also requires that the offender not
24
See, e.g., in the United States, Robinson v. California, 370 U.S. 660, 82 S.Ct. 1417, 8 L.Ed.2d
758 (1962).
25
GLANVILLE WILLIAMS, CRIMINAL LAW: THE GENERAL PART sec. 11 (2nd ed., 1961).
26
ANDREW ASHWORTH, PRINCIPLES OF CRIMINAL LAW 157–158, 202 (5th ed., 2006).
be insane. Recklessness is part of the positive aspect of culpability, and the general
defense of insanity is part of the negative aspect. The positive aspect of culpability
in criminal law has to do with the involvement of mental processes in the
commission of the offense. In this context, it exhibits two important aspects: cognition and volition.
27
G.R. Sullivan, Knowledge, Belief, and Culpability, CRIMINAL LAW THEORY – DOCTRINES OF THE
GENERAL PART 207, 214 (Stephen Shute and A.P. Simester eds., 2005).
The highest layer of the mental element is that of general intent, which requires
full cognition. The offender is required to be fully aware of the factual reality. This
form involves examination of the offender's subjective mind. Negligence is cognitive
omission: the offender is not required to be aware of the factual element,
although based on objective characteristics he could and should have been aware
of it. Strict liability is the lowest layer of the mental element; it replaces what was
formerly known as absolute liability. Strict liability is a relative (rebuttable) legal presumption
of negligence based on the factual situation alone, which may be rebutted by the
offender.
Cognition relates to the factual reality, as noted above. The relevant factual reality
in criminal law is that which is reflected by the factual element components. From
the perpetrator's point of view, only the conduct and circumstance components of
the factual element exist in the present. The results component occurs in the future.
Because cognition is restricted to the present and the past, it can relate only to
conduct and circumstances.
Although results occur in the future, the possibility of their occurrence ensuing
from the relevant conduct exists in the present, so that cognition can relate not only
to conduct and circumstances, but also to the possibility of the occurrence of the
results. For example, in the case of a homicide, A aims a gun at B and pulls the
trigger. At this point he is aware of his conduct, of the existing circumstances, and
of the possibility of B’s death as a result of his conduct.
Volition is considered immaterial for both negligence and strict liability, and
may be added only to the mental element requirement of general intent, which
embraces all three basic levels of will. Because in most legal systems the default
requirement for the mental element is general intent, negligence and strict liability
offenses must specify explicitly the relevant requirement. The explicit requirement
may be listed as part of the definition of the offense or included in the explicit legal
tradition of interpretation.
If no explicit requirement of this type is mentioned, the offense is classified as a
general intent offense, which is the default requirement. The relevant requirement
may be met not only by the same form of mental element, but also by a higher level
form. Thus, the mental element requirement of the offense is the minimal level of
mental element needed to impose criminal liability.28 A lower level is insufficient
for imposing criminal liability for the offense.
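The minimal-level rule described here can be sketched as an ordering over the three layers named above (general intent, negligence, strict liability). This is only an illustrative sketch; the layer identifiers and function name are my own:

```python
# Illustrative sketch of the minimal-requirement rule: the offense's mental
# element requirement is a floor, so any equal or higher layer satisfies it,
# while any lower layer is insufficient. The layers are ordered from lowest
# to highest as described in the text.
LAYERS = ["strict_liability", "negligence", "general_intent"]

def satisfies(actual: str, required: str) -> bool:
    # The actual mental state meets the offense's requirement if it sits at
    # the same level or higher in the hierarchy.
    return LAYERS.index(actual) >= LAYERS.index(required)

print(satisfies("general_intent", "negligence"))    # higher form satisfies lower
print(satisfies("strict_liability", "negligence"))  # lower form is insufficient
```

The same floor logic is what article 2.02(5) of the Model Penal Code, quoted in the footnote below, expresses for its own four-level hierarchy of culpability.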
According to the modern structure of mental element requirement, each specific
offense embodies the minimal requirements for the imposition of criminal liability,
28
See, e.g., article 2.02(5) of THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT
AND EXPLANATORY NOTES 22 (1962, 1985), which provides:
When the law provides that negligence suffices to establish an element of an offense, such
element also is established if a person acts purposely, knowingly or recklessly. When
recklessness suffices to establish an element, such element also is established if a person
acts purposely or knowingly. When acting knowingly suffices to establish an element, such
element also is established if a person acts purposely.
and the fulfillment of these requirements is adequate for the imposition of criminal
liability. No additional psychological meanings are required. Thus, any individual
who fulfills the minimal requirements of the relevant offense is considered to be an
offender, and criminal liability may be imposed upon him.
The offender under modern criminal law is not required to be immoral or evil,
but only to fulfill all the requirements of the offense. In this way, the imposition of
criminal liability is highly technical and rational. This legal situation has two main
aspects: structural and substantive. For instance, if the mental element of the
specific offense requires only "awareness", no other component of the mental element
is required (the structural aspect), and the required "awareness" is defined by criminal
law regardless of its meaning in psychology, philosophy, theology, etc. (the substantive
aspect).
However, it cannot be denied that the structure of criminal liability has been
designed for humans and for human capabilities, not for other creatures
or their capabilities. The mental element requirement relies on the
human spirit, soul and mind. The inevitable question at this point is whether
artificial intelligence technologies can be examined through human standards of
spirit, soul and mind. The deeper, though not strictly legal, question is how criminal liability,
based upon these insights, can be imposed upon spiritless and soulless entities.
It should be noted that although the insights of criminal liability rely on the human
spirit and soul, the imposition of criminal liability itself is not dependent upon
these terms of deep psychological meaning. If an offender fulfills both the factual and
mental element requirements of the specific offense, criminal liability may be
imposed, with or without spirit or soul.
In fact, this understanding is not very new, and perhaps not very innovative, in the
twenty-first century. It was preceded by the same understanding in the
seventeenth century in relation to corporations. Although modern artificial intelligence
technology had not yet been invented in the seventeenth century, there were
other non-human creatures which committed offenses, and it was necessary to
subject them to criminal law. These legal creatures had neither spirit nor soul, but
criminal liability could nevertheless be imposed upon them. This type of imposition of
criminal liability may be used as a legal model for the imposition of criminal liability
on artificial intelligence systems.
The full meaning of the imposition of criminal liability upon any offender combines
both the responsibility (criminal liability) and its consequences (punishment).
Imposing criminal liability without the capability of sentencing may often be
meaningless in social terms and in terms of legal social control. These two aspects of the
imposition of criminal liability upon legal entities are discussed below. The most
available legal entities in this context are corporations.
29
William S. Laufer, Corporate Bodies and Guilty Minds, 43 EMORY L. J. 647 (1994); Kathleen
F. Brickey, Corporate Criminal Accountability: A Brief History and an Observation, 60 WASH.
U. L. Q. 393 (1983).
30
WILLIAM SEARLE HOLDSWORTH, A HISTORY OF ENGLISH LAW 475–476 (1923).
31
William Searle Holdsworth, English Corporation Law in the 16th and 17th Centuries, 31 YALE
L. J. 382 (1922).
32
WILLIAM ROBERT SCOTT, THE CONSTITUTION AND FINANCE OF ENGLISH, SCOTISH AND IRISH JOINT-
STOCK COMPANIES TO 1720 462 (1912).
33
BISHOP CARLETON HUNT, THE DEVELOPMENT OF THE BUSINESS CORPORATION IN ENGLAND 1800–1867
6 (1963).
34
See, e.g., 6 Geo. I, c.18 (1719).
included criminal offenses. The relevant offense used was public nuisance.35 This
trend of legislation deepened as the revolution progressed, and in the
nineteenth century most developed countries already had highly developed legislation
concerning corporations in various contexts. This legislation included criminal
offenses as well, so that it would be effective. The conceptual question which was raised
was how criminal liability could be imposed upon corporations.
Criminal liability requires a factual element, whereas corporations possess no
physical body. Criminal liability also requires a mental element, whereas
corporations have no mind, brain, spirit or soul.36 Some countries in Europe refused
to impose criminal liability upon non-human creatures, and revived the Roman rule
that corporations are not subject to criminal liability (societas delinquere non
potest). This approach was very problematic and created legal shelters for
offenders.
For instance, when an individual does not pay his taxes, he is criminally liable,
but when this individual is a corporation, it is exempt. Consequently, there is an
incentive to work through corporations and evade tax payments. These countries
eventually subjected corporations to criminal law, but not until the twentieth
century. However, the Anglo-American legal tradition preferred to accept the idea
of criminal liability of corporations due to its vast social advantages and benefits.
Thus, in 1635, for the first time, a corporation was convicted and
criminal liability was imposed upon it.37 This was a relatively primitive structure of
imposition of criminal liability, for it relied on vicarious liability. However, this type
of liability enabled the courts to impose criminal liability upon corporations
separately from the criminal liability of any owner, worker or shareholder of the
corporation. This structure continued to be relevant in both the eighteenth and
nineteenth centuries.38 The major disadvantage of this criminal liability structure based
on vicarious liability was that it required valid vicarious relations between the
corporation and another entity, which in most cases happened to be human,
although it could have been another corporation.39
35. New York & G.L.R. Co. v. State, 50 N.J.L. 303, 13 A. 1 (1888); People v. Clark, 8 N.Y.Cr.
169, 14 N.Y.S. 642 (1891); State v. Great Works Mill. & Mfg. Co., 20 Me. 41, 37 Am.Dec. 38
(1841); Commonwealth v. Proprietors of New Bedford Bridge, 68 Mass. 339 (1854); Common-
wealth v. New York Cent. & H. River R. Co., 206 Mass. 417, 92 N.E. 766 (1910).
36. John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry into the
Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981).
37. Langforth Bridge, (1635) Cro. Car. 365, 79 Eng. Rep. 919.
38. Clifton (Inhabitants), (1794) 5 T.R. 498, 101 Eng. Rep. 280; Great Broughton (Inhabitants),
(1771) 5 Burr. 2700, 98 Eng. Rep. 418; Stratford-upon-Avon Corporation, (1811) 14 East 348, 104
Eng. Rep. 636; Liverpool (Mayor), (1802) 3 East 82, 102 Eng. Rep. 529; Saintiff, (1705) 6 Mod.
255, 87 Eng. Rep. 1002.
39. Severn and Wye Railway Co., (1819) 2 B. & Ald. 646, 106 Eng. Rep. 501; Birmingham, &c.,
Railway Co., (1842) 3 Q. B. 223, 114 Eng. Rep. 492; New York Cent. & H.R.R. v. United States,
212 U.S. 481, 29 S.Ct. 304, 53 L.Ed. 613 (1909); United States v. Thompson-Powell Drilling Co.,
196 F.Supp. 571 (N.D.Tex.1961); United States v. Dye Construction Co., 510 F.2d 78 (10th
Consequently, when the human entity acted without permission (ultra vires), the corporation was exempt. To secure this exemption, it was enough to include in the corporation’s papers a general provision prohibiting the commission of any criminal offense on behalf of the corporation.40 As a result, this model of corporate criminal liability had to be replaced, as happened in the late nineteenth and early twentieth centuries in Anglo-American legal systems.41 The new model was based on the identity theory.
In some types of cases the criminal liability of a corporation derives from its organs, and in other types its criminal liability is independent. When the criminal offense requires an omission (e.g., not paying taxes, not fulfilling legal requirements, not observing workers’ rights, etc.) and the duty to act is the corporation’s, the corporation is criminally liable independently, regardless of the criminal liability of any other entity, human or not. When the criminal offense requires an act, its organs’ acts are attributed to it if committed on its behalf, whether by permission or not.42 The same structure works for the mental element, whether of general intent, negligence, or strict liability.43
As a result, the criminal liability of the corporation is direct, not vicarious or indirect.44 If all requirements of the specific offense are met by the corporation, it is indicted regardless of any proceedings against any human entity. If convicted, the corporation is punished separately from any human entity. Punishments imposed on corporations are considered no less effective than those imposed on humans. However, the main significance of the modern legal structure of corporate criminal liability is conceptual.
Since the seventeenth century, criminal liability has not been unique to humans. Other, non-human entities are also subject to criminal law, and it works most efficiently. Indeed, some adjustments were necessary for this legal structure to be applicable, but eventually non-human corporations became subject to criminal law. For any modern society this seems natural, and so it should be. If the first barrier has been
Cir.1975); United States v. Carter, 311 F.2d 934 (6th Cir.1963); State v. I. & M. Amusements, Inc.,
10 Ohio App.2d 153, 226 N.E.2d 567 (1966).
40. United States v. Alaska Packers’ Association, 1 Alaska 217 (1901).
41. United States v. John Kelso Co., 86 F. 304 (Cal.1898); Lennard’s Carrying Co. Ltd. v. Asiatic
Petroleum Co. Ltd., [1915] A.C. 705.
42. Director of Public Prosecutions v. Kent and Sussex Contractors Ltd., [1944] K.B. 146, [1944]
1 All E.R. 119; I.C.R. Haulage Ltd., [1944] K.B. 551, [1944] 1 All E.R. 691; Seaboard Offshore
Ltd. v. Secretary of State for Transport, [1994] 2 All E.R. 99, [1994] 1 W.L.R. 541, [1994]
1 Lloyd’s Rep. 593.
43. Granite Construction Co. v. Superior Court, 149 Cal.App.3d 465, 197 Cal.Rptr. 3 (1983);
Commonwealth v. Fortner L.P. Gas Co., 610 S.W.2d 941 (Ky.App.1980); Commonwealth
v. McIlwain School Bus Lines, Inc., 283 Pa.Super. 1, 423 A.2d 413 (1980); Gerhard O. W.
Mueller, Mens Rea and the Corporation – A Study of the Model Penal Code Position on Corporate
Criminal Liability, 19 U. PITT. L. REV. 21 (1957).
44. Hartson v. People, 125 Colo. 1, 240 P.2d 907 (1951); State v. Pincus, 41 N.J.Super. 454, 125
A.2d 420 (1956); People v. Sakow, 45 N.Y.2d 131, 408 N.Y.S.2d 27, 379 N.E.2d 1157 (1978).
crossed in the seventeenth century, the road for crossing another barrier may be
open for the imposition of criminal liability upon artificial intelligence systems.
2.2.2 Punishments
Given that sentencing considerations are relevant for corporations, the question is how society can impose human punishments upon them. For instance, how can society impose imprisonment, a fine, or the capital penalty upon a corporation? That requires a legal technique of conversion from human penalties to corporate penalties. In fact, not only has criminal liability been imposed upon corporations for centuries, but corporations have been sentenced as well, and not only by fines.
Corporations are punished by various punishments, including imprisonment. It should be noted that the corporation is punished separately from its human officers (directors, managers, employees, etc.), exactly as criminal liability is imposed upon it separately from the criminal liability of those human officers, if any. There is no debate over whether corporations should be punished by various punishments, such as imprisonment; the question concerns only the actual way to do so.45
To answer the question of “how”, a general legal technique of conversion is necessary. This general technique contains three major stages as follows:
(a) The general punishment itself (e.g., imprisonment, fine, probation, death, etc.) is analyzed as to its roots of meaning;
(b) these roots are searched for in the corporation; and
(c) the punishment is adjusted to these roots in the corporation.
45. Stuart Field and Nico Jorg, Corporate Liability and Manslaughter: Should We Be Going
Dutch?, [1991] Crim. L.R. 156 (1991).
44 2 Basic Requirements of Modern Criminal Liability
46. Gerard E. Lynch, The Role of Criminal Law in Policing Corporate Misconduct, 60 LAW &
CONTEMP. PROBS. 23 (1997); Richard Gruner, To Let the Punishment Fit the Organization:
Sanctioning Corporate Offenders Through Corporate Probation, 16 AM. J. CRIM. L. 1 (1988);
Steven Walt and William S. Laufer, Why Personhood Doesn’t Matter: Corporate Criminal
Liability and Sanctions, 18 AM. J. CRIM. L. 263 (1991).
47. United States v. Allegheny Bottling Company, 695 F.Supp. 856 (1988).
48. Ibid, at p. 858.
49. Ibid.
50. John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry Into the
Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981); STEVEN BOX, POWER, CRIME AND
MYSTIFICATION 16–79 (1983); Brent Fisse and John Braithwaite, The Allocation of Responsibility
for Corporate Crime: Individualism, Collectivism and Accountability, 11 SYDNEY L. REV.
468 (1988).
51. Allegheny Bottling Company case, supra note 48, at p. 861.
At this point the court could implement the imprisonment penalty upon corporations accordingly. Thus, at the third and final stage, the court made imprisonment applicable to corporations and actually implemented corporate imprisonment as follows:
Such restraint of individuals is accomplished by, for example, placing them in the custody
of the United States Marshal. Likewise, corporate imprisonment can be accomplished
by simply placing the corporation in the custody of the United States Marshal. The
United States Marshal would restrain the corporation by seizing the corporation’s
physical assets or part of the assets or restricting its actions or liberty in a particular
manner. When this sentence was contemplated, the United States Marshal for the
Eastern District of Virginia, Roger Ray, was contacted. When asked if he could
imprison Allegheny Pepsi, he stated that he could. He stated that he restrained
corporations regularly for bankruptcy court. He stated that he could close the physical
plant itself and guard it. He further stated that he could allow employees to come and go
and limit certain actions or sales if that is what the Court imposes.
Richard Lovelace said some three hundred years ago, ‘stone walls do not a prison make,
nor iron bars a cage.’ It is certainly true that we erect our own walls or barriers that
restrain ourselves. Any person may be imprisoned if capable of being restrained in
some fashion or in some way, regardless of who imposes it. Who am I to say that
imprisonment is impossible when the keeper indicates that it can physically be done?
Obviously, one can restrain a corporation. If so, why should it be more privileged than
an individual citizen? There is no reason, and accordingly, a corporation should not be
more privileged.
Cases in the past have assumed that corporations cannot be imprisoned, without any cited
authority for that proposition. . . . This Court, however, has been unable to find any case
which actually held that corporate imprisonment is illegal, unconstitutional or impossi-
ble. Considerable confusion regarding the ability of courts to order a corporation
imprisoned has been caused by courts mistakenly thinking that imprisonment necessar-
ily involves incarceration in jail. . . . But since imprisonment of a corporation does not
necessarily involve incarceration, there is no reason to continue the assumption, which
has lingered in the legal system unexamined and without support, that a corporation
cannot be imprisoned. Since the Marshal can restrain the corporation’s liberty and has
done so in bankruptcy cases, there is no reason that he cannot do so in this case as he
himself has so stated prior to the imposition of this sentence.52
Thus, imprisonment is actually applied not only to human offenders but also to corporate offenders. Through this approach, not only imprisonment but any other type of penalty becomes applicable to corporations, even if originally designed for human offenders. Imprisonment is a distinctly human penalty, whereas a fine may easily be collected from a corporation (the same way taxes are collected, for example); yet even imprisonment is actually applied and imposed upon corporations.
52. Ibid, at p. 861.
3 External Element Involving Artificial Intelligence Systems
Contents
3.1 The General Structure of the External Element 47
3.1.1 Independent Offenses 47
3.1.2 Derivative Criminal Liability 49
3.2 Commission of External Element Components by Artificial Intelligence Technology 60
3.2.1 Conduct 60
3.2.2 Circumstances 63
3.2.3 Results and Causation 65
The external element of criminal liability is reflected in the factual element requirement of the offense. The general structure of the factual element requirement is consolidated for all types of criminal liability. Nevertheless, it may be more convenient to divide the discussion into the general structure within independent offenses and within derivative criminal liability.
The structure of the factual element requirement is common to most modern legal systems. This structure applies the fundamental principle of conduct in criminal liability, and it is identical for all types of offenses, regardless of their mental element requirement. The factual element requirement is the broad
Together, these aspects form the basic description of the commission of any offense as required for the imposition of criminal liability. This basic and general description may be stated in the general formula “Something (1) has been done by someone (2) sometime (3) somewhere (4)”. Of course, there may be other factual aspects of the event, but modern criminal law is content with these four.
The first factual aspect refers to the substantive facts of the event (what has happened). The second relates to the identity of the offender. The third addresses the time of the event. The fourth specifies the location of the event. In some offenses these factual aspects are answered directly within the definition of the offense. In other offenses some of the questions are answered through the applicability of the principle of legality in criminal law.2
For instance, the offense “whoever kills another person. . .” does not relate directly to factual aspects (b), (c), and (d), but these factual aspects are answered through the applicability of the principle of legality. Because the offense is likely to be general,3 the answer to the factual aspect concerning the offender’s identity is any legally competent person. As this type of offense may not be applied retroactively,4 the answer to the factual aspect of the event’s time is from the time the offense came into force onward. And because this type of offense may not be applied extraterritorially except under general expansions,5 the answer for the aspect of the event’s location is within the territorial jurisdiction of the sovereign, subject to those general expansions.
Nevertheless, the answer to the factual aspect concerning the general description of the occurrence must be incorporated directly into the definition of the offense. This factual aspect addresses the core of the offense, and it cannot be answered through the principle of legality. This approach is the basis of the modern structure of the
1. Dugdale, (1853) 1 El. & Bl. 435, 118 Eng. Rep. 499, 500:
. . .the mere intent cannot constitute a misdemeanour when unaccompanied with any act;
Ex parte Smith, 135 Mo. 223, 36 S.W. 628 (1896); Proctor v. State, 15 Okl.Cr. 338, 176 P. 771
(1918); State v. Labato, 7 N.J. 137, 80 A.2d 617 (1951); Lambert v. State, 374 P.2d 783 (Okla.
Crim.App.1962); In re Leroy, 285 Md. 508, 403 A.2d 1226 (1979).
2. For the principle of legality in criminal law see GABRIEL HALLEVY, A MODERN TREATISE ON THE
PRINCIPLE OF LEGALITY IN CRIMINAL LAW (2010).
3. Ibid at pp. 135–137.
4. Ibid at pp. 49–80.
5. Ibid at pp. 81–132.
(a) conduct;
(b) conduct + circumstances;
(c) conduct + results; and
(d) conduct + circumstances + results.
In all four formulas the conduct component is required; otherwise the offense is not considered legal, and no criminal liability may be imposed for committing it. For instance, the homicide offenses are very similar in their factual element requirement, which may be expressed by the formula “whoever causes the death of another person. . .”. In this typical formula the word “causes” functions as the conduct, the word “person” functions as a circumstance, and the word “death” functions as a result. Consequently, this offense is considered to require conduct, circumstances, and results within its factual element requirement. Most offense definitions contain only the factual element requirement, as the mental element requirement may easily be deduced from the general provisions of the criminal law.
Each form of derivative criminal liability must meet factual element requirements. These requirements are formed within a general template into which content is filled. This includes the templates for criminal attempt, joint-perpetration, perpetration-through-another, incitement, and accessoryship, and the content that must be filled into each of these templates.
6. See, e.g., Robinson v. California, 370 U.S. 660, 82 S.Ct. 1417, 8 L.Ed.2d 758 (1962).
7. GLANVILLE WILLIAMS, CRIMINAL LAW: THE GENERAL PART sec. 11 (2nd ed., 1961).
rape.8 An example of the absence of the results component is the above case of attempted murder.
The absence of one component may form the upper borderline of the factual element requirement for the attempt. The question then arises: what is the minimal factual element requirement for the criminal attempt? The answer overlaps the lower borderline of the criminal attempt according to its general course. The minimal factual element may consist of the absence of all three components: conduct, circumstances, and results. Such a sweeping absence, however, requires that the delinquent event enter the range of the criminal attempt, so that only after the attempter has made the decision to execute the criminal plan can the delinquent event be considered an attempt.
If, according to the criminal plan, after the decision to execute the plan has been made, the attempter begins executing the plan by inaction, and none of the factual element requirements is met, the action is still considered an attempt. Thus, the general template for the factual element requirement of the criminal attempt is relative to the object-offense. Absence of any factual element component of the factual element requirement of the object-offense can still satisfy the factual element requirement of the criminal attempt, as long as the event has entered the range of an attempt, i.e., the decision to commit the offense has been made.
The general template requirement of joint-perpetration is that all factual element
components of the object-offense be met in the case of joint-perpetration as if the
offense were perpetrated by one person. The factual element components are not
required to be present separately for each of the joint-perpetrators as if they were
separate principal perpetrators, but the required factual elements may be met jointly
by the joint-perpetrators provided that all components are present. The exact
division of the relevant components among the joint-perpetrators is immaterial;
some of the components may be fulfilled by one of the joint-perpetrators and some
by others.
There is no minimal portion of the factual element components that must be physically committed by each of the joint-perpetrators. This requirement treats all
members of the joint enterprise as one body that is liable for the commission of the
offense. This body may have different organs, but it is not necessary to relate
specific components to specific organs. By analogy, the different components of
murder need not be related to different organs of the murderer (e.g., stabbing to his
hand, standing in front of the victim to his legs, etc.); similarly, it is not required to
relate each of these components specifically to individuals among the joint-
perpetrators.
It is sufficient to relate the complete fulfillment of the factual element require-
ment jointly to the offenders as one body in order to impose criminal liability on the
joint-perpetrators.9 Commission of the offense through joint-perpetration is the
8. Gabriel Hallevy, Victim’s Complicity in Criminal Law, 2 INT’L J. OF PUNISHMENT & SENTENCING
72 (2006).
9. Manley, (1844) 1 Cox C.C. 104; State v. Bailey, 63 W.Va. 668, 60 S.E. 785 (1908).
10. Nye & Nissen v. United States, 336 U.S. 613, 69 S.Ct. 766, 93 L.Ed. 919 (1949); Pollack
v. State, 215 Wis. 200, 253 N.W. 560 (1934); Baker v. United States, 21 F.2d 903 (4th Cir.1927);
Miller v. State, 25 Wis. 384 (1870); Anderson v. Superior Court, 78 Cal.App.2d 22, 177 P.2d
315 (1947); People v. Cohen, 68 N.Y.S.2d 140 (1947); People v. Luciano, 277 N.Y. 348, 14 N.
E.2d 433 (1938); State v. Bruno, 105 F.2d 921 (2nd Cir.1939); State v. Walton, 227 Conn. 32, 630
A.2d 990 (1993).
hurt some reasonable doubt defenses based on the accurate identification of the
actual (physical) offender.
For example, A and B, two identical twins, conspire to murder C. According to
the plan, A is to kill C in a mall where he would be filmed by the security cameras,
while at exactly the same time B would commit theft in another mall where he
would also be filmed by security cameras. At their trial they argue for reasonable
doubt because the prosecution cannot identify the actual (physical) murderer
beyond reasonable doubt. Another example is that of D and E, who tie F to a tree.
Both of them shoot F, who is killed, but only one bullet is found in F’s corpse, and it
is not known whose bullet it was. At their trial they argue for reasonable doubt
because the prosecution cannot identify the actual murderer beyond reasonable
doubt.
In both examples, the legal consequence is the conviction of both offenders as joint-perpetrators of murder. The collective conduct concept eliminates the need for
relating the components of the factual element exactly to the individual physical
perpetrators. In both examples, the offenders conspired jointly to commit the
murder, and executed their plan accordingly. They are both joint-perpetrators,
regardless of their physical contribution to the commission of the offense.11
Analytically, there can be four types of collective conduct in joint-perpetration. The first is a non-overlapping division of the factual element components, with the components divided among the joint-perpetrators without overlap of the same components across different joint-perpetrators, so that one perpetrator is responsible for each component. Consider, for example, the case of an offense that contains four components of the factual element and three joint-perpetrators.
This type of non-overlapping division refers to situations in which only one
perpetrator is responsible for each component, but jointly all components of the
factual element are covered by the delinquent group acting as one body. The legal
consequence is that all joint-perpetrators are criminally liable for the commission of
the offense, regardless of their particular role in the actual perpetration, as long as
they are classified as joint-perpetrators (they participated in the conspiracy, decided
to execute the criminal plan, and began its execution).
The second type is a partially-overlapping division of the factual element components, with the components divided among the joint-perpetrators and partial overlap of the same components across different perpetrators. Consequently,
several perpetrators are responsible for some of the components. This type of
partially-overlapping division refers to situations in which some joint-perpetrators,
but not all, are responsible for some of the components of the factual element, but
jointly all components of the factual element are covered by the delinquent group
acting as one body. The legal consequence is that all joint-perpetrators are crimi-
nally liable for the commission of the offense, regardless of their particular role in
the actual perpetration, as long as they are classified as joint-perpetrators.12
11. United States v. Bell, 812 F.2d 188 (5th Cir.1987).
12. Harley, (1830) 4 Car. & P. 369, 172 Eng. Rep. 744.
In the third type all the factual element components are covered by one of the joint-perpetrators, who is responsible for all the components. Concentration of all factual element components in one joint-perpetrator is typical of offenses committed by hierarchical criminal organizations, where the executive level (leadership, inspection, advising, etc.) is separated from the operative members of the organization. This type of collective conduct does not necessarily result in the concentration of all components in the person of one joint-perpetrator; occasionally they are covered by several joint-perpetrators.
Concentration of all the factual element components in several joint-perpetrators
is typical of offenses committed by criminal organizations, when the operational
level is separated from the leadership. In smaller criminal organizations the leader
directs the commission of the offense at the conspiracy stage, and after the joint
decision has been made he steps aside and does not participate in the actual
execution of the criminal plan. Nevertheless, all the components of the factual
element have been fulfilled by the delinquent group that acted as one body. The
legal consequence is that all joint-perpetrators are criminally liable for the commis-
sion of the offense, including the leader, regardless of their specific role in the
actual perpetration, as long as they are classified as joint-perpetrators.13
In the fourth type, all the joint-perpetrators cover all the factual element
components. This type of coverage of all factual element components by all the
joint-perpetrators refers to situations in which all the perpetrators fully complete the
offense separately. In this type of situation, criminal liability may be imposed on each of the perpetrators as a principal perpetrator, regardless of their joint enterprise,
and the advantage of the law of joint-perpetration with respect to the factual
element becomes insignificant. Participation in the conspiracy does not guarantee
the imposition of criminal liability in joint-perpetration, unless execution of the
criminal plan has begun.
If the offense has been committed separately by the various offenders, regardless of any previous conspiracy, this is no longer joint-perpetration but multi-principal perpetration.14 But if the conspiracy produces a plan according to which
13. Bingley, (1821) Russ. & Ry. 446, 168 Eng. Rep. 890; State v. Adam, 105 La. 737, 30 So.
101 (1901); Roney v. State, 76 Ga. 731 (1886); Smith v. People, 1 Colo. 121 (1869); United States
v. Rodgers, 419 F.2d 1315 (10th Cir.1969).
14. Pinkerton v. United States, 328 U.S. 640, 66 S.Ct. 1180, 90 L.Ed. 1489 (1946); State v. Cohen,
173 Ariz. 497, 844 P.2d 1147 (1992); State v. Carrasco, 124 N.M. 64, 946 P.2d 1075 (1997);
People v. McGee, 49 N.Y.2d 48, 424 N.Y.S.2d 157, 399 N.E.2d 1177 (1979); State v. Stein,
94 Wash.App. 616, 972 P.2d 505 (1999); Commonwealth v. Perry, 357 Mass. 149, 256 N.E.2d
745 (1970); United States v. Buchannan, 115 F.3d 445 (7th Cir.1997); United States v. Alvarez,
755 F.2d 830 (11th Cir.1985); United States v. Chorman, 910 F.2d 102 (4th Cir.1990); United
States v. Moreno, 588 F.2d 490 (5th Cir.1979); United States v. Castaneda, 9 F.3d 761 (9th
Cir.1993); United States v. Walls, 225 F.3d 858 (7th Cir.2000); State v. Duaz, 237 Conn. 518, 679
A.2d 902 (1996); Harris v. State, 177 Ala. 17, 59 So. 205 (1912); Apostoledes v. State, 323 Md.
456, 593 A.2d 1117 (1991); State v. Anderberg, 89 S.D. 75, 228 N.W.2d 631 (1975); Espy v. State,
54 Wyo. 291, 92 P.2d 549 (1939); State v. Hope, 215 Conn. 570, 577 A.2d 1000 (1990).
the offense is to be committed separately by all the offenders (e.g., one serving as a
backup for the other), the offense is joint-perpetration.
The general template requirement of perpetration-through-another is that all
factual element components of the offense are covered as if by one person. Thus,
the components of the physical acts of both the perpetrator-through-another and of
the other person are considered as if they were committed by one person. It is not
necessary for each person to separately meet the requirements for factual element
components as if they were separate principal perpetrators. The factual element
requirement may be met jointly by both persons, provided that all the components
are present. The exact division of the relevant components between the two persons is
immaterial. Some of the components may be covered by one person, some by the
other.
There is no minimal portion of the factual element components that must be
physically committed by either of the two. The template requirement regards the perpetrator-through-another as the person responsible for the commission of the offense as principal perpetrator, whether he committed the offense directly or used another person instrumentally, denying that person’s free choice. The arm of the perpetrator
may be greatly extended through the other person, but the arm functions as a mere
instrument, without a real opportunity to choose. It is immaterial which person
physically fulfills the factual element requirement as long as the perpetrating body
as a whole covers all the components.15
The criminal liability of the perpetrator-through-another for the commission of the offense is based on the criminal plan completed earlier by the perpetrator-through-another, and on the execution of that plan by another person being used instrumentally. For example, A wishes to rob a bank and plans
for B to break into the bank and remove the money from the safe. The plan is
executed while A is far away from the bank. Apparently, B provides the physical
fulfillment of the factual element components of robbery, whereas A did not
participate physically in the robbery. But because of the instrumental use of B by A,
A is considered the perpetrator-through-another of the robbery, although he did not
physically fulfill any specific factual element component.
According to this view of the factual element requirement, the perpetrator-through-another is conceptualized as the principal perpetrator, functionally identical with
any other principal perpetrator who instrumentally uses other devices for the
commission of the offense. The fact that in this case the “device” happens to be
human is immaterial in this context. Analytically, the factual element requirement
can be met in four ways in the case of perpetration-through-another.
The first way is a non-overlapping division of the factual element components,
with the components divided between the perpetrator-through-another and the other
person without overlap of the same components between them, so that one person is
responsible for each component. Consider the case of an offense that contains four
components of the factual element, and there are two persons: one perpetrator-
15. Dusenbery v. Commonwealth, 220 Va. 770, 263 S.E.2d 392 (1980).
16. NICOLA LACEY AND CELIA WELLS, RECONSTRUCTING CRIMINAL LAW – CRITICAL PERSPECTIVES ON
CRIME AND THE CRIMINAL PROCESS 53 (2nd ed., 1998).
17. Butt, (1884) 49 J.P. 233, 15 Cox C.C. 564, 51 L.T. 607, 1 T.L.R. 103; Stringer and Banks, (1991)
94 Cr. App. Rep. 13; Manley, (1844) 1 Cox C.C. 104; Mazeau, (1840) 9 Car. & P. 676, 173 Eng.
Rep. 1006.
18. Compare article 26 of the German penal code, which provides:
Als Anstifter wird gleich einem Täter bestraft, wer vorsätzlich einen anderen zu dessen vorsätzlich begangener rechtswidriger Tat bestimmt hat (whoever intentionally induces another to commit his intentionally perpetrated unlawful act is punished as an inciter in the same way as a perpetrator);
Est également complice la personne qui par don, promesse, menace, ordre, abus d’autorité ou de pouvoir aura provoqué à une infraction ou donné des instructions pour la commettre (a person who, by gift, promise, threat, order, or abuse of authority or power, has provoked an offense or given instructions to commit it, is likewise an accomplice);
Section 5.02(1) of THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT AND EXPLANATORY NOTES 76 (1962, 1985) provides:
The factual element requirement of incitement is not derived from the object-offense,
because incitement relies on external derivation and its components are not dependent
on the factual element of the object-offense. The conduct component consists of
measures taken by the inciter to cause the incited person to make the decision to
commit the offense freely and with full awareness of his acts. The purpose of these
measures stands in contrast to the instrumental use of the incited person in the
commission of the offense. The measures of incitement
may include seduction, solicitation, convincing, encouragement, abetting, advice,
and more. As long as the measures do not contradict the aware and free choice of
the incited person, they may be considered to be part of the incitement.
The circumstances component of incitement consists of the identity of the
incited person. The incited person cannot be the same person as the inciter. This
requirement is intended not only to eliminate cases of self-principal-incitement,
which are classified as principal perpetration, the offender having persuaded him-
self to commit the offense; it is also meant to differentiate incitement from
conspiracy. If the inciter participates in the early planning of the offense, he is
not considered an inciter but a conspirator or joint-perpetrator.
For example, A solicits B to jointly commit an offense, and B agrees to A’s
proposal. This agreement is a clear expression of conspiracy, not of incitement. As
both perpetrators plan the commission of the offense together and agree to commit
it, A’s conduct is part of the conspiracy. Conspiracy may include the inner efforts of
some of the conspirators, which may resemble incitement but still be conspiracy.
The result component of incitement consists of the decision of the incited person to
commit the offense. Successful incitement does not necessarily include the actual
commission of the offense.
Because incitement consists of “planting” the delinquent idea in the incited
person’s mind, this purpose is fulfilled when the incited person enters the sphere
of social endangerment, which begins with the decision made freely and with full
awareness to commit the offense. It is immaterial, therefore, as far as the inciter’s
criminal liability is concerned, whether or not the incited person completed the
commission of the offense. The incitement is complete when the incited person
makes the decision to commit the offense. All three components must be present in
order to impose criminal liability for incitement.
The general template for the factual element requirement of accessoryship
includes any conduct characterized not by factual attributes but by its purpose, as
expressed by the mental element requirement of accessoryship.19 That purpose is to
render assistance to the perpetrators of the offense, and it does not relate directly to
19. See, e.g., article 27(1) of the German penal code, which provides: "Whoever intentionally assists another person in the intentional commission of an unlawful act shall be punished as an aider" (Als Gehilfe wird bestraft, wer vorsätzlich einem anderen zu dessen vorsätzlich begangener rechtswidriger Tat Hilfe geleistet hat); article 121-7 of the French penal code, which provides: "A person who knowingly, by aid or assistance, has facilitated the preparation or commission of a crime or misdemeanor is an accomplice" (Est complice d'un crime ou d'un délit la personne qui sciemment, par aide ou assistance, en a facilité la préparation ou la consommation);
the commission of the offense. In other words, the accessory’s purpose is not the
successful commission of the offense, but to render assistance to the perpetrators.
The accessory’s motive may be to contribute to the completion of the offense, but
this is not necessary for the imposition of criminal liability for accessoryship.
The factual element requirement of accessoryship is similar to that of conduct
offenses. The core of the factual element of accessoryship is the conduct that is
intended to render assistance to the perpetrators, which includes any measures taken
by the accessory to render assistance to the perpetrators. These measures may
include various types of assistance, according to the accessory’s understanding;
in different offenses, the accessories may use different types of conduct.
The circumstance component of accessoryship consists of the timing of the
accessoryship: the assisting conduct must be rendered before the commission of
the offense or simultaneously with it. If the assisting conduct has been rendered
after the completion of the offense, it is no longer accessoryship. If the accessory
renders the assistance after the completion of the offense, according to the early
planning in which he participated, he is a joint-perpetrator. If the accessory did not
participate in the early planning, he is considered to be an accessory after the fact.
The factual element of accessoryship does not require the component of results,
and therefore the accessory is not required to render effective assistance to the
perpetrators or to contribute to the actual commission of the offense. Even if the
accessory interferes with the commission of the offense or prevents its completion,
his action is still considered to be accessoryship. If the accessory subjectively
considered his conduct to be rendering assistance to the perpetrators and committed
his act with this purpose, the factual element of the accessoryship is satisfied.
For example, A knows that B intends to break into C's house at a certain time.
With the purpose of assisting B, A calls C outside so that C will not oppose B. C
walks out of the house but locks the door behind him, making the burglary more
difficult to commit. Although A in practice hindered the commission of the offense,
he is still considered an accessory, because according to A's subjective understanding,
his conduct was intended to render assistance to B in committing the offense.
Because this was the purpose of A’s action and because it occurred before the
commission of the offense, A’s conduct is considered accessoryship. Both the
conduct and circumstance components must be present in order to impose criminal
liability for accessoryship.
Article 8 of the Accessories and Abettors Act, 1861, 24 & 25 Vict. c.94 as amended by the
Criminal Law Act, 1977, s. 65(4) provides:
Whosoever shall aid, abet, counsel, or procure the commission of any indictable offence,
whether the same be an offence at common law or by virtue of any Act passed, shall be
liable to be tried, indicted, and punished as a principal offender.
The factual element requirement structure (actus reus) is identical in relation to all
types of offenses, regardless of their mental element requirement. This structure
contains one mandatory component (conduct) in all offenses and two possible,
but not mandatory, components in some offenses (circumstances and results). The
capability of artificial intelligence technology to fulfill the factual element require-
ment under this structure is discussed below.
3.2.1 Conduct
20. Fain v. Commonwealth, 78 Ky. 183, 39 Am.Rep. 213 (1879); Tift v. State, 17 Ga.App.
663, 88 S.E. 41 (1916); People v. Decina, 2 N.Y.2d 133, 157 N.Y.S.2d 558, 138 N.E.2d
799 (1956); Mason v. State, 603 P.2d 1146 (Okl.Crim.App.1979); State v. Burrell,
135 N.H. 715, 609 A.2d 751 (1992); Bonder v. State, 752 A.2d 1169 (Del.2000).
3.2 Commission of External Element Components by Artificial Intelligence Technology 61
An act may be defined as any material performance through factual–external
presentation. This definition concentrates on the factual aspects of the act, and does
not involve mental aspects in the definition of factual elements.21 The definition is also
broad enough to include actions which originate in telekinesis, psychokinesis, etc.,
if these are possible,22 as long as they have factual–external presentation.23 If “act”
is restricted only to "willed muscular contraction" or "willed bodily movement",24
it would also prevent imposition of criminal liability in cases of perpetration-
through-another (e.g., B in the above example), since no act has been performed.
Thus, according to these definitions, in order to assault anyone and be exempt from
criminal liability, the offender has to push an innocent person upon the victim.
In the past, the requirement of an act consisted of willed bodily movement.25 Such
a requirement incorporates mental aspects (will, in this case) into the factual element
requirement, which is supposed to be a purely objective-external requirement.
Therefore, modern criminal law rejects such a hybrid requirement. The question of
will belongs to the mental element requirement, and it should be discussed there.
Consequently, when examining the factual element requirement, no aspects of the
mental element requirement should be taken into consideration, and the criminal
law considers an act to be any material performance through factual–external
presentation, whether willed or not.
Accordingly, artificial intelligence technology is capable of performing "acts"
that satisfy the conduct requirement. This is true not only for strong artificial
intelligence technology, but for much lower technologies as well. When a machine
(e.g., a robot equipped with artificial intelligence technology) moves its hydraulic
arms or other devices, this is considered an act. That is the case when the movement
is a result of the machine's inner calculations, but not only then. Even if the
machine is fully operated by a human operator through remote control, any
movement of the machine is considered an act.
As a result, even sub-artificial intelligence machines have the factual
capability of performing acts, regardless of the motives or reasons for the act. It does
not necessarily mean that these machines are criminally liable for these acts, since
the imposition of criminal liability is dependent on the mental element requirement
21. Unlike some other definitions, the most popular of which is "willed muscular movement".
See HERBERT L. A. HART, PUNISHMENT AND RESPONSIBILITY: ESSAYS IN THE PHILOSOPHY OF LAW
101 (1968); OLIVER W. HOLMES, THE COMMON LAW 54 (1881, 1923); ANTONY ROBIN DUFF, CRIMINAL
ATTEMPTS 239–263 (1996); JOHN AUSTIN, THE PROVINCE OF JURISPRUDENCE DETERMINED (1832, 2000);
GUSTAV RADBRUCH, DER HANDLUNGSBEGRIFF IN SEINER BEDEUTUNG FÜR DAS STRAFRECHTSSYSTEM 75, 98
(1904); CLAUS ROXIN, STRAFRECHT – ALLGEMEINER TEIL I 239–255 (4 Auf., 2006); BGH 3, 287.
22. See, e.g., Bolden v. State, 171 S.W.3d 785 (2005); United States v. Meyers, 906 F. Supp. 1494
(1995); United States v. Quaintance, 471 F. Supp.2d 1153 (2006).
23. Scott T. Noth, A Penny for Your Thoughts: Post-Mitchell Hate Crime Laws Confirm a Mutating
Effect upon Our First Amendment and the Government’s Role in Our Lives, 10 REGENT U. L. REV.
167 (1998); HENRY HOLT, TELEKINESIS (2005); PAMELA RAE HEATH, THE PK ZONE: A CROSS-
CULTURAL REVIEW OF PSYCHOKINESIS (PK) (2003).
24. JOHN AUSTIN, THE PROVINCE OF JURISPRUDENCE DETERMINED (1832, 2000).
25. OLIVER W. HOLMES, THE COMMON LAW 54 (1881, 1923).
as well. However, for the question of performing an act that satisfies the conduct
component requirement, any material performance through factual–external
presentation is considered an act, whether the physical performer is a strong
artificial intelligence entity or not.
Omission in criminal law is defined as inaction contradicting a legitimate duty to
act. According to this definition, the term “legitimate duty” is of great significance.
The opposite of action is not omission but inaction. If doing something is an act,
then not doing anything is inaction. Omission is an intermediate degree of conduct
between action and inaction. Omission is not mere inaction, but inaction that
contradicts a legitimate duty to act.26 Therefore, the omitting offender is required
to act but fails to do so. If no act has been committed, but no duty to act is imposed,
no omission has been committed.27
Therefore, punishing for omission is punishing for doing nothing in specific
situations in which something should have been done due to a certain legitimate
duty. For instance, in most countries parents have a legal duty to take care of their
children. In these countries the breach of this duty may form a specific offense. The
parent in this situation is not punished for acting in a wrong way, but for not acting
although he had a legal duty to act in a specific way. The requirement to act must be
legitimate in the given legal system, and in most legal systems the legitimate duty
may be imposed both by law and by contract.28
For the question of differences as to the quality of criminal liability, the modern
concept of conduct in criminal law acknowledges no substantive or functional
differences between acts and omissions for the imposition of criminal liability.29
Therefore, any offense may be committed both by act and by omission. Socially and
legally, commission of offenses by omission is no less severe than commission by
act.30 Most legal systems accept this modern approach, and there is no need to
explicitly require omission to be part of the factual element of the offense.
The offense defines the prohibited conduct, which may be committed both
through acts and through omissions.31 On that ground, artificial intelligence
technology is capable of fulfilling the conduct requirement through omission no
less than through act.
26. See, e.g., People v. Heitzman, 9 Cal.4th 189, 37 Cal.Rptr.2d 236, 886 P.2d 1229 (1994); State
v. Wilson, 267 Kan. 550, 987 P.2d 1060 (1999).
27. Rollin M. Perkins, Negative Acts in Criminal Law, 22 IOWA L. REV. 659 (1937); Graham
Hughes, Criminal Omissions, 67 YALE L. J. 590 (1958); Lionel H. Frankel, Criminal Omissions:
A Legal Microcosm, 11 WAYNE L. REV. 367 (1965).
28. P. R. Glazebrook, Criminal Omissions: The Duty Requirement in Offences Against the Person,
55 L. Q. REV. 386 (1960); Andrew Ashworth, The Scope of Criminal Liability for Omissions,
84 L. Q. REV. 424, 441 (1989).
29. Lane v. Commonwealth, 956 S.W.2d 874 (Ky.1997); State v. Jackson, 137 Wash.2d 712, 976
P.2d 1229 (1999); Rachel S. Zahniser, Morally and Legally: A Parent’s Duty to Prevent the Abuse
of a Child as Defined by Lane v. Commonwealth, 86 KY. L. J. 1209 (1998).
30. Mavji, [1987] 2 All E.R. 758, [1987] 1 W.L.R. 1388, [1986] S.T.C. 508, Cr. App. Rep.
31, [1987] Crim. L.R. 39; Firth, (1990) 91 Cr. App. Rep. 217, 154 J.P. 576, [1990] Crim. L.R. 326.
31. See, e.g., section 2.01(3) of THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT
AND EXPLANATORY NOTES (1962, 1985).
3.2.2 Circumstances
Circumstances describe the conduct, but do not derive from it. They paint the
offense with its criminal colors. For instance, the circumstance "without consent"
in the offense of rape describes the conduct "having sexual intercourse" as
criminal. Having sexual intercourse is not necessarily criminal, unless it is
committed without consent.
32. GABRIEL HALLEVY, THE MATRIX OF DERIVATIVE CRIMINAL LIABILITY 171–184 (2012).
33. See, e.g., Pierson v. State, 956 P.2d 1119 (Wyo.1998).
34. See, e.g., State v. Dubina, 164 Conn. 95, 318 A.2d 95 (1972); State v. Bono, 128 N.J.Super.
254, 319 A.2d 762 (1974); State v. Fletcher, 322 N.C. 415, 368 S.E.2d 633 (1988).
35. S.Z. Feller, Les Délits de Mise en Danger, 40 REV. INT. DE DROIT PÉNAL 179 (1969).
to be a woman regardless of the identity of the rapist. The raped woman is still
considered a "woman" whether she has been attacked by a human, by a machine,
or not attacked at all.
In some offenses the circumstances are not external to the offender, but are
somehow related to the offender's conduct. In this context, these circumstances
assimilate into the conduct component. For instance, in the above example of rape,
the circumstances "without consent" describe the conduct as if they were part of it
(how exactly did the offender have sexual intercourse with the victim?). To satisfy
the requirement of this type of circumstances, the offender merely has to commit the
conduct, but in a more particular way. Consequently, for this type of circumstances,
fulfilling the requirement is no different from committing the conduct.
36. This is the results component of all homicide offenses. See SIR EDWARD COKE, INSTITUTES OF
THE LAWS OF ENGLAND – THIRD PART 47 (6th ed., 1681, 1817, 2001):
Murder is when a man of sound memory, and of the age of discretion, unlawfully killeth
within any county of the realm any reasonable creature in rerum natura under the king’s
peace, with malice aforethought, either expressed by the party or implied by law, [so as the
party wounded, or hurt, etc die of the wound or hurt, etc within a year and a day after the
same].
37. E.g., legal causation as part of the mental element requirement.
38. Henderson v. Kibbe, 431 U.S. 145, 97 S.Ct. 1730, 52 L.Ed.2d 203 (1977); Commonwealth
v. Green, 477 Pa. 170, 383 A.2d 877 (1978); State v. Crocker, 431 A.2d 1323 (Me.1981); State
v. Martin, 119 N.J. 2, 573 A.2d 1359 (1990).
According to this definition, the results are the ultimate consequences of the
conduct, i.e., causa sine qua non,39 or the ultimate cause. The factual causation
relates not only to the mere occurrence of the results but also to the way in which
they occurred. For example, A hits B and B dies. Because B was terminally ill, A may
argue that B would have died anyway in the near future, so B's death is not the
ultimate result of A's conduct, and the results would have occurred anyway even if
it were not for A’s conduct. But because the factual causation has to do with the way
in which the results occurred, the requirement is met in this example: B would not
have died the way he did had A not hit him.
As a result and on that ground, artificial intelligence technology is capable of
satisfying the results component requirement of the factual element. In order to
achieve the results, the offender has to initiate the conduct. The commission of the
conduct forms the results, and the existence of the results is examined objectively,
as derived from the very commission of the conduct.40 Thus, when the offender
commits the conduct, the conduct (not the offender) is the cause of the results, if
they occur. The offender is not required to commit, separately, any results, but only
the conduct. Although the offender initiates the factual process that forms the
results, this process is initiated only through the commission of the conduct
component.41 Thus, since artificial intelligence technology is capable of committing
conduct of all kinds, in the context of criminal law it is capable of causing results
out of this conduct.
For instance, when an artificial intelligence system operates a firing system and
makes it shoot a bullet towards a human individual, this fulfills the conduct
component of homicide offenses. At that point the conduct is examined, through a
causation test, to determine whether it caused that individual's death. If it did, the
results component is fulfilled as well as the conduct, although physically the system
"did" nothing but the conduct component. Since the fulfillment of the conduct
component is within the capabilities of artificial intelligence technologies, so is the
fulfillment of the results component.
39. See, e.g., Wilson v. State, 24 S.W. 409 (Tex.Crim.App.1893); Henderson v. State, 11 Ala.App.
37, 65 So. 721 (1914); Cox v. State, 305 Ark. 244, 808 S.W.2d 306 (1991); People v. Bailey,
451 Mich. 657, 549 N.W.2d 325 (1996).
40. Morton J. Horwitz, The Rise and Early Progressive Critique of Objective Causation, THE
POLITICS OF LAW: A PROGRESSIVE CRITIQUE 471 (David Kairys ed., 3rd ed., 1998); Benge, (1865)
4 F. & F. 504, 176 Eng. Rep. 665; Longbottom, (1849) 3 Cox C. C. 439.
41. Jane Stapleton, Law, Causation and Common Sense, 8 OXFORD J. LEGAL STUD. 111 (1988).
4 Positive Fault Element Involving Artificial Intelligence Systems
Contents
4.1 Structure of Positive Fault Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.1.1 Independent Offenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1.2 Derivative Criminal Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2 General Intent and Artificial Intelligence Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.1 Structure of General Intent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.2 Cognition and Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.2.3 Volition and Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2.4 Direct Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.2.5 Indirect Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.2.6 Combined Liabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.3 Negligence and Artificial Intelligence Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.3.1 Structure of Negligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.3.2 Negligence and Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.3.3 Direct Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.3.4 Indirect Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.3.5 Combined Liabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.4 Strict Liability and Artificial Intelligence Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.4.1 Structure of Strict Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.4.2 Strict Liability and Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.4.3 Direct Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.4.4 Indirect Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.4.5 Combined Liabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
The positive fault element of criminal liability is reflected by the mental element
requirement of the offense. The general structure of the mental element requirement
is consolidated for all types of criminal liability. Nevertheless, it may be more
convenient to divide the discussion between the general structure within independent
offenses and the structure within derivative criminal liability.
The structure of the mental element requirement applies the fundamental principle
of culpability in criminal law (nullum crimen sine culpa). The principle of culpa-
bility has two main aspects: positive and negative. The positive aspect (what should
be in the offender’s mind in order to impose criminal liability) relates to the mental
element, whereas the negative aspect (what should not be in the offender’s mind in
order to impose criminal liability) relates to the general defenses.1
For instance, imposition of criminal liability for wounding another person
requires recklessness as mental element, but it also requires that the offender not
be insane. Recklessness is part of the positive aspect of culpability, and the general
defense of insanity is part of the negative aspect. The positive aspect of culpability
in criminal law has to do with the involvement of the mental processes in the
commission of the offense. In this context, it exhibits two important aspects:
1. ANDREW ASHWORTH, PRINCIPLES OF CRIMINAL LAW 157–158, 202 (5th ed., 2006).
2. G.R. Sullivan, Knowledge, Belief, and Culpability, CRIMINAL LAW THEORY – DOCTRINES OF THE
GENERAL PART 207, 214 (Stephen Shute and A.P. Simester eds., 2005).
4.1 Structure of Positive Fault Element 69
an intermediate level of volition. If the car driver absolutely had not wanted to cause
anyone's death, he would not have taken the unreasonable risk of committing the
dangerous detour.
Both cognitive and volitive aspects are combined to form the mental element
requirement as derived from the positive aspect of culpability in criminal law. In
most modern countries, there are three main forms of mental element, which are
differentiated based on the cognitive aspect. The three forms represent three layers
of positive culpability, and they are as follows.
The highest layer of the mental element is that of general intent, which requires
full cognition. General intent is occasionally referred to as mens rea. The offender
is required to be fully aware of the factual reality. This form involves examination
of the offender’s subjective mind. Negligence is cognitive omission, and the
offender is not required to be aware of the factual element, although based on
objective characteristics he could and should have had awareness of it. Strict
liability is the lowest layer of the mental element; it replaces what was formerly
known as absolute liability. Strict liability is a relative legal presumption of
negligence based on the factual situation alone, which may be refuted by the
offender.
Cognition relates to the factual reality, as noted above. The relevant factual reality
in criminal law is that which is reflected by the factual element components. From
the perpetrator’s point of view, only the conduct and circumstance components of
the factual element exist in the present. The results components occur in the future.
Because cognition is restricted to the present and to the past, it can relate only to
conduct and circumstances.
Although results occur in the future, the possibility of their occurrence ensuing
from the relevant conduct exists in the present, so that cognition can relate not only
to conduct and circumstances, but also to the possibility of the occurrence of the
results. For example, in the case of a homicide, A aims a gun at B and pulls the
trigger. At this point he is aware of his conduct, of the existing circumstances, and
of the possibility of B’s death as a result of his conduct.
Volition is considered immaterial for both negligence and strict liability, and
may be added only to the mental element requirement of general intent, which
embraces all three basic levels of will. Because in most legal systems the default
requirement for the mental element is general intent, negligence and strict liability
offenses must specify explicitly the relevant requirement. The explicit requirement
may be listed as part of the definition of the offense or included in the established
legal tradition of interpretation.
If no explicit requirement of this type is mentioned, the offense is classified as a
general intent offense, which is the default requirement. The relevant requirement
may be met not only by the same form of mental element, but also by a higher level
form. Thus, the mental element requirement of the offense is the minimal level of
mental element needed to impose criminal liability.3 A lower level is insufficient
for imposing criminal liability for the offense.
According to the modern structure of mental element requirement, each specific
offense embodies the minimal requirements for the imposition of criminal liability,
and the fulfillment of these requirements is adequate for the imposition of criminal
liability. No additional psychological meanings are required. Thus, any individual
who fulfills the minimal requirements of the relevant offense is considered to be an
offender, and criminal liability may be imposed upon him.
Each derivative criminal liability form requires the presence of the mental element.
This requirement is formed within the general template into which the content is
filled. The mental element requirement must match its corresponding factual basis,
which is embodied in the factual element requirement, as discussed above.4
The centrality of the mental element requirement of the criminal attempt derives
from the essence of the attempt, which helps explain its social justification.5 This
requirement may be defined as specific intent to complete the offense accompanied
by general intent components relating to existing factual element components of the
object-offense. The factual element requirement of the criminal attempt derives
from the object-offense, but lacks some of its components.
The mental element of the attempt must match the factual element, but the
criminal attempt is executed owing to the purpose to complete the commission of
the offense. Consequently, the mental element of the attempt must reflect the
mental relation of the offender to existing factual element components and to the
purpose to complete the offense. The purposefulness characteristic of derivative
criminal liability reflects the mental relation to the delinquent event, and should
therefore be reflected in the mental element requirement.
Thus, the central axis of the mental element requirement of the attempt is that of
purposefulness. The attempter’s purpose is to complete the commission of the
3. See, e.g., article 2.02(5) of THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT
AND EXPLANATORY NOTES 22 (1962, 1985), which provides:
When the law provides that negligence suffices to establish an element of an offense, such
element also is established if a person acts purposely, knowingly or recklessly. When
recklessness suffices to establish an element, such element also is established if a person
acts purposely or knowingly. When acting knowingly suffices to establish an element, such
element also is established if a person acts purposely.
4. Above at Chap. 3.
5. Robert H. Skilton, The Mental Element in a Criminal Attempt, 3 U. PITT. L. REV. 181 (1937); Dan
Bein, Preparatory Offences, 27 ISR. L. REV. 185 (1993); Larry Alexander and Kimberly
D. Kessler, Mens Rea and Inchoate Crimes, 87 J. CRIM. L. & CRIMINOLOGY 1138 (1997).
offense. For example, A aims his gun at B and pulls the trigger but the bullet misses
B. The act can be considered attempted murder only if A’s purpose was to kill B;
otherwise attempted murder is not relevant to these factual elements. The social
endangerment in delinquent events of this type focuses not on the facts but on the
offender’s mind.
The reflection of purposefulness in the mental element requirement of the
attempt is broad, and it includes both cognitive and volitive aspects. Volition
embodies the purposefulness and cognition supports it, so that the two form a
double, cumulative requirement of mental element components:
Specific intent in criminal law is the mental purpose, aim, target, and object of
the offender. It is the highest level of volition recognized by criminal law. The
purpose of the specific intent is the completion of the commission of the offense.6
As long as the offense has not been completed, completion of the offense exceeds
the factual element components that have taken place de facto in the course of the
delinquent event. The “regular” mental element components must relate to the
existing factual element components, as part of the mental element structure.
Accordingly, the purpose of the completion of the offense, which exceeds the
factual elements of the attempt, requires a special mental element component,
which is the specific intent.
Specific intent is “specific” for structural reasons, as it relates to objects that are
beyond the existing factual element and even beyond factual reality. In attempts,
the completion of the offense is not part of factual reality but of the offender’s will.
This will is so strong that it stands for the act (voluntas reputabitur pro facto). Such
strong will can be reflected only through the highest volition level accepted by
criminal law, which is embodied in the specific intent requirement. Lower levels do
not reflect such strong will. If A stabs B with the intent to kill him, the lethal will
cannot be reflected in indifference, rashness, or negligence.7 Only specific intent
can reflect that will.8
Some legal systems make a substantive distinction between the terms “specific
intent” and “general intent,” the latter relating to a broader sense of general intent.
6. Whybrow, (1951) 35 Cr. App. Rep. 141; Mohan, [1976] Q.B. 1, [1975] 2 All E.R. 193, [1975]
2 W.L.R. 859, 60 Cr. App. Rep. 272, [1975] R.T.R. 337, 139 J.P. 523; Pearman, (1984) 80 Cr. App.
Rep. 259; Hayles, (1990) 90 Cr. App. Rep. 226; State v. Harvill, 106 Ariz. 386, 476 P.2d
841 (1970); Bell v. State, 118 Ga.App. 291, 163 S.E.2d 323 (1968); Larsen v. State, 86 Nev.
451, 470 P.2d 417 (1970); State v. Goddard, 74 Wash.2d 848, 447 P.2d 180 (1968); People
v. Krovarz, 697 P.2d 378 (Colo.1985).
7. Donald Stuart, General intent, Negligence and Attempts, [1968] CRIM. L.R. 647 (1968).
8. Morrison, [2003] E.W.C.A. Crim. 1722, (2003) 2 Cr. App. Rep. 563; Jeremy Horder, Varieties of
Intention, Criminal Attempts and Endangerment, 14 LEGAL STUD. 335 (1994).
72 4 Positive Fault Element Involving Artificial Intelligence Systems
Other legal systems draw a structural distinction between specific intent, which
relates to purposes and motives, and “intent,” which has to do with the occurrence
of results. But regardless of the term used by various legal systems, the relevant
mental element component that is required is the one that substantively reflects the
highest level of volition (positive will), and which structurally relates to the purpose
of the completion of the offense and not to a given component of the factual element
requirement.9 A lower level of will and lack of will to achieve the purpose are not
adequate for criminal attempt.
For example, A plays Russian roulette with B. When it is B’s turn, A treats B’s
life or death carelessly. Because A does not will B’s death, it is not considered an
attempt on A’s part. In another example, C drives behind a heavy, slow truck.
The road is marked by a solid white dividing line, which prohibits passing. C takes
an unreasonable risk, passes the truck, and narrowly misses D, who is riding a
motorcycle in the opposite direction. Because C did not will D’s death, the offense is not
considered an attempt. Only if the offender acts with the purpose of completing the
offense can the offense be considered a criminal attempt.
In general, the object of specific intent may be both a purpose and a motive, but
in attempts the object of specific intent is the purpose (not the motive) of the
completion of the offense. Because high foreseeability of the realization of the
purpose is accepted as a substitute for proof of specific intent, in attempts specific
intent may be proven by proof of foreseeability.10 For example, A aims a gun at B
and pulls the trigger, but the bullet misses B. A argues that he did not intend to kill
B. But he knows (subjectively) that shooting a person creates a very high
probability of death. A is therefore presumed to foresee B’s death and to have
intended to kill B. Because B did not die, the presumed intent to kill B functions as
the specific intent to complete the offense (homicide), as required for imposing
criminal liability for attempted homicide.
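Given the book’s concern with artificial intelligence systems, the foreseeability presumption described above can be sketched as a simple decision rule. The sketch is illustrative only; the function name and the probability threshold are my own assumptions, not drawn from any legal source.

```python
# Illustrative sketch of the foreseeability presumption: subjective foresight
# of a very high probability of the result substitutes for direct proof of
# specific intent. The 0.95 threshold is an assumption for illustration only.
def presumed_specific_intent(admits_intent: bool,
                             foreseen_probability: float,
                             threshold: float = 0.95) -> bool:
    """Specific intent is proven directly, or presumed from the offender's
    own (subjective) foresight of a near-certain result."""
    return admits_intent or foreseen_probability >= threshold

# A denies intending to kill B, but subjectively knows that shooting a person
# creates a very high probability of death, so intent is presumed:
assert presumed_specific_intent(admits_intent=False, foreseen_probability=0.99)
# Foresight of a remote possibility does not raise the presumption:
assert not presumed_specific_intent(admits_intent=False, foreseen_probability=0.2)
```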
If the specific intent is conditional, it is no different from specific intent to
complete the offense; in other words, conditional specific intent in attempts
9. RG 16, 133; RG 65, 145; RG 70, 201; RG 71, 53; BGH 12, 306; BGH 21, 14; Mohan, [1976]
Q.B. 1, [1975] 2 All E.R. 193, [1975] 2 W.L.R. 859, 60 Cr. App. Rep. 272, [1975] R.T.R. 337,
139 J.P. 523; State v. Ayer, 136 N.H. 191, 612 A.2d 923 (1992); State v. Smith, 170 Wis.2d
701, 490 N.W.2d 40 (App.1992); United States v. Dworken, 855 F.2d 12 (1st Cir.1988); Braxton
v. United States, 500 U.S. 344, 111 S.Ct. 1854, 114 L.Ed.2d 385 (1991); United States v. Gracidas-
Ulibarry, 231 F.3d 1188 (9th Cir.2000); Commonwealth v. Ware, 375 Mass. 118, 375 N.E.2d 1183
(1978).
10. People v. Harris, 72 Ill.2d 16, 17 Ill.Dec. 838, 377 N.E.2d 28 (1978); State v. Butler, 322 So.2d
189 (La.1975); State v. Earp, 319 Md. 156, 571 A.2d 1227 (1990); Flanagan v. State, 675 S.W.2d
734 (Tex.Crim.App.1982); Smallwood v. State, 106 Md.App. 1, 661 A.2d 747 (1995); Woollin,
[1999] A.C. 82, [1998] 4 All E.R. 103, [1998] 3 W.L.R. 382, [1998] Crim. L.R. 890; Pearman,
(1984) 80 Cr. App. Rep. 259; Mohan, [1976] Q.B. 1, [1975] 2 All E.R. 193, [1975] 2 W.L.R. 859,
60 Cr. App. Rep. 272, [1975] R.T.R. 337, 139 J.P. 523.
functions as specific intent.11 For example, A is afraid that his car will be stolen.
He attaches a battery to his car so that potential burglars who touch the car will
be electrocuted and die. B attempts to break into the car, receives an electric shock,
and survives. A argues that he did not intend to kill anyone. Indeed, he did not want
anyone to break into his car, and therefore had no specific intent to complete the
homicide offense. A had a conditional intent whereby anyone attempting to break
into the car would be electrocuted and killed. The condition having been met, the
specific intent to complete the offense is presumed to exist.12
The specific intent reflects the purposefulness of the criminal attempt quite
effectively. But specific intent relates to the unrealized purpose, not to the factual
element components. During the commission of the attempt, some of the factual
element components of the offense may be present. The question is what the
attempter’s mental state should be in relation to these components. This mental
state should be such that it can support the specific intent to carry out the purpose.
In general, specific intent is supported by general intent alone. The volitive basis
for specific intent is a fully aware will. Thus, for the will to be considered specific
intent, the offender must be aware of it. A will of which the offender is not aware is
an impulse or a reflex. Without awareness, the individual cannot activate his
internal resistance mechanisms, and the will turns into an irresistible impulse. In most
legal systems, irresistible impulse is not an adequate basis for criminal liability.13
The only form of mental element that requires awareness is general intent.
Specific intent can be accompanied only by general intent. The criminal attempt
can therefore be classified as a general intent offense, which includes specific intent
in addition to the “regular” components of general intent. Structurally, general
intent components relate to existing factual element components (e.g., awareness of
the circumstances). This structure is relevant for both object-offenses and
derivative criminal liability. Therefore, in addition to the specific intent, the mental
element requirement of the attempt includes:
11. Bentham, [1973] 1 Q.B. 357, [1972] 3 All E.R. 271, [1972] 3 W.L.R. 398, 56 Cr. App. Rep.
618, 136 J.P. 761; Harvick v. State, 49 Ark. 514, 6 S.W. 19 (1887); People v. Connors, 253 Ill.
266, 97 N.E. 643 (1912); Commonwealth v. Richards, 363 Mass. 299, 293 N.E.2d 854 (1973);
State v. Simonson, 298 Minn. 235, 214 N.W.2d 679 (1974); People v. Vandelinder, 192 Mich.
App. 447, 481 N.W.2d 787 (1992).
12. Husseyn, (1977) 67 Cr. App. Rep. 131; Walkington, [1979] 2 All E.R. 716, [1979]
1 W.L.R. 1169, 68 Cr. App. Rep. 427, 143 J.P. 542; Haughton v. Smith, [1975] A.C. 476,
[1973] 3 All E.R. 1109, [1974] 3 W.L.R. 1, 58 Cr. App. Rep. 198, 138 J.P. 31; Easom, [1971]
2 Q.B. 315, [1971] 2 All E.R. 945, [1971] 3 W.L.R. 82, 55 Cr. App. Rep. 410, 135 J.P. 477.
13. George E. Dix, Criminal Responsibility and Mental Impairment in American Criminal Law:
Responses to the Hinckley Acquittal in Historical Perspective, 1 LAW AND MENTAL HEALTH:
INTERNATIONAL PERSPECTIVES 1, 7 (Weisstub ed., 1986); State v. Hartley, 90 N.M. 488, 565 P.2d
658 (1977); Vann v. Commonwealth, 35 Va.App. 304, 544 S.E.2d 879 (2001); State v. Carney,
347 N.W.2d 668 (Iowa 1984); ISAAC RAY, THE MEDICAL JURISPRUDENCE OF INSANITY 263 (1838);
FORBES WINSLOW, THE PLEA OF INSANITY IN CRIMINAL CASES 74 (1843); SHELDON S. GLUECK, MENTAL
DISORDERS AND THE CRIMINAL LAW 153, 236–237 (1925); Edwin R. Keedy, Irresistible Impulse as a
Defense in the Criminal Law, 100 U. PA. L. REV. 956, 961 (1952); Oxford, (1840) 9 Car. & P. 525,
173 Eng. Rep. 941; Burton, (1863) 3 F. & F. 772, 176 Eng. Rep. 354.
(a) awareness of the conduct, of the circumstances, and of the possibility of the
occurrence of the results (this is the cognitive aspect of recklessness); and
(b) recklessness (indifference or rashness) in relation to the results (this is the
volitive aspect of recklessness).
If the attempted injury lacks the result component (the victim was not injured),
the mental element requirement consists of specific intent to injure the victim and of
general intent components in relation to the existing factual element components
(awareness of the conduct and of the circumstances). No additional mental element
component is required with relation to the results because these have not occurred.
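The structure just described — specific intent plus general intent components only for the factual element components that actually exist — can be rendered as a short sketch. The function and component names are illustrative assumptions of mine, not terms of art.

```python
# Sketch of the mental element of an attempt: specific intent to complete
# the offense, plus "regular" general intent components for each factual
# element component present de facto (none for results that did not occur).
def attempt_mental_requirement(existing_components: set) -> list:
    required = ["specific intent to complete the offense"]
    if "conduct" in existing_components:
        required.append("awareness of the conduct")
    if "circumstances" in existing_components:
        required.append("awareness of the circumstances")
    if "results" in existing_components:
        required.append("awareness of the possibility of the results")
        required.append("recklessness toward the results")
    return required

# Attempted injury where the victim was not hurt: no result component,
# hence no mental element component relating to results.
required = attempt_mental_requirement({"conduct", "circumstances"})
assert "recklessness toward the results" not in required
assert required[0] == "specific intent to complete the offense"
```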
All general intent components have substitutes that can facilitate their proof in
court. All the substitutes that are relevant to object-offenses are also relevant in
derivative criminal liability forms, including attempt. Therefore, awareness of the
conduct and circumstances may be proven by the willful blindness presumption,
awareness of the possibility of the occurrence of the results may be proven by the
awareness presumption, and all volitive components may be proven by the foresee-
ability presumption.
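The correspondence between general intent components and their evidentiary substitutes can be tabulated. The dictionary below merely restates the paragraph above; the data structure is my own.

```python
# Each general intent component paired with the presumption that may
# substitute for its direct proof in court (terminology follows the text).
SUBSTITUTES = {
    "awareness of the conduct": "willful blindness presumption",
    "awareness of the circumstances": "willful blindness presumption",
    "awareness of the possibility of the results": "awareness presumption",
    "volitive components": "foreseeability presumption",
}

assert SUBSTITUTES["awareness of the circumstances"] == "willful blindness presumption"
assert SUBSTITUTES["volitive components"] == "foreseeability presumption"
```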
The combined mental element requirement of the attempt is not part of the
mental element requirement of the object-offense. These mental elements may
include different requirements. For example, the offense of injury requires reck-
lessness, whereas attempted injury requires specific intent. This difference may be
explained by the interaction between the specificity range of the factual element and
the adjustment range of the mental element. Because the factual element of the
attempted offense is characterized by the absence of some of the components
relative to the complete offense, the mental element “compensates” for this absence
through a higher level requirement. This compensation is the expression of the
maxim that the will stands for the act (voluntas reputabitur pro facto).
The factual and mental elements of joint-perpetration are derived from the
object-offense. The mental element requirement of joint-perpetration may be
defined as follows: all general intent components of the object-offense must be
covered by all the joint-perpetrators. The mental element requirement of joint-perpetration is
significantly different from the factual element requirement. Because the factual
element requirement is affected by the collective conduct concept, which requires collective
14. Pigg, [1982] 2 All E.R. 591, [1982] 1 W.L.R. 762, 74 Cr. App. Rep. 352, 146 J.P. 298; Khan,
[1990] 2 All E.R. 783, [1990] 1 W.L.R. 813, 91 Cr. App. Rep. 29, 154 J.P. 805; G.R. Sullivan,
Intent, Subjective Recklessness and Culpability, 12 OXFORD J. LEGAL STUD. 380 (1992); John
E. Stannard, Making Up for the Missing Element: A Sideways Look at Attempts, 7 LEGAL STUD.
194 (1987); J.C. Smith, Two Problems in Criminal Attempts, 70 HARV. L. REV. 422 (1957).
15. See e.g., United States v. Hewitt, 663 F.2d 1381 (11th Cir.1981); State v. Kendrick, 9 N.C.App.
688, 177 S.E.2d 345 (1970).
coordination and synchronization with the criminal plan, general intent is the only
form of mental element that is sufficient. Thus, the mental element of joint-
perpetration is general intent in relation to the factual element. The specific
components of the required general intent depend upon the mental element require-
ment of the object-offense. For example, the offense of injury requires recklessness
(a cognitive aspect of awareness and a volitive aspect of recklessness). Each of the
joint-perpetrators of the offense is required to show recklessness.
How is the purposefulness expressed in joint-perpetration? Purposefulness
characterizes all forms of derivative criminal liability and may be expressed
through no less than intent or specific intent. But if the mental element requirement
of joint-perpetration is identical with that of the object-offense, and the mental
element requirement of the object-offense may be satisfied by less than intent, how
is purposefulness expressed in joint-perpetration? The answer is simple. The main
factor that makes joint-perpetration joint is participation of the offenders in the
early planning. The early planning is the conspiracy by which the criminal plan
comes into being. In most legal systems conspiracy functions as a specific offense
and as the early planning of joint-perpetration.
To impose criminal liability on conspirators, specific intent is required.16 To
classify the delinquent event as joint-perpetration, it is necessary to prove conspir-
acy. Therefore specific intent of conspiracy is needed in order to impose criminal
liability for joint-perpetration. The specific intent of conspiracy is for the purpose of
committing the offense by executing the criminal plan. This purpose matches the
purposefulness of derivative criminal liability. Thus, although specific intent is not
directly required for the criminal liability of joint-perpetration, it is required for the
classification of the delinquent event as joint-perpetration. This requirement
prevents imposing criminal liability for joint-perpetration for mistakes, incidental
circumstances, or unawareness.
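The two-step logic described above — conspiracy, with its specific intent, for classification, and general intent for liability itself — can be sketched as follows; the names and structure are illustrative assumptions of mine.

```python
# Sketch: specific intent enters joint-perpetration only through the proof
# of conspiracy used to classify the event; once classified, liability
# requires each joint-perpetrator to cover the object-offense's general
# intent components.
def joint_perpetration_liability(conspiracy_with_specific_intent: bool,
                                 each_covers_general_intent: bool) -> bool:
    if not conspiracy_with_specific_intent:
        # Not classified as joint-perpetration (e.g., mistake or incident).
        return False
    return each_covers_general_intent

assert joint_perpetration_liability(True, True)
# Without a proven conspiracy, incidental collective conduct does not
# become joint-perpetration, whatever the participants' mental state:
assert not joint_perpetration_liability(False, True)
```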
All general intent components have substitutes that may facilitate their proof in
court. These substitutes, which are relevant for the offenses, are also relevant for the
derivative criminal liability forms, including joint-perpetration. Thus, awareness of
conduct and of circumstances may be proven by the willful blindness presumption,
awareness of the possibility of the occurrence of results may be proven by the
awareness presumption, and all volitive components may be proven by the foresee-
ability presumption.
16. Albert J. Harno, Intent in Criminal Conspiracy, 89 U. PA. L. REV. 624 (1941); United States
v. Childress, 58 F.3d 693 (D.C.Cir.1995); Bolton, (1991) 94 Cr. App. Rep. 74, 156 J.P. 138;
Anderson, [1986] 1 A.C. 27, [1985] 2 All E.R. 961, [1985] 3 W.L.R. 268, 81 Cr. App. Rep. 253;
Liangsiriprasert v. United States Government, [1991] 1 A.C. 225, [1990] 2 All E.R. 866, [1990]
3 W.L.R. 606, 92 Cr. App. Rep. 77; Siracusa, (1989) 90 Cr. App. Rep. 340. For the purposefulness
of conspiracy see e.g., Blamires Transport Services Ltd. [1964] 1 Q.B. 278, [1963] 3 All E.R. 170,
[1963] 3 W.L.R. 496, 61 L.G.R. 594, 127 J.P. 519, 47 Cr. App. Rep. 272; Welham v. Director of
Public Prosecutions, [1961] A.C. 103, [1960] 1 All E.R. 805, [1960] 2 W.L.R. 669, 44 Cr. App.
Rep. 124; Barnett, [1951] 2 K.B. 425, [1951] 1 All E.R. 917, 49 L.G.R. 401, 115 J.P. 305, 35 Cr.
App. Rep. 37, [1951] W.N. 214; West, [1948] 1 K.B. 709, [1948] 1 All E.R. 718, 46 L.G.R. 325,
112 J.P. 222, 32 Cr. App. Rep. 152, [1948] W.N. 136.
17. United States v. Tobon-Builes, 706 F.2d 1092 (11th Cir.1983); United States v. Ruffin, 613 F.2d
408 (2nd Cir.1979).
The mental relation of the other person to the offense is immaterial for the criminal
liability of perpetration-through-another.
The other person’s mental relation to the commission of the offense is significant
for his own criminal liability, if any. If the other person functions as a “semi-
innocent agent,” he may be criminally liable for negligence offenses associated
with the same factual element. But the other person’s criminal liability, if any, does
not affect the criminal liability of the perpetrator-through-another for the
commission of the object-offense. Regardless of the factual-physical role, if any, of the
perpetrator-through-another, he must consolidate the mental element of the
object-offense.
For example, a surgical nurse wishes to kill a patient and pollutes the surgical
instruments with lethal bacteria. After the surgery the patient dies as a result of the
infection caused by the bacteria. Given that the surgeon was not aware of the
polluted instruments, the nurse used the surgeon instrumentally to kill the patient.
But because it was the surgeon’s duty to make sure the instruments were sterilized,
the surgery was performed with negligence on the surgeon’s part (negligence does not
require awareness). The nurse is criminally liable for perpetration-through-another
of murder, and the surgeon is criminally liable for negligent homicide. The
surgeon’s criminal liability does not affect the nurse’s criminal liability as
perpetrator-through-another of murder.
In perpetration-through-another, the perpetrators act as one body to commit the
object-offense. The perpetrator-through-another must, therefore, act according to
the criminal plan, which includes the instrumental use of the other person. The other
person functions as the long arm of the perpetrator-through-another in order to
commit the offense. Instrumental use of another person may be accidental or
negligent, but instrumental use in accordance with a criminal plan requires at
least awareness of both the criminal plan and of the instrumental use. Consequently,
the perpetrator-through-another must be aware of these two factors as part of his
mental relation to the delinquent event.
Thus, the mental element requirement of the object-offense is mandatory for the
perpetrator-through-another and it should relate to all factual element components
of the offense, regardless of the factual role of the perpetrator-through-another
within the specific delinquent enterprise. For example, A instrumentally uses B for
the commission of an offense, which includes two factual components: A committed
component 1 and B committed component 2. Thus, the factual element requirement
is satisfied because as one body all factual element components are present. A,
however, must also cover the mental element components relating to both factual
components. B’s mental element, if any, is immaterial for A’s criminal liability.
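The “one body” aggregation in the example of A and B can be sketched as set operations; the parameter names are my own illustrative assumptions.

```python
# Sketch of perpetration-through-another: factual components committed by
# the perpetrator (A) and by the other person (B) are aggregated as one
# body, but the mental element must be consolidated by A alone; B's mental
# state is immaterial for A's liability.
def liable_as_perpetrator_through_another(a_factual: set, b_factual: set,
                                          required_factual: set,
                                          a_mental: set,
                                          required_mental: set) -> bool:
    factual_satisfied = required_factual <= (a_factual | b_factual)
    mental_satisfied = required_mental <= a_mental  # A alone must cover it
    return factual_satisfied and mental_satisfied

# A committed component 1, B committed component 2; A covers both mental
# element components, so A is liable:
assert liable_as_perpetrator_through_another(
    {"component 1"}, {"component 2"}, {"component 1", "component 2"},
    {"awareness of plan", "awareness of use"},
    {"awareness of plan", "awareness of use"})
# B's state of mind cannot supply a mental component that A lacks:
assert not liable_as_perpetrator_through_another(
    {"component 1"}, {"component 2"}, {"component 1", "component 2"},
    {"awareness of plan"},
    {"awareness of plan", "awareness of use"})
```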
Because the perpetrator-through-another must be aware of the criminal plan and
of the instrumental use of the other person pursuant to the criminal plan, general intent
is the only form of mental element that is sufficient. Thus, the mental element of
perpetration-through-another is general intent in relation to the factual element. The
specific components of the required general intent depend upon the mental element
requirement of the object-offense.
18. People v. Miley, 158 Cal.App.3d 25, 204 Cal.Rptr. 347 (1984).
For example, the offense of injury requires recklessness (the cognitive aspect of
awareness and the volitive aspect of recklessness). But incitement to commit injury
requires intent (the cognitive aspect of awareness and the volitive aspect of intent).
The components of the mental element of incitement are identical with those of any
result offense that requires intent. This analysis is required because the factual
element of incitement requires a result component.
Although the default volitive requirement of result offenses is recklessness,
incitement has a volitive requirement of intent. In most legal systems this
requirement is explicitly included in the definition of incitement, but in other legal
systems incitement is interpreted as requiring intent.19 The reason for requiring intent is
the general characteristic of purposefulness, which characterizes all forms of
derivative criminal liability, including incitement. The purpose of incitement is to
cause the incited person to make a free decision, with full awareness, to commit the
offense. Recklessness cannot support such a level of will, but intent can.
Intent and specific intent embody the highest-level will accepted by criminal
law. Some legal systems make a substantive distinction between the terms “specific
intent” and “general intent,” the latter relating to a broader sense of general intent.
Other legal systems draw a structural distinction between specific intent, which
relates to purposes and motives, and “intent,” which has to do with the occurrence
of results. Thus, the more accurate term for incitement would be intent because it
relates to the factual component of the results and not to purposes, which are
beyond the factual element.
The mental element required for incitement is general intent because only
general intent can support intent. The intent to cause the incited person to make a
free-choice decision in accordance with the inciter’s criminal plan requires aware-
ness of the plan and of its aim. It is possible that negligent acts could also cause a
person to commit an offense, but negligent acts are not sufficient to be considered
incitement. The inciter is considered as such only if the incitement is the factual
19. See e.g., article 26 of the German penal code, which provides:
Als Anstifter wird gleich einem Täter bestraft, wer vorsätzlich einen anderen zu dessen
vorsätzlich begangener rechtswidriger Tat bestimmt hat;
Last part of article 121-7 of the French penal code, which provides:
Est également complice la personne qui par don, promesse, menace, ordre, abus d’autorité
ou de pouvoir aura provoqué à une infraction ou donné des instructions pour la commettre;
And article 5.02(1) of THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT AND
EXPLANATORY NOTES 76 (1962, 1985), which provides:
expression of the execution of the criminal plan to cause the incited person to make a
free decision to commit the offense.
The factual and mental elements of accessoryship are not derived from the
object-offense. The mental element requirement of accessoryship may be defined
as specific intent to render assistance to the perpetration of an offense accompanied
by general intent components related to the factual element components of the
accessoryship. The mental element requirement of accessoryship is independent of
that of the object-offense and may be different from it.20
For example, the offense of manslaughter requires recklessness (a cognitive
aspect of awareness and a volitive aspect of recklessness). But accessoryship to
manslaughter requires specific intent (a cognitive aspect of awareness and a volitive
aspect of specific intent). The components of the mental element of accessoryship
are identical with the mental element components of any conduct offense that
requires specific intent. This analysis is required because the factual element of
the accessoryship requires no result component.
In most legal systems the specific intent requirement is explicitly included in the
definition of accessoryship, but in some legal systems accessoryship is interpreted
as requiring intent.21 The reason for requiring specific intent is the general charac-
teristic of purposefulness, which characterizes all forms of derivative criminal
liability, including accessoryship. The purpose of the accessoryship is to render
assistance to the perpetration, not necessarily to the commission of the offense
(by the perpetrator). A mental element of a lower level than specific intent cannot
support the level of will required for this purpose.
20. Lynch v. Director of Public Prosecutions for Northern Ireland, [1975] A.C. 653, [1975] 1 All
E.R. 913, [1975] 2 W.L.R. 641, 61 Cr. App. Rep. 6, 139 J.P. 312; Gillick v. West Norfolk and
Wisbech Area Health Authority, [1984] Q.B. 589; Janaway v. Salford Health Authority, [1989]
1 A.C. 537, [1988] 3 All E.R. 1079, [1988] 3 W.L.R. 1350, [1989] 1 F.L.R. 155, [1989] Fam. Law
191, 3 B.M.L.R. 137; Gordon, [2004] E.W.C.A. Crim. 961; Rahman, [2007] E.W.C.A. Crim.
342, [2007] 3 All E.R. 396, but compare the American rulings of Mowery v. State, 132 Tex.Cr.R.
408, 105 S.W.2d 239 (1937); United States v. Hewitt, 663 F.2d 1381 (11th Cir.1981); State
v. Kendrick, 9 N.C.App. 688, 177 S.E.2d 345 (1970).
21. See e.g., article 27(1) of the German penal code, which provides:
Als Gehilfe wird bestraft, wer vorsätzlich einem anderen zu dessen vorsätzlich begangener
rechtswidriger Tat Hilfe geleistet hat;
First part of article 121-7 of the French penal code, which provides:
Est complice d’un crime ou d’un délit la personne qui sciemment, par aide ou assistance, en
a facilité la préparation ou la consommation;
Article 8 of the Accessories and Abettors Act, 1861, 24 & 25 Vict. c.94 as amended by the
Criminal Law Act, 1977, c.45, s. 65(4), which provides:
Whosoever shall aid, abet, counsel, or procure the commission of any indictable offence,
whether the same be an offence at common law or by virtue of any Act passed, shall be
liable to be tried, indicted, and punished as a principal offender.
Intent and specific intent embody the highest-level will accepted in criminal law.
Some legal systems make a substantive distinction between the terms “specific
intent” and “general intent,” the latter relating to a broader sense of general intent.
Other legal systems draw a structural distinction between specific intent, which
relates to purposes and motives, and “intent,” which has to do with the occurrence
of results. Thus, the more accurate term for accessoryship would be specific intent,
because it relates to the purpose of rendering assistance to the perpetration, which is
beyond the factual element and not part of it.
The mental element required for accessoryship is general intent because only
general intent can support specific intent. The specific intent to render assistance to
the perpetration according to the accessory’s criminal plan requires awareness of
the plan and of its aim. It is possible that negligent acts could also assist the
perpetrators, but such negligent acts are not sufficient to be considered
accessoryship, as a form of derivative criminal liability. The accessory is consid-
ered as such only if accessoryship is the factual expression of the execution of a
criminal plan to render assistance to the perpetration.22
Under modern criminal law of most legal systems, general intent (mens rea)
expresses the basic type of mental element, since it embodies the idea of culpability
most effectively. This is the only mental element that enables the combination of
both cognition and volition. The general intent requirement expresses the internal-
subjective relation of the offender to the physical commission of the offense.23 In
most legal systems the general intent requirement functions as the default option of
the mental element requirement.
Therefore, unless negligence or strict liability is explicitly required as the mental
element of the specific offense, general intent is the required mental element. This
default option is also known as the presumption of mens rea.24 Accordingly, all
offenses are presumed to require general intent unless the offense explicitly
deviates from this presumption. Since general intent is the highest known level of
mental element requirement, this presumption is very significant. Consequently,
most offenses in criminal law indeed require general intent rather than negligence
or strict liability. All
22. State v. Harrison, 178 Conn. 689, 425 A.2d 111 (1979); State v. Gerbe, 461 S.W.2d
265 (Mo.1970).
23. JEROME HALL, GENERAL PRINCIPLES OF CRIMINAL LAW 70–77 (2nd ed., 1960, 2005); DAVID
ORMEROD, SMITH & HOGAN CRIMINAL LAW 91–92 (11th ed., 2005); G., [2003] U.K.H.L. 50,
[2004] 1 A.C. 1034, [2003] 3 W.L.R. 1060, [2003] 4 All E.R. 765, [2004] 1 Cr. App. Rep.
21, (2003) 167 J.P. 621, [2004] Crim. L. R. 369.
24. Sweet v. Parsley, [1970] A.C. 132, [1969] 1 All E.R. 347, [1969] 2 W.L.R. 470, 133 J.P. 188,
53 Cr. App. Rep. 221, 209 E.G. 703, [1969] E.G.D. 123.
4.2 General Intent and Artificial Intelligence Systems 83
mental element components, including general intent components, are not
independent and cannot stand alone.
For instance, the dominant component of general intent is awareness. If the
offender is required to be aware, the question is “aware of what?”, since awareness
cannot stand alone; otherwise it would be meaningless. Consequently, all mental
element components must relate to facts or to factual reality. The relevant factual
aspect for criminal liability is, of course, the set of factual element components
(conduct, circumstances, and results). Factual reality contains many more facts
than these components, but all other facts are irrelevant for the imposition of
criminal liability.
For example, in rape the relevant facts are “having sexual intercourse with a
woman without consent”.25 The rapist is required to be aware of these facts.
Whether or not the offender was aware of other facts as well (e.g., the color of the
woman’s eyes, her pregnancy, her suffering), it is immaterial for the imposition of
criminal liability. Thus, for the imposition of criminal liability, the object of the
mental element requirement is nothing but the factual element components. This
object is much narrower than the whole of factual reality, but the factual element
represents society’s decision on what is relevant for criminal liability and what
is not.
However, the other facts, and the mental relation to them, may affect the punish-
ment even though they are insignificant for the imposition of criminal liability. For
instance, a rapist who raped the victim in a particularly cruel way would be
convicted of rape whether he was cruel or not, but his punishment is very likely to
be much harsher than that of a less brutal rapist. Identifying the factual element
components as the object of general intent components is the basis for the structure
of general intent.
General intent has two layers of requirement:
The layer of cognition consists of awareness. Some legal systems use the term
“knowledge” to express the layer of cognition, but awareness seems more accurate.
In this context, however, awareness and knowledge function the same way and
carry the same meaning. A person is capable of being aware only of facts that
occurred in the past or are occurring at present, but is not capable of being aware
of future facts.
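The temporal limit on awareness can be put as a one-line predicate. The representation of time as minutes relative to the present is an assumption of mine for illustration only.

```python
# Sketch: awareness can attach only to facts that occurred in the past or
# are occurring at present; a future fact can at most be foreseen.
def awareness_possible(fact_time_minutes: float, now: float = 0.0) -> bool:
    """True iff the fact lies in the past or the present relative to now."""
    return fact_time_minutes <= now

assert awareness_possible(-2.0)      # A ate his ice-cream 2 min ago
assert awareness_possible(0.0)       # B is eating his ice-cream right now
assert not awareness_possible(5.0)   # C's announced future eating is only foreseeable
```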
For instance, a person can be aware of the fact that A ate his ice-cream 2 min ago,
and he can be aware of the fact that B is eating his ice-cream right now. C said that
he intends to eat his ice-cream; therefore most persons can predict it, foresee it, or
estimate the probability that it will happen, but no person is capable of being aware
of it, simply because it has not occurred yet. If the criminal law had required
25 See, e.g., State v. Dubina, 164 Conn. 95, 318 A.2d 95 (1972); State v. Bono, 128 N.J.Super. 254, 319 A.2d 762 (1974); State v. Fletcher, 322 N.C. 415, 368 S.E.2d 633 (1988).
84 4 Positive Fault Element Involving Artificial Intelligence Systems
26 This component of awareness functions also as the legal causal connection in general intent offenses, but this function has no additional significance in this context.
4.2 General Intent and Artificial Intelligence Systems 85
The first represents positive will (the offender wanted the results to occur), the
second represents neutrality (the offender was indifferent to the occurrence of the
results), and the third represents negative will (the offender did not want the results to occur,
but took an unreasonable risk which caused them to occur).
For example, in homicide offenses, at the moment in which the conduct is
committed, if the offender-
Intent is the highest level of will accepted by criminal law. Intended homicide is
considered murder in most countries. Indifference is an intermediate level, and rashness
is the lowest level of will. Both indifference and rashness are known as
“recklessness”. Reckless homicide is considered manslaughter in most countries.
Consequently, if the specific offense requires recklessness, this requirement may be
fulfilled through proof of intent, since a higher level of will covers the lower levels.
However, if the specific offense requires intent or specific intent, this requirement
may be fulfilled only through intent or specific intent.
Summing up the structure of general intent is much easier if the offenses are
divided into conduct-offenses and results-offenses. Conduct-offenses are offenses
27 “Specific intent” is sometimes mistakenly referred to as “intent” in order to distinguish it from “general intent”.
whose factual element requires no results, whereas the factual element of
results-offenses does.28 This division eases the understanding of the general
intent structure, since volition is required only in relation to the results. Therefore,
results require both cognition and volition, whereas conduct and circumstances
require only cognition. Thus, in conduct-offenses, whose factual element
requirement contains conduct and circumstances, the general intent requirement
contains awareness of these components.
In results-offenses, whose factual element requirement contains conduct,
circumstances and results, the general intent requirement contains awareness of the
conduct, of the circumstances and of the possibility of the occurrence of the results.
In addition, in relation to the results, the general intent requirement contains intent
or recklessness, according to the particular definition of the specific offense. This
general structure of general intent is a template which contains terms from the
mental terminology (awareness, intent, recklessness, etc.). In order to explore
whether artificial intelligence technology is capable of fulfilling the general intent
requirement in particular offenses, the definitions of these mental terms must be
explored.
28 SIR GERALD GORDON, THE CRIMINAL LAW OF SCOTLAND 61 (1st ed., 1967); Treacy v. Director of Public Prosecutions, [1971] A.C. 537, 559, [1971] 1 All E.R. 110, [1971] 2 W.L.R. 112, 55 Cr. App. Rep. 113, 135 J.P. 112.
29 William G. Lycan, Introduction, MIND AND COGNITION 3, 3–13 (William G. Lycan ed., 1990).
30 See, e.g., WILLIAM JAMES, THE PRINCIPLES OF PSYCHOLOGY (1890).
of internal and external stimulations that the individual is aware of at a specific point
in time. Consequently, it has been understood that the human mind is not constant, but
dynamic and constantly changing. The human mind has been described as a flow of
feelings, thoughts and emotions (a “stream of consciousness”).31
It has also been understood that the human mind is selective: humans
are capable of focusing their mind on certain stimulations while ignoring others.
The ignored stimulations do not enter the human mind at all. Had the human mind
included all internal and external stimulations, it could not have functioned normally,
being too busy paying attention to each of the stimulations. The function of the
sensory system of humans, and of any animal, is to absorb the stimulations (light,
sound, heat, pressure, etc.) and to transfer them to the brain for processing this
factual information.
Processing the information is executed in the brain as an internal process. The
factual data is processed from the stimulations up to the creation of a relevant general
image of the factual data in the human brain. This process is, in fact, the process of
perception. Perception is considered one of the basic skills of the human mind. At
any time, many stimulations are active. In order to enable the creation of an organized
image of the factual data, the human brain must focus on some of the stimulations
and ignore others, as aforesaid. This is done through a process of attention.
The process of attention enables the brain to concentrate on some stimulations
while others are ignored. In fact, the other stimulations are not totally ignored;
they still exist in the background of the perception process. The nervous system
remains in a state of adequate vigilance, ready to absorb other stimulations even
while the process of attention is working.
For instance, A is reading a book and is very focused on it. B calls him, wanting
to speak with him. The first time B calls, A does not react. When B calls
a second time, much louder, A reacts, asks what B wants, and
says that he must go to the bathroom. When A is focused on reading the book, many
of the existing stimulations are ignored (e.g., the sound of his heartbeat, the smells
from the kitchen, the pressure on his bladder, etc.) so that he can focus on
the book, but the nervous system remains in a state of adequate vigilance, ready to
absorb them.
When B calls him for the first time, it is just another sound to be ignored through the
process of attention. However, when B calls him for the second time, the attention
process of focusing on the book is interrupted, and the other stimulations are absorbed
and receive some attention. This is why A suddenly “recalls” that he must go to the
bathroom. Perception thus includes not only absorbing stimulations, but also processing
them into a relevant general image.
This relevant general image generally creates the meaning of the accumulation
of the stimulations. Processing the factual data into a relevant general image is done
through unconscious inference, so awareness of this process is not
31 See, e.g., BERNARD BAARS, IN THE THEATRE OF CONSCIOUSNESS (1997).
required.32 However, the results of this process (the relevant general image) are
conscious results. Thus, whereas the human mind is not conscious of most of the
process, it is conscious of its results when the relevant general image is accepted. As
a result, the human mind is considered to be aware only when the relevant general
image is accepted. This process is the essence of human awareness.
Awareness is the final stage of perception. Perception of factual data by the senses
and its understanding end with the creation of the relevant general image. The
creation of the relevant general image is, in fact, the awareness of the factual data.
Thus, for instance, it is not the eyes that are the human organ of sight, but the brain:
the eyes function as nothing but sensors which deliver the factual data to the brain.
Only when the brain creates the relevant general image is the human considered
aware of the relevant sight. Consequently, a human in a vegetative state, even when his
eyes function, is not considered to be seeing (or to be aware of what he sees), unless the
sights are combined into a relevant general image.
As a result, for a human to be considered aware of certain factual data, two
cumulative conditions are required:
32 HERMANN VON HELMHOLTZ, THE FACTS OF PERCEPTION (1878).
33 United States v. Youts, 229 F.3d 1312 (10th Cir.2000); State v. Sargent, 156 Vt. 463, 594 A.2d 401 (1991); United States v. Spinney, 65 F.3d 231 (1st Cir.1995); State v. Wyatt, 198 W.Va. 530, 482 S.E.2d 147 (1996); United States v. Wert-Ruiz, 228 F.3d 250 (3rd Cir.2000).
34 United States v. Jewell, 532 F.2d 697 (9th Cir.1976); United States v. Ladish Malting Co., 135 F.3d 484 (7th Cir.1998).
awareness seems to be the more accurate term. Proving the full awareness of the
offender in court beyond any reasonable doubt, as required in criminal law, is not an
easy task.
Awareness relates to internal processes of the mind, which do not necessarily have
external expressions. Therefore, criminal law has developed evidential substitutes
for this task. These substitutes are presumptions, which in certain types of situations
presume the existence of awareness. Two major presumptions are recognized in
most legal systems:
35 Paul Weiss, On the Impossibility of Artificial Intelligence, 44 REV. METAPHYSICS 335, 340 (1990).
36 See, e.g., TIM MORRIS, COMPUTER VISION AND IMAGE PROCESSING (2004); MILAN SONKA, VACLAV HLAVAC AND ROGER BOYLE, IMAGE PROCESSING, ANALYSIS, AND MACHINE VISION (2008).
37 WALTER W. SOROKA, ANALOG METHODS IN COMPUTATION AND SIMULATION (1954).
However, simple technology sensors do not “guess” this factual data; they
absorb it very accurately and transfer the information to the relevant processors for
processing. Consequently, artificial intelligence technology has the
capability of fulfilling the first stage of awareness. In fact, it does so much better
than humans do. The second stage of the awareness process is creating a relevant
general image of this data in the brain (full perception). Of course, most artificial
intelligence technologies, robots and computers do not possess biological brains,
but they possess artificial “brains”.
Most of these “brains” are embodied in the relevant hardware (processors, disks,
etc.) used by the relevant technology. Do these “brains” have the capability of creating
a relevant general image out of the absorbed factual data? The creation of a relevant
general image is done by humans through analysis of the factual data, so that it enables
us to use the information, transfer it, integrate it with other information, act
according to it, or, in fact, understand it.38
Let us take the example of security robots based on artificial intelligence
technology, and go step by step. Their task is to identify intruders and call the
human troops (police, army) or stop the intruders by themselves. The relevant
sensors (cameras and microphones) absorb the factual data and pass it to the processors.
The processor is supposed to identify the intruder as such. For this task it analyzes the
factual data. It must not confuse the intruder with the state’s policemen or soldiers
who patrol there. Therefore, it must analyze the factual data to identify the change in
sight and sound. It may compare the shape and color of clothes, and use other attributes
to identify the change in sight and sound. This process is very short.
Now it assesses the probabilities. If the probabilities do not form an accurate
identification, it starts a process of vocal identification. The software poses the
phrase “Identify yourself, please”, “Your password, please” or anything else
relevant to the situation. The figure’s answer and voice are compared to the
sounds in its memory. Now it has adequate factual data to make a decision
to act. In fact, this robot has created a relevant general image out of the factual data
absorbed by its sensors. The relevant general image enabled it to use the information,
transfer it, integrate it with other information, act according to it, or, in fact,
understand it.
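The step-by-step identification process described above can be sketched as a short decision pipeline. This is an illustrative sketch only: the thresholds, the stored “memory”, and the function names are hypothetical, not drawn from any real security system.

```python
# Illustrative sketch of the security robot's identification process
# described above. All names, thresholds and data are hypothetical.

KNOWN_PASSWORDS = {"alpha-7"}   # the robot's stored "memory" of valid answers

def visual_match_probability(shape, colour):
    """Compare the shape and colour of clothes against known personnel (toy rule)."""
    if shape == "uniform" and colour == "green":
        return 0.95   # almost certainly one of the state's own soldiers
    return 0.30       # low probability: possibly an intruder

def identify(shape, colour, spoken_answer):
    """Build the 'relevant general image' from the sensor data and decide."""
    p = visual_match_probability(shape, colour)
    if p >= 0.9:                       # probabilities form an accurate identification
        return "friendly"
    # otherwise, start the vocal identification process ("Your password, please")
    if spoken_answer in KNOWN_PASSWORDS:
        return "friendly"              # the answer matches the stored information
    return "alert"                     # call the human troops or stop the intruder

print(identify("uniform", "green", None))    # friendly
print(identify("coat", "black", None))       # alert
print(identify("coat", "black", "alpha-7"))  # friendly
```

The point of the sketch is only that each stage (visual comparison, probability assessment, vocal challenge, decision) is an explicit, recordable computation.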
For comparison, how would a human guard act in this situation? He would
probably act in much the same way. The human guard sees or hears a suspicious figure or
sound. For the human it is suspicious, but for the robot, any change in the current
image or sound is examined. This is why robot-guards equipped with artificial
intelligence technology were preferred: they work much more thoroughly and do not
fall asleep while guarding. The human guard searches his memory to see whether he can
identify the figure or the sound as one of his friends’; the robot compares it to its memory.
The robot cannot forget figures; humans can. The human guard is not sure, whereas the
robot assesses the probabilities. The human guard shouts for identification, a password,
etc., and so does the robot. The answer is compared to the existing information
in memory both by human and robot guards, only it is done more
accurately by the robot.
38 See, e.g., DAVID MANNERS AND TSUGIO MAKIMOTO, LIVING WITH THE CHIP (1995).
Consequently, the relevant decision is made. The human guard understood the
situation, and so did the robot. It may be said that the human guard was aware of the
relevant factual data. Can the same not be said of the robot guard? In fact,
there is no reason why not. Their internal processes were much the
same, except that the robot was more accurate, faster, and more thorough.
The human guard was aware of the figure or sound he absorbed and acted accordingly,
and so was the robot guard.
Some may argue that the human guard may have absorbed much more factual
information in addition, such as fear signs, and that he is capable of filtering out
irrelevant information, such as background sounds. This type of argument does
not weaken the above analysis. First, artificial intelligence technology is capable of
absorbing factual data such as fear signs. However, as discussed above, humans use the
attention process so they can focus on part of the factual data. Although humans
have the capability of absorbing a wider range of factual data, doing so would only
disturb their daily life.
Artificial intelligence technology may be programmed for that. If fear signs are
considered irrelevant for guarding tasks, artificial intelligence technology, if well
designed, would not consider this data. If it were human, we would say that it does
not pay attention to this data. Filtering irrelevant data may
be done by humans through the process of attention that runs in the background of the
human mind, but artificial intelligence technology need not filter it through a background
process.
The artificial intelligence technology would examine all factual data, and would
eliminate the irrelevant options only after analyzing the factual data thoroughly. At
this point, modern society may ask itself who is better suited to function as a
guard: those who unconsciously fail to pay attention to some of the factual data, or those
which examine all factual data thoroughly. As a result, artificial intelligence
technology also has the capability of fulfilling the second stage of awareness.
Since these two stages analyzed above are the only stages of the awareness process
in criminal law, it may be concluded that artificial intelligence technology has the
capability of fulfilling the awareness requirement in criminal law.
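The contrast drawn above, that humans filter stimulations through a background attention process while a machine can first examine all absorbed data and only then discard what is irrelevant, can be sketched as follows. The stimuli, their categories, and the relevance rule are all hypothetical illustrations:

```python
# Hypothetical sketch: the system analyzes every absorbed stimulation and
# only afterwards eliminates those its design marks as irrelevant to guarding.

STIMULI = [
    ("footsteps", "sound"),
    ("fear signs", "physiology"),   # absorbable, but irrelevant to the guarding task
    ("kitchen smell", "odour"),
    ("figure at fence", "sight"),
]

RELEVANT_KINDS = {"sound", "sight"}  # explicit design decision, not background attention

def filter_after_analysis(stimuli):
    analyzed = list(stimuli)         # every stimulation is examined first
    return [name for name, kind in analyzed if kind in RELEVANT_KINDS]

print(filter_after_analysis(STIMULI))   # ['footsteps', 'figure at fence']
```

Unlike human attention, nothing here is discarded before analysis; the elimination is itself a recorded, inspectable step.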
Some may feel that something is still missing before concluding that
machines are capable of awareness. That may be right if awareness in its broader
sense, as used in psychology, philosophy, the cognitive sciences, etc., were discussed
here. However, criminal law is supposed to examine the criminal liability of
artificial intelligence technology, not the wide meanings of cognition in
psychology, philosophy, the cognitive sciences, etc. Therefore, the only standards of
awareness that may be relevant for examination are the standards of criminal law.
All other standards are irrelevant for the assessment of criminal liability imposed
both upon humans and upon artificial intelligence technology. The criminal law
definition of awareness is, indeed, much narrower than the parallel definitions in the
other spheres of knowledge. But this is true not only for the imposition of criminal
liability upon artificial intelligence technology, but also upon humans.39
As aforesaid, awareness itself is very difficult to prove in court, especially
in criminal cases, where it must be proved beyond any reasonable doubt.40
Therefore, criminal law has developed two evidential substitutes for this task:
39 Perhaps the definitions of awareness in psychology, philosophy, cognitive sciences, etc. may be relevant to research on thinking machines, but not to the imposition of criminal liability, which is fed by the definitions of criminal law.
40 In re Winship, 397 U.S. 358, 90 S.Ct. 1068, 25 L.Ed.2d 368 (1970).
41 United States v. Heredia, 483 F.3d 913 (2006); United States v. Ramon-Rodriguez, 492 F.3d 930 (2007); Saik, [2006] U.K.H.L. 18, [2007] 1 A.C. 18; Da Silva, [2006] E.W.C.A. Crim. 1654, [2006] 4 All E.R. 900, [2006] 2 Cr. App. Rep. 517; Evans v. Bartlam, [1937] A.C. 473, 479, [1937] 2 All E.R. 646; G.R. Sullivan, Knowledge, Belief, and Culpability, CRIMINAL LAW THEORY – DOCTRINES OF THE GENERAL PART 207, 213–214 (Stephen Shute and A.P. Simester eds., 2005).
42 State v. Pereira, 72 Conn. App. 545, 805 A.2d 787 (2002); Thompson v. United States, 348 F.Supp.2d 398 (2005); Virgin Islands v. Joyce, 210 F. App. 208 (2006).
Volition, or in this context the volitive aspect of general intent, contains three levels
of will:
(a) intent;
(b) indifference; and
(c) rashness.
43 See, e.g., United States v. Doe, 136 F.3d 631 (9th Cir.1998); State v. Audette, 149 Vt. 218, 543 A.2d 1315 (1988); Ricketts v. State, 291 Md. 701, 436 A.2d 906 (1981); State v. Rocker, 52 Haw. 336, 475 P.2d 684 (1970); State v. Hobbs, 252 Iowa 432, 107 N.W.2d 238 (1961); State v. Daniels, 236 La. 998, 109 So.2d 896 (1958).
44 People v. Disimone, 251 Mich.App. 605, 650 N.W.2d 436 (2002); Carter v. United States, 530 U.S. 255, 120 S.Ct. 2159, 147 L.Ed.2d 203 (2000); State v. Neuzil, 589 N.W.2d 708 (Iowa 1999); People v. Henry, 239 Mich.App. 140, 607 N.W.2d 767 (1999); Frey v. United States, 708 So.2d 918 (Fla.1998); United States v. Randolph, 93 F.3d 656 (9th Cir.1996); United States v. Torres, 977 F.2d 321 (7th Cir.1992).
particular offense, they are insignificant for the imposition of criminal liability.45
For instance: (1) A killed B out of hatred; and (2) A killed B out of mercy are both
considered murder, since the particular offense of murder does not require specific
intent towards certain purposes or motives.
However, had the particular offense explicitly required specific intent
towards purposes or motives, proving that specific intent would have been a
condition for the imposition of criminal liability for that offense. As aforesaid, the
requirement of specific intent in particular offenses is relatively rare. Since the
only difference between intent and specific intent relates to their objects, and the
level of will in both is identical, the following analysis of intent is relevant
to specific intent as well.
Intent is defined as an aware will, accompanying the commission of the conduct,
that the results derived from that conduct will occur. In the definition of specific
intent, results are replaced by motives and purposes, as follows: an aware will,
accompanying the commission of the conduct, that the motive for the conduct
be satisfied or that the purpose of the conduct be achieved.
Intent is an expression of positive will, i.e., the will that a factual event occur.46
Although there are higher levels of will than intent (e.g., lust, longing,
desire, etc.), intent has been accepted in criminal law as the highest level of will that
may be required for the imposition of criminal liability in particular offenses.47
Consequently, no particular offense requires a higher level of will than intent, and if
intent is proven, it satisfies all other levels of will.
A distinction should be drawn between aware will and unaware will. Unaware will is
an internal urge, impulse or instinct of which the human is not aware. Unaware will
is naturally uncontrollable: an individual is incapable of controlling his will
unless he is aware of it. Being aware of the will does not guarantee the capability of
controlling it, but controlling the will requires being aware of that will. Controlling
the will requires activating conscious processes in the human mind that may cause
the relevant activity to cease, to be initiated or not to be interfered with. Imposing
criminal liability on the basis of intent requires the will to be aware so that the will is
controllable. Intent does not require an abstract aware will.
The aware will is required to be focused on certain targets: results, motives or
purposes. The intent is an aware will which is focused on these certain targets. For
instance, the murderer’s intent is an aware will which is focused on causing the
victim’s death. This will is required to exist simultaneously with the commission of
the conduct and accompany it for the intent to be relevant to the imposition of
criminal liability. If one killed another by mistake and after the victim’s death he
45 Schmidt v. United States, 133 F. 257 (9th Cir.1904); State v. Ehlers, 98 N.J.L. 263, 119 A. 15 (1922); United States v. Pomponio, 429 U.S. 10, 97 S.Ct. 22, 50 L.Ed.2d 12 (1976); State v. Gray, 221 Conn. 713, 607 A.2d 391 (1992); State v. Mendoza, 709 A.2d 1030 (R.I.1998).
46 LUDWIG WITTGENSTEIN, PHILOSOPHISCHE UNTERSUCHUNGEN §§ 629–660 (1953).
47 State v. Ayer, 136 N.H. 191, 612 A.2d 923 (1992); State v. Smith, 170 Wis.2d 701, 490 N.W.2d 40 (App.1992).
wishes that the victim be dead, it is not intent. Only when the killing is actually
committed, accompanied simultaneously by the relevant will, may it be considered
intent.
Proving intent is much more difficult than proving awareness. Although both of
them are internal processes of the human mind, whereas awareness relates to
current facts, intent relates to a future factual situation. Awareness is rational and
realistic, whereas intent is not necessarily so. For instance, a person may intend to
become an elephant, but that person cannot be aware of being an elephant, as he is
not one. The deep difficulty of proving intent has led criminal law to develop an
evidential substitute for this task.
The commonly used substitute is the foreseeability rule (dolus indirectus). The
foreseeability rule is a legal presumption whose purpose is to prove the existence of
intent. It provides that the offender is presumed to have intended the occurrence of the
results if the offender, during the aware commission of the conduct, foresaw the
occurrence of the results as a very high probability option.48 This presumption is
relevant also to specific intent, if the object of results is replaced by purpose.49 The
rationale of this presumption is that one who believes that the probability of a certain
factual event occurring out of the conduct is extremely high, and nevertheless commits
that conduct, thereby expresses a will for that factual event to occur.
For example, A holds a loaded gun pointed at B’s head. A knows that B’s death
from being shot in the head is a factual event of very high probability.
Consequently, A pulls the trigger. In court, A argues that he did not want the death to
occur, so the required component of intent is not fulfilled and he should be
acquitted. If the court exercises the foreseeability rule presumption, the shooter is
presumed to have intended the occurrence of the results. Since the shooter assessed
death as a very high probability result and acted accordingly, he is presumed
to have wanted these results to occur.50
Using this presumption in court to prove intent is very common. In fact,
unless the defendant explicitly confesses to the existence of the intent during
interrogation, the prosecution would rather prove the intent through this
presumption.
48 Studstill v. State, 7 Ga. 2 (1849); Glanville Williams, Oblique Intention, 46 CAMB. L. J. 417 (1987).
49 People v. Smith, 57 Cal. App. 4th 1470, 67 Cal. Rptr. 2d 604 (1997); Wieland v. State, 101 Md. App. 1, 643 A.2d 446 (1994).
50 Stephen Shute, Knowledge and Belief in the Criminal Law, CRIMINAL LAW THEORY – DOCTRINES OF THE GENERAL PART 182–187 (Stephen Shute and A.P. Simester eds., 2005); ANTONY KENNY, WILL, FREEDOM AND POWER 42–43 (1975); JOHN H. SEARLE, THE REDISCOVERY OF MIND 62 (1992); ANTONY KENNY, WHAT IS FAITH? 30–31 (1992); State v. VanTreese, 198 Iowa 984, 200 N.W. 570 (1924); but see Montgomery v. Commonwealth, 189 Ky. 306, 224 S.W. 878 (1920); State v. Murphy, 674 P.2d 1220 (Utah.1983); State v. Blakely, 399 N.W.2d 317 (S.D.1987).
The question is whether artificial intelligence technology has the capability of
having intent, in the context of criminal law.51 Since will may be a vague and general
term, even in criminal law, the capability of artificial intelligence to have intent
should be examined through the foreseeability rule presumption. In fact, this is the
core reason for using this presumption to prove human intent. Two
conditions must be satisfied under this rule:
(1) the occurrence of the results has been foreseen as a very high probability
option;
(2) the conduct has been committed under awareness.
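The two conditions of the presumption can be put in a toy formal form. The numeric threshold below is an assumption made purely for illustration; the law speaks only of a “very high probability”, never of a precise number.

```python
# Toy formalisation of the foreseeability rule (dolus indirectus).
# The cut-off value is hypothetical; "very high probability" has no fixed number in law.

VERY_HIGH = 0.95

def intent_presumed(foreseen_probability, conduct_was_aware):
    """Intent is presumed only if both conditions of the rule hold."""
    return conduct_was_aware and foreseen_probability >= VERY_HIGH

# A aims a loaded gun at B's head and pulls the trigger:
print(intent_presumed(0.99, True))   # True: intent is presumed
print(intent_presumed(0.40, True))   # False: mere risk-taking, no presumed intent
print(intent_presumed(0.99, False))  # False: the conduct was not aware
```

The conjunction makes the structure of the rule visible: foreseeability alone, without aware conduct, proves nothing.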
51 See, e.g., Ned Block, What Intuitions About Homunculi Don’t Show, 3 BEHAVIORAL & BRAIN SCI. 425 (1980); Bruce Bridgeman, Brains + Programs = Minds, 3 BEHAVIORAL & BRAIN SCI. 427 (1980).
52 See, e.g., FENG-HSIUNG HSU, BEHIND DEEP BLUE: BUILDING THE COMPUTER THAT DEFEATED THE WORLD CHESS CHAMPION (2002); DAVID LEVY AND MONTY NEWBORN, HOW COMPUTERS PLAY CHESS (1991).
this is true not only for the imposition of criminal liability upon artificial intelli-
gence technology, but also upon humans.53 The actual evidence for the artificial
intelligence technology’s intent is based on the ability to monitor and record all the
software’s activities. Each stage in the consolidation of intent or foreseeability is
monitored and recorded as part of the computer’s activity. Assessing probabilities
and making the relevant decisions are part of the computer’s activity.
Consequently, there will always be direct evidence for proving an artificial intelligence
technology’s criminal intent, if proven through the foreseeability rule presumption.
If intent is proven, directly or through the foreseeability rule presumption, all
other forms of volition may be proven accordingly. Since recklessness, comprising
indifference or rashness, is a lower degree of will, it may be proven through
direct proof of recklessness or through proof of intent.
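The evidential point above, that every stage of the system’s decision process can be monitored and recorded, can be sketched as a minimal decision log. The structure and field names are hypothetical:

```python
# Minimal sketch of a monitored decision process: each probability assessment
# and decision is recorded, so the record itself can serve as direct evidence.

decision_log = []

def decide(action, foreseen_results):
    """Record the foreseen results and the chosen action before acting."""
    decision_log.append({"action": action, "foreseen": foreseen_results})
    return action

decide("pull_trigger", {"death": 0.99})

# The "mental" findings can later be read directly from the log:
entry = decision_log[0]
print(entry["action"])                     # pull_trigger
print(entry["foreseen"]["death"] >= 0.95)  # True: foreseen as near-certain
```

In this sense the machine differs from the human offender: its foreseeability need not be inferred from circumstances, because the assessment itself was recorded.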
There are no offenses which require recklessness and nothing but recklessness.
In general, any requirement in the law represents only the minimum condition for
the imposition of criminal liability. The prosecution may choose between proving
the stated requirement or any higher one, but not a lower one. Consequently,
specific offenses which require recklessness as their mental element
requirement may be satisfied through proof of intent (directly or through the
foreseeability rule presumption) or of recklessness.
Thus, if the specific artificial intelligence technology has the capability
described above with respect to foreseeability, it has the capability of fulfilling the mental
element requirements of both intent offenses and recklessness offenses. Moreover,
artificial intelligence technology also has the capability of fulfilling the recklessness
requirement directly. By analogy, if the capability of intent exists, the
capability of recklessness, which is a lower capability, exists as well.
Indifference is the higher level of recklessness, and it consists of aware volitional
neutrality towards the occurrence of the factual event. For the indifferent person,
the option that the factual event occurs and the option that it does not are of the same
significance. This has nothing to do with the actual probability of
the factual event occurring, but only with the offender’s internal volition towards the
occurrence of the factual event.
For instance, A and B are playing Russian roulette.54 At B’s turn, A is indifferent
to the possibility of B’s death: he does not care whether B dies or lives. For
indifference to be considered as such in the context of criminal law, it must be
aware: the offender is required to be aware of the relevant options and to have no
certain preference between them.55 The decision-making process of strong artificial
intelligence technology is based on assessing probabilities, as discussed above.
53 The definitions of intent in psychology, philosophy, cognitive sciences, etc. may be relevant to research on thinking machines, but not to criminal liability, which is fed by the definitions of criminal law.
54 Russian roulette is a game of chance in which participants place a single round in a gun, spin the cylinder, place the muzzle against their head and pull the trigger.
55 G., [2003] U.K.H.L. 50, [2003] 4 All E.R. 765, [2004] 1 Cr. App. Rep. 237, 167 J.P. 621, [2004] Crim. L.R. 369, [2004] 1 A.C. 1034; Victor Tadros, Recklessness and the Duty to Take Care,
When the artificial intelligence technology makes a decision to act in a certain way,
but this decision does not take into consideration the probability of one specific
factual event occurring, it is indifferent to the occurrence of that factual event.
In general, complicated decision-making processes for artificial intelligence
technology are characterized by a large number of factors to be considered. Humans in
such situations tend to ignore some of the factors and not take them into consideration.
So do computers. Some of them are programmed to ignore certain factors, but
strong artificial intelligence technology has the capability of learning to
ignore factors. Otherwise, the decision-making process would be impossible. This
learning process of strong artificial intelligence technology is based on “machine
learning”, which is inductive learning from examples. The more examples are
analyzed, the more effective the learning, and this is sometimes referred to as
“experience”.56
Since the decision-making process is monitored whenever the decision-maker is
an artificial intelligence technology, there is no evidential problem in proving the
artificial intelligence technology’s indifference towards the occurrence of the
relevant factual event. The awareness of the artificial intelligence technology of
the possibility of the event’s occurrence is monitored, as well as the factors which
were taken into consideration in the decision-making process. This data enables
direct proof of the artificial intelligence technology’s indifference. Indifference
may be proven through the foreseeability rule presumption as well.
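On the assumptions above, indifference can be read off a recorded decision when the system was aware of a possible result yet attached no weight to it in choosing how to act. A minimal hypothetical sketch:

```python
# Hypothetical sketch: indifference as aware volitional neutrality, read
# directly from the recorded awareness and the recorded decision weights.

def is_indifferent(aware_of, weights, event):
    """Aware of the event's possibility, yet no preference attached to it."""
    return event in aware_of and weights.get(event, 0.0) == 0.0

aware_of = {"bystander_injury"}       # the possibility was foreseen
weights = {"mission_success": 1.0}    # but the injury was given zero weight

print(is_indifferent(aware_of, weights, "bystander_injury"))  # True
# A negative weight would show the event was undesired, i.e., not indifference:
print(is_indifferent(aware_of, {"bystander_injury": -1.0}, "bystander_injury"))  # False
```

The second call illustrates the boundary with rashness discussed next: a recorded negative weight shows the event was undesired, not a matter of neutrality.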
Rashness is the lower level of recklessness, and it consists of an aware volition that
the relevant factual event not occur, coupled with aware, unreasonable
conduct which causes it to occur. For the rash person, the occurrence of the factual
event is undesired; nevertheless, he engages in conduct that creates an unreasonable risk
of that event occurring. Rashness is considered a degree of will, since if the rash offender
had not wanted the event to occur at all, he would not have taken the unreasonable risk
through his conduct. This is the major reason that rashness is part of the volitive
aspect of general intent. Otherwise, the negative will itself would not justify
criminal liability.
For instance, a car driver is driving in a narrow road behind a very slow truck.
The road has two routes in opposite directions and they are divided by continuous
separation line. The driver is very hurry. Eventually, he decides to bypass the truck
through crossing the line. He does not want to kill anyone, but only to bypass the
truck. However, a motorcycle comes across, the car hits it and the motorcycle driver
is killed. If the car driver would have wanted him to be dead, he was criminally
liable for murder. It would not be true to say he was indifferent as to the motorcycle
driver’s death. However, he was rash. He did not want to hit the motorcycle, but has
taken unreasonable risk for the death to occur. In this case, he would be criminally
liable for manslaughter.
CRIMINAL LAW THEORY – DOCTRINES OF THE GENERAL PART 227 (Stephen Shute and A.P. Simester
eds., 2005); Gardiner, [1994] Crim. L.R. 455.
56 VOJISLAV KECMAN, LEARNING AND SOFT COMPUTING: SUPPORT VECTOR MACHINES, NEURAL NETWORKS AND FUZZY LOGIC MODELS (2001).
4.2 General Intent and Artificial Intelligence Systems 101
57 K., [2001] U.K.H.L. 41, [2002] 1 A.C. 462; B. v. Director of Public Prosecutions, [2000] 2 A.C. 428, [2000] 1 All E.R. 833, [2000] 2 W.L.R. 452, [2000] 2 Cr. App. Rep. 65, [2000] Crim. L.R. 403.
In general, when an offender fulfills both the factual and mental element requirements of a specific offense, criminal liability for that offense is imposed. In doing so, the court has no need to investigate whether the offender was “evil” or whether any other attribute characterized the commission of the offense. The fulfillment of these requirements is the only condition for the imposition of criminal liability. Other information may affect the punishment, but not the criminal liability.58
The factual and mental elements are neutral in this context. They do not necessarily contain “evil” or “good”.59 Their fulfillment is much more “technical” than the detection of “evil”. For example, society prohibits murder. Murder is causing the death of a human being with awareness and with intent to cause that death. If an individual factually caused another person’s death, the factual element requirement is fulfilled. If the conduct was committed with awareness and intent, the mental element is fulfilled. At this point that individual is criminally liable for murder, unless a general defense is applicable (e.g., self-defense, insanity, etc.).
The reason for the murder is immaterial for the imposition of criminal liability. It is insignificant whether the murder was committed out of mercy (euthanasia) or
out of evil. This is the way criminal liability is imposed on human offenders. If this standard is embraced in relation to artificial intelligence technology, the criminal law would be able to impose criminal liability upon artificial intelligence technology as well. This is the basic idea behind the criminal liability of artificial intelligence technology. This idea is different from their moral accountability, social responsibility, or even civil legal personhood.60
The narrow definitions of criminal liability enable artificial intelligence technology to become subject to criminal law. Nevertheless, some may feel that something is missing in this analysis, and that the analysis may therefore fall short. Such feelings may be refuted by rational arguments. One feeling is that the capacity of an artificial intelligence technology to follow a program is not sufficient to enable the system to make moral judgments and exercise discretion, even though the program may contain a tremendously elaborate and complex system of rules.61 This feeling ultimately relates to the moral choice of the offender.
The deeper argument is that no formal system could adequately make the moral
choices with which an offender may be confronted. Two answers may be relevant to
58 Robert N. Shapiro, Of Robots, Persons, and the Protection of Religious Beliefs, 56 S. CAL. L. REV. 1277, 1286–1290 (1983); Nancy Sherman, The Place of the Emotions in Kantian Morality, Identity, Character, and Morality 149, 145–162 (Owen Flanagan & Amelie O. Rorty eds., 1990); Aaron Sloman, Motives, Mechanisms, and Emotions, The Philosophy of Artificial Intelligence 231, 231–232 (Margaret A. Boden ed., 1990).
59 JOHN FINNIS, NATURAL LAW AND NATURAL RIGHTS 85–90 (1980).
60 Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231, 1262 (1992).
61 OWEN J. FLANAGAN, JR., THE SCIENCE OF THE MIND 224–241 (2nd ed., 1991).
this argument. First, it is not at all certain that formal systems are morally blind. There are many types of morality and moral values. Teleological morality, such as utilitarianism, for example, deals with the utility values of the conduct.62 These values may be measured, compared, and decided upon according to their quantitative comparison. For instance, suppose an artificial intelligence technology controls a heavy wagon. The wagon malfunctions, and there are two possible paths along which to drive it. The software calculates the probabilities: one path involves the death of one person, the other the death of 50.
Teleological morality would direct any human to choose the first, and he would be considered a moral person. So would the artificial intelligence technology. Its morality is dictated by its program, which makes it evaluate the consequences of each possible course. In fact, this is the way the human mind acts.
Second, even if the first answer is not convincing, and formal systems are incapable of morality of any kind, criminal liability is still neither dependent on nor fed by any morality. Morality is not even a precondition for the imposition of criminal liability.
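The utility comparison in the wagon example above can be sketched as follows; the probability and casualty figures are assumptions introduced for illustration, not taken from the text.

```python
# Minimal sketch of the teleological (utilitarian) choice in the wagon
# example: pick the path with the smaller expected loss of life.
# The figures below are illustrative assumptions.

def choose_path(paths):
    """`paths` maps a path name to (probability of fatality, casualties)."""
    # Expected deaths = probability * casualties; choose the minimum.
    return min(paths, key=lambda name: paths[name][0] * paths[name][1])


chosen = choose_path({
    "path_a": (1.0, 1),   # one person would die
    "path_b": (1.0, 50),  # fifty people would die
})
```

`choose_path` returns `"path_a"`, the consequence-minimizing choice the text attributes to teleological morality.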
Criminal courts do not assess human offenders’ morality for the imposition of criminal liability. The offender may be very moral from the court’s perspective, but still be convicted (e.g., euthanasia), and the offender may be very immoral from the court’s perspective, but still be acquitted (e.g., adultery). Since morality of any kind is not required for the imposition of criminal liability upon human offenders, the question is why it should be a consideration when an artificial intelligence technology is involved.
Another feeling is more banal: artificial intelligence technology is not human, and criminal liability is designed for humans only, since it involves constitutional human rights that only humans may have.63
In this context, it is immaterial whether the constitutional rights referred to are substantive or procedural. The answer here is that criminal law may have been originally designed for humans, but since the seventeenth century it has not been exclusive to humans, as discussed above.64 Corporations, which are non-human creatures, are also subject to criminal law, and not only to criminal law. Some adjustments may be needed, but criminal liability and punishments have been imposed upon corporations for the past four centuries. Some argue that although corporations have been recognized as subject to criminal law, the personhood of artificial intelligence technology should not be recognized, and that for the human benefit, since humans have no interest in recognizing it.65
This argument cannot be considered applicable in an analytic legal discussion of the criminal liability of artificial intelligence technology. There are many cases in daily life in which the imposition of criminal liability has no benefit for human society, and still criminal liability is imposed. The famous example is Kant’s, who claimed that even if the last human person on Earth is an offender, he should be
62 See, e.g., DAVID LYONS, FORMS AND LIMITS OF UTILITARIANISM (1965).
63 See Solum, supra note 60, at pp. 1258–1262.
64 Above at paragraph 2.2.
65 EDWARD O. WILSON, SOCIOBIOLOGY: THE NEW SYNTHESIS 120 (1975).
punished, even though that leads to the extinction of humanity.66 Human benefit has not been recognized as a valid component of criminal liability.
Another feeling is that the concept of awareness presented above is too shallow for artificial intelligence technology to be called to account, blamed, and faulted for the factual harm it may cause.67
This feeling is based on a confusion between the concept of awareness and consciousness in psychology, philosophy, theology, and the cognitive sciences, and the concept of awareness in criminal law. In those spheres of knowledge, outside criminal law, society lacks a clear notion of what awareness is. Lacking such a notion precludes serious answers to the question of artificial intelligence’s capability of awareness. It would be correct to say that in most cases such answers, whenever given, were based on intuition rather than science.68 However, criminal law must be accurate, and criminal liability must be accurate when imposed.
Based on criminal law definitions, people may be jailed for life, their property may be taken, and even their very lives. Therefore, the criminal law definitions must be accurate and proven beyond any reasonable doubt. Imposition of criminal liability for general intent offenses is based on awareness as the major and dominant component of the mental element requirement. The fact that the term “awareness” in psychology, philosophy, theology, and the cognitive sciences has not been developed adequately for the creation of an accurate definition does not exempt the criminal law from developing such a definition of its own for the purpose of imposing criminal liability.
The criminal law definition of awareness, like any other legal definition, might be extremely different from its daily meaning or its meaning, if any, in psychology, philosophy, theology, and the cognitive sciences. The criminal law definitions are designed and adapted to fulfill the needs of criminal law, and nothing beyond that. These definitions represent the necessary requirements for the imposition of criminal liability. They also represent the minimal conditions, both structurally and substantively. Consequently, the definitions of criminal law, including the definition of awareness, have a relative value, which is relevant only to the criminal law.
Since these definitions are formed according to the concept of minimal requirements, as discussed above,69 they might be regarded as shallow. However, that is true only when they are examined through the perspectives of psychology, philosophy, theology, and the cognitive sciences, not that of criminal law. Should a significant development concerning awareness occur in those scientific spheres in the future, the criminal law may embrace newer and more complicated definitions of awareness inspired by them. For the time being, there are none.
66 ROGER J. SULLIVAN, IMMANUEL KANT’S MORAL THEORY 68 (1989).
67 RAY JACKENDOFF, CONSCIOUSNESS AND THE COMPUTATIONAL MIND 275–327 (1987); COLIN MCGINN, THE PROBLEM OF CONSCIOUSNESS: ESSAYS TOWARDS A RESOLUTION 202–213 (1991).
68 DANIEL C. DENNETT, BRAINSTORMS 149–150 (1978).
69 Above at Sect. 2.1.2.
(a) lack of attributes which are not required for the imposition of criminal liability (e.g., soul, evil, good, etc.); and
(b) shallowness of criminal law definitions from the perspective of some spheres of science other than criminal law (e.g., psychology, philosophy, theology and cognitive sciences).
70 DANIEL C. DENNETT, THE INTENTIONAL STANCE 327–328 (1987).
towards the criminal liability of artificial intelligence technology for general intent offenses does not end with the commission of the particular offense as a principal offender. General intent offenses may also be committed through complicity. Accomplices are required to form no less than general intent for their criminal liability as accomplices. There is no complicity through negligence or through strict liability.71
Thus, a joint-perpetrator may be considered as such only in relation to general intent offenses. The other general forms of complicity (e.g., inciters and accessories) require at least general intent as well. Consequently, since all general forms of complicity require at least general intent, an artificial intelligence system may be considered an accomplice only if it actually formed general intent. Opening the gate to the imposition of criminal liability upon artificial intelligence technology as a direct offender also opens the gate to accepting artificial intelligence technology as an accomplice, joint-perpetrator, inciter, accessory, etc., as long as both the factual and mental element requirements are met in full.
71 See, e.g., People v. Marshall, 362 Mich. 170, 106 N.W.2d 842 (1961); State v. Gartland, 304 Mo. 87, 263 S.W. 165 (1924); State v. Etzweiler, 125 N.H. 57, 480 A.2d 870 (1984); People v. Kemp, 150 Cal.App.2d 654, 310 P.2d 680 (1957); State v. Hopkins, 147 Wash. 198, 265 P. 481 (1928); State v. Foster, 202 Conn. 520, 522 A.2d 277 (1987); State v. Garza, 259 Kan. 826, 916 P.2d 9 (1996); Mendez v. State, 575 S.W.2d 36 (Tex.Crim.App.1979).
72 Francis Bowes Sayre, Criminal Responsibility for the Acts of Another, 43 HARV. L. REV. 689, 689–690 (1930).
conduct but also for that of all his subjects (slaves, workers, family, etc.). When one
of his subjects committed an offense, it was considered as if the master himself had
committed the offense, and the master was obligated to respond to the indictment
(respondeat superior). The legal meaning of this obligation was that the master was
criminally liable for offenses physically committed by his subjects.
The rationale for this concept was that the master should enforce the criminal
law among his subjects. If the master failed to do so, he was personally liable for the
offenses committed by his subjects. As the master’s subjects were considered to be
his property, he was liable for the harms committed by them both under criminal
and civil law. A subject was considered as an organ of the master, as his long arm.
The legal maxim that governed vicarious liability stated that whoever acts through
another is considered to be acting for himself (qui facit per alium facit per se).
The physical appearance of the commission of the offense was insignificant for
the imposition of criminal liability in this context. This legal concept was accepted
in most ancient legal systems. Based on it, the Roman law developed the function of
the father of the family (paterfamilias), who was responsible for any crime or tort
committed by members of the family, its servants, guards, and slaves.73 Conse-
quently, the father of the family was responsible for the prevention of criminal
offenses and civil torts among his subjects. The incentive for doing so was the father of the family’s fear of criminal or tort liability for the actions of members of his household. The legal concept of vicarious liability was absorbed into medieval
European law.
The concept of vicarious liability was formally and explicitly accepted in
English common law in the fourteenth century,74 based on legislation enacted in
the thirteenth century.75 Between the fourteenth and seventeenth centuries, English
common law amended the concept and ruled that the master was liable for the
servants’ offenses (under criminal law) and torts (under civil law) only if he
explicitly ordered the servant to commit the offenses, explicitly empowered them
to do so, or consented to their doing so before the commission of the offense (ex
ante), or after the commission of the tort (ex post).76 At the end of the seventeenth century, this firm requirement was replaced by a much weaker one.
Criminal and civil liability could be imposed on the master for offenses and torts
committed by the servants even if the orders of the master were implicit or the
73 Digesta, 9.4.2; Ulpian, 18 ad ed.; OLIVIA F. ROBINSON, THE CRIMINAL LAW OF ANCIENT ROME 15–16 (1995).
74 Y.BB. 32–33 Edw. I (R. S.), 318, 320 (1304); Seaman v. Browning, (1589) 4 Leonard 123, 74 Eng. Rep. 771.
75 13 EDW. I, St. I, c.2, art. 3, c.II, c.43 (1285). See also FREDERICK POLLOCK AND FREDERICK WILLIAM MAITLAND, THE HISTORY OF ENGLISH LAW BEFORE THE TIME OF EDWARD I 533 (rev. 2nd ed., 1898); Oliver W. Holmes, Agency, 4 HARV. L. REV. 345, 356 (1891).
76 Kingston v. Booth, (1685) Skinner 228, 90 Eng. Rep. 105.
empowerment of the servant was general.77 This was the result of an attempt by English common law to deal with the many tort cases against workers at the dawn of the first industrial revolution in England and with the commercial developments of that time. The actions committed by the master’s workers were considered to be actions of the master because he enjoyed their benefits. And if the master enjoyed the benefits of these actions, he should be legally liable, both in criminal and in civil law, for the harm they may cause.
In the nineteenth century, the requirements were further weakened, and it was
ruled that if the worker’s actions were committed through or as part of the general
course of business, the master was liable for them even if no explicit or implicit
orders had been given. Consequently, the defense argument of the worker having
exceeded his authority (ultra vires) was rejected. Thus, even if the worker acted in
contradiction to the specific order of his superior, the superior was still liable for the
worker’s actions if they were carried out in the general course of business. This
approach was developed in tort law, but the English courts did not restrict it to tort
law and applied it to criminal law as well.78
Nevertheless, vicarious liability was developed under very specific social conditions, in which only individuals of the upper classes had the required competence to be considered legal entities. In Roman law only the father of the family could become a prosecutor, plaintiff, or defendant. When the concept of social classes began to fade, in the nineteenth century, vicarious liability faded away with it. In the criminal law of the beginning of the nineteenth century, the cases of vicarious liability were divided into three main types of criminal liability.
The first type was that of classic complicity. If the relations between the parties
were based on real cooperation, they were classified as joint-perpetration even if the
parties had an employer–employee or some other hierarchical relation. However, if
within the hierarchical relations, information gaps between the parties or the use of
power made one of the parties lose its ability to commit an aware and willed
offense, the act could not be considered as joint-perpetration. The party that lost
the ability to commit an aware and willed offense was considered an “innocent
agent” who functions as a mere instrument in the hands of the other party.
The innocent agent was not criminally liable. The offense was considered “perpetration-through-another,” and the other party had full criminal liability for
77 Boson v. Sandford, (1690) 2 Salkeld 440, 91 Eng. Rep. 382:
The owners are liable in respect of the freight, and as employing the master; for whoever employs another is answerable for him, and undertakes for his care to all that make use of him;
Turberwill v. Stamp, (1697) Skinner 681, 90 Eng. Rep. 303; Middleton v. Fowler, (1699) 1 Salkeld 282, 91 Eng. Rep. 247; Jones v. Hart, (1699) 2 Salkeld 441, 91 Eng. Rep. 382; Hern v. Nichols, (1708) 1 Salkeld 289, 91 Eng. Rep. 256.
78 Sayre, supra note 72, at pp. 693–694; WILLIAM PALEY, A TREATISE ON THE LAW OF PRINCIPAL AND AGENT (2nd ed., 1847); Huggins, (1730) 2 Strange 882, 93 Eng. Rep. 915; Holbrook, (1878) 4 Q.B.D. 42; Chisholm v. Doulton, (1889) 22 Q.B.D. 736; Hardcastle v. Bielby, [1892] 1 Q.B. 709.
the actions of the innocent agent.79 This was the basis for the emergence of
perpetration-through-another from vicarious liability, and it was also the second
type of criminal liability derived from vicarious liability. The third type was the
core of the original vicarious liability. In most modern legal systems, this type is
embodied in specific offenses and not in the general formation of criminal liability.
Since the emergence of the modern law of complicity, the original vicarious
liability is no longer considered a legitimate form of criminal liability.
Since the end of the nineteenth century and the beginning of the twentieth century, the concept of the innocent agent has been widened to include parties that have no hierarchical relations between them. Whenever a party acts without awareness of its actions or without will, it is considered an innocent agent. The acts of the innocent agent could be the result of another party’s initiative (e.g., using the innocent agent through threats, coercion, misleading, lies, etc.) or of another party’s abuse of an existing factual situation that eliminates the awareness or will of the innocent agent (e.g., abuse of a factual mistake, insanity, intoxication, minority, etc.).
During the twentieth century the concept of perpetration-through-another has also been applied to “semi-innocent agents”, typically a negligent party that is not fully aware of the factual situation although any reasonable person would have been aware of it under the same circumstances. Most modern legal systems accept the semi-innocent agent as part of perpetration-through-another, so that the other party is criminally liable for the commission of the offense, and the semi-innocent agent is criminally liable for negligence.
If the legal system contains an appropriate offense of negligence (i.e., one with the same factual element requirement, but a mental element of negligence instead of awareness, knowledge, or intent), the semi-innocent agent is criminally liable for that offense. If no such offense exists, no criminal liability is imposed, although the other party remains criminally liable for the original offense. For the criminal liability of perpetration-through-another, the factual element may be fulfilled through the innocent agent, but the mental element requirement must be fulfilled actually and subjectively by the perpetrator-through-another himself, including as to the instrumental use of the innocent agent.80
Accordingly, the question is: if an artificial intelligence technology is used by another entity (a human, a corporation, or another artificial intelligence technology) as a mere instrument for the commission of an offense, how should the criminal liability for the commission of the offense be divided between them? Perpetration-through-another does not consider the artificial intelligence technology which physically committed the offense as possessing any human attributes. The artificial intelligence technology is considered an innocent agent. However, one cannot ignore an
79 Glanville Williams, Innocent Agency and Causation, 3 CRIM. L. F. 289 (1992); Peter Alldridge, The Doctrine of Innocent Agency, 2 CRIM. L. F. 45 (1990).
80 State v. Silva-Baltazar, 125 Wash.2d 472, 886 P.2d 138 (1994); GLANVILLE WILLIAMS, CRIMINAL LAW: THE GENERAL PART 395 (2nd ed., 1961).
81 Maxey v. United States, 30 App. D.C. 63, 80 (App. D.C. 1907).
82 Johnson v. State, 142 Ala. 70, 71 (1904).
83 United States v. Bryan, 483 F.2d 88, 92 (3d Cir. 1973).
84 Maxey, 30 App. D.C. at 80 (App. D.C. 1907); Commonwealth v. Hill, 11 Mass. 136 (1814); Michael, (1840) 2 Mood. 120, 169 Eng. Rep. 48.
85 Johnson v. State, 38 So. 182, 183 (Ala. 1904); People v. Monks, 24 P.2d 508, 511 (Cal. Dist. Ct. App. 1933).
86 United States v. Bryan, 483 F.2d 88, 92 (3d Cir. 1973); Boushea v. United States, 173 F.2d 131, 134 (8th Cir. 1949); People v. Mutchler, 140 N.E. 820, 823 (Ill. 1923); State v. Runkles, 605 A.2d 111, 121 (Md. 1992); Parnell v. State, 912 S.W.2d 422, 424 (Ark. 1996); State v. Thomas, 619 S.W.2d 513, 514 (Tenn. 1981).
87 Dusenbery v. Commonwealth, 772 263 S.E.2d 392 (Va. 1980).
88 United States v. Tobon-Builes, 706 F.2d 1092, 1101 (11th Cir. 1983); United States v. Ruffin, 613 F.2d 408, 411 (2d Cir. 1979).
technology. The robot identifies the specific user as the master, and the master orders the robot to assault any invader of the house. The robot executes the order exactly as given. This is no different from a person who orders his dog to attack any trespasser. The robot committed the assault, but the user is deemed the perpetrator.
In both scenarios, the actual offense was physically committed by the artificial intelligence technology. The programmer or the user did not perform any action conforming to the definition of a specific offense; therefore, they do not meet the factual element requirement of the specific offense. Perpetration-through-another liability considers the physical actions committed by the artificial intelligence technology as if they had been committed by the programmer, the user, or any other person instrumentally using the artificial intelligence technology.
The legal basis for this criminal liability is the instrumental use of the artificial
intelligence technology as an innocent agent.89 No mental attribute, required for the
imposition of criminal liability, is attributed to the artificial intelligence technol-
ogy.90 When programmers or users use an artificial intelligence technology instru-
mentally, the commission of an offense by the artificial intelligence technology is
attributed to them. The mental element required in the specific offense already
exists in their minds. The programmer had criminal intent when he ordered the
commission of the arson, and the user had criminal intent when he ordered the
commission of the assault, even though these offenses were physically committed
through a robot, an artificial intelligence technology.
When an end-user makes instrumental use of an innocent agent to commit an
offense, the end-user is deemed the actual perpetrator of that very offense.
Perpetration-through-another does not attribute any mental capability, or any
human mental capability, to the artificial intelligence technology. Accordingly,
there is no legal difference between an artificial intelligence technology and a
screwdriver or an animal, both instrumentally used by the actual perpetrator.
When a burglar uses a screwdriver in order to open up a window, he instrumentally
uses the screwdriver, and the screwdriver is not criminally liable. The screwdriver’s
“action” is, in fact, the burglar’s. This is the same legal situation when using an
animal instrumentally. An assault committed by a dog by order of its master is, in
fact, an assault committed by the master.
This kind of criminal liability might be suitable for two types of scenarios. The first scenario is using an artificial intelligence technology, even a strong artificial intelligence technology, to commit an offense without using its advanced capabilities. The second scenario is using a weak version of an artificial intelligence technology, one which lacks the advanced capabilities of modern artificial
89 See Solum, supra note 60, at p. 1237.
90 The artificial intelligence technology is used as an instrument and not as a participant, although it uses its features of processing information. See, e.g., George R. Cross & Cary G. Debessonet, An Artificial Intelligence Application in the Law: CCLIPS, A Computer Program that Processes Legal Information, 1 HIGH TECH. L.J. 329 (1986).
The first type of criminal liability presented above treated the artificial intelligence technology as the perpetrator of the offense.95 The second treated the artificial intelligence technology as a mere instrument in the hands of the legally-considered perpetrator.96 The second type of liability, however, is not the only possible way to describe the
91 Andrew J. Wu, From Video Games to Artificial Intelligence: Assigning Copyright Ownership to Works Generated by Increasingly Sophisticated Computer Programs, 25 AIPLA Q.J. 131 (1997); Timothy L. Butler, Can a Computer Be an Author – Copyright Aspects of Artificial Intelligence, 4 COMM. ENT. L.S. 707 (1982).
92 NICOLA LACEY AND CELIA WELLS, RECONSTRUCTING CRIMINAL LAW – CRITICAL PERSPECTIVES ON CRIME AND THE CRIMINAL PROCESS 53 (2d ed. 1998).
93 People v. Monks, 133 Cal. App. 440, 446 (Cal. Dist. Ct. App. 1933).
94 See Solum, supra note 60, at pp. 1276–1279.
95 Above at Sect. 4.2.4.
96 Above at Sect. 4.2.5.
legal relations between humans and artificial intelligence technology concerning the commission of an offense. The second type dealt with an artificial intelligence technology that adhered to its instructions; but what if the artificial intelligence technology, which was not programmed to commit the offense, calculates a decision to act, and the act constitutes an offense?
The question here concerns the human liability rather than the criminal liability of the artificial intelligence technology. For instance, the programmer of a sophisticated artificial intelligence technology designs it not to commit certain offenses. At the beginning of its activation, the artificial intelligence system commits no offenses. Over time, its inductive machine learning widens, and new paths of activity open up. At some point, an offense is committed.
In another, slightly different instance, the programmer designs the artificial intelligence system to commit one certain offense. As expected, the offense is committed through the artificial intelligence system. However, the artificial intelligence system deviates from the original plan of the programmer and continues its delinquent activity. The deviation might be quantitative (more offenses of the same kind), qualitative (offenses of different kinds), or both.
Had the programmer programmed it from the beginning to commit the additional offenses, this would have been considered perpetration-through-another at most. However, the programmer did not do so. If the artificial intelligence system consolidated both the factual and mental elements of the additional offenses, the artificial intelligence system is criminally liable through the first type of liability. The question here, however, concerns the criminal liability of the programmer. This is the main issue of the third type of liability discussed below. The most appropriate criminal liability in such cases is probable consequence liability.
In origin, probable consequence liability in criminal law relates to the criminal liability of parties to criminal offenses that have actually been committed but were not part of the original criminal plan. For example, A and B plan to commit a bank robbery. According to the plan, A’s role is to break into the safe and B’s role is to threaten the guard with a loaded gun. During the robbery the guard resists and B shoots him to death. The killing of the guard was not part of the original criminal plan. When the guard was shot, A was not there, did not know about it, did not agree to it, and did not commit it.
The legal question in the above example concerns A's criminal liability for homicide, in addition to his certain criminal liability for robbery. A satisfies neither the factual nor the mental element of homicide, since he neither physically committed it nor was aware of it. The homicide was not part of their criminal plan. The question may also be expanded to inciters and accessories of the robbery, if any. In general, the question of probable consequence liability refers to the criminal liability of one person for unplanned offenses that were committed by another person. Before applying probable consequence liability to human–artificial intelligence offenses, its features should be explored.
There are two opposite extreme approaches to this general question. The first
calls for imposition of full criminal liability upon all parties. The other calls for
114 4 Positive Fault Element Involving Artificial Intelligence Systems
broad exemption from criminal liability for any party that does not meet the factual
and mental element requirements of the unplanned offense. The first is considered
problematic for over-criminalization, whereas the second is considered problematic
for under-criminalization. Consequently, moderate approaches were developed and
embraced.
The first extreme approach does not consider at all the factual and mental
elements of the unplanned offense. This approach originates in Roman civil law,
which has been adapted to criminal cases by several legal systems. According to
this approach, any involvement in the delinquent event is considered to include
criminal liability for any further delinquent event derived from it (versanti in re
illicita imputantur omnia quae sequuntur ex delicto).97 This extreme approach
requires neither factual nor mental elements for the unplanned offense from the
other parties, besides the party who actually committed the offense and possessed
both factual and mental elements.
According to this extreme approach, the criminal liability for the unplanned
offense is an automatic derivative. The basic rationale of this approach is deterrence
of potential offenders from participating in future criminal enterprises by widening
the criminal liability not only to include the planned offenses but the unplanned
ones as well. The potential party must realize that his personal criminal liability
may not be restricted to specific types of offenses, and that he may be criminally
liable for all expected and unexpected developments that are derived directly or
indirectly from his conduct. Potential parties are expected to be deterred and avoid
involvement in delinquent acts.
This approach does not distinguish between various forms of involvement in the
delinquent event. The criminal liability for the unplanned offense is imposed
regardless of the role of the offender in the commission of the planned offense as
perpetrator, inciter, or accessory. The criminal liability imposed for the unplanned
offense is not dependent on the fulfillment of factual and mental element
requirements by the parties. If the criminal liability for the unplanned offense is
imposed on all parties of the original enterprise, including those who could have no
control over the commission of the unplanned offense, the deterrent value of this
approach is extreme.
Prospectively, this approach educates people to keep away from involvement in delinquent events, regardless of the specific role they may potentially play in the commission of the offense. Any deviation from the criminal plan, even if not under the direct control of the party, is a basis for criminal liability for all persons involved, as if the offense had been fully perpetrated by all parties. The effect of this extreme approach can be broad and encompassing. Parties to another (third) offense, different from the unplanned offense, who were not direct parties to the unplanned offense, may be criminally liable for the unplanned offense as well if there is the slightest connection between the offenses.
97 Digesta, 48.19.38.5; Codex Justinianus, 9.12.6; REINHARD ZIMMERMANN, THE LAW OF OBLIGATIONS – ROMAN FOUNDATIONS OF THE CIVILIAN TRADITION 197 (1996).
4.2 General Intent and Artificial Intelligence Systems 115
The criminal liability for the unplanned offense is uniform for all parties and requires no factual and mental elements. Most western legal systems consider such a deterrent approach too extreme and have therefore rejected it.98 The other extreme approach is the exact opposite of the former and focuses on the factual and mental elements of the unplanned offense. Accordingly, to impose criminal liability for the unplanned offense, it is necessary to examine whether both the factual and mental element requirements are met by each party. Only if both requirements of the unplanned offense are met by the specific party is it legitimate to impose criminal liability upon him. Naturally, since the offense was unplanned, it is most likely that none of the parties would be criminally liable for it, besides the party who actually committed it.
This extreme approach ignores the social endangerment inherent in the criminal enterprise. This social endangerment includes not only planned offenses but unplanned ones as well. Under this extreme approach, offenders have no incentive to restrict their involvement in the delinquent event. Prospectively, any party who wishes to escape criminal liability for the probable consequences of the criminal plan needs only to avoid participation in the factual aspect of any further offense.
Such offenders would tend to share and involve more parties in the commission of the offense in order to increase the chance of further offenses being committed, and therefore most modern legal systems prefer not to adopt this extreme approach either. Several moderate approaches have been developed to meet the difficulties raised by these extreme approaches. The core of these moderate approaches lies in the creation of probable consequence liability, i.e., criminal liability for the unplanned offense whose commission is the probable consequence of the planned original offense. "Probable consequence" means both mentally probable from the point of view of the party and a factual consequence derived from the planned offense.
Thus, probable consequence liability generally requires two major conditions to impose criminal liability for the unplanned offense: a factual condition, that the unplanned offense was committed incidentally to the planned offense and derived from it, and a mental condition, that the occurrence of the unplanned offense was probable from the point of view of the relevant party.
98 See, e.g., United States v. Greer, 467 F.2d 1064 (7th Cir.1972); People v. Cooper, 194 Ill.2d 419, 252 Ill.Dec. 458, 743 N.E.2d 32 (2000).
B shoots the guard to death, an act that is incidental to the robbery and to his role in it. Had it not been for the committed robbery, no homicide would have been committed. Therefore, the homicide is the factual consequence of the robbery, and it was committed incidentally to the robbery.
An incidental offense is one that has not been part of the criminal plan, and the parties did not conspire to commit it. If the offense is part of the criminal plan, probable consequence liability is irrelevant and the general rules of complicity apply to the parties to the offense. Unplanned offenses fall outside these rules and create an under-criminalization problem. Probable consequence liability is an attempt to address this difficulty by expanding criminal liability to unplanned offenses despite the fact that they are unplanned.
The unplanned offense may be a different offense from the planned one, but not necessarily. It may also be an additional, identical offense. For example, A and B conspire to rob a bank by breaking into one of its safes. A is to break into the safe and B to watch the guard. They execute their plan, but in addition B shoots and kills the guard, and A breaks into yet another safe. The unplanned homicide is a different offense from the planned robbery. The unplanned robbery is identical to the planned robbery.
Both unplanned offenses are incidental consequences of the planned offense,
although one is different from the planned offense and the other is identical with
it. The planned offense serves as the causal background for both unplanned
offenses, as they incidentally derive from it.99 The mental condition (“probable”)
requires that the occurrence of the unplanned offense be probable in the eyes of the
relevant party, meaning that it could have been foreseen and reasonably predicted.
Some legal systems prefer to examine the actual and subjective foreseeability
(the party has actually and subjectively foreseen the occurrence of the unplanned
offense), whereas others prefer to evaluate the ability to foresee through an objec-
tive standard of reasonability (the party has not actually foreseen the occurrence of
the unplanned offense, but any reasonable person in his state could have). Actual
foreseeability parallels the subjective general intent, whereas objective foreseeabil-
ity parallels the objective negligence.
For example, A and B conspire to rob a bank. A is to break into the safe and B to watch the guard. They execute the plan, and B shoots and kills the guard while A breaks into the safe. In some legal systems A is criminally liable for the killing only if he had actually foreseen the homicide, and in others, if a reasonable person could have foreseen the forthcoming homicide in these circumstances. Consequently, if the relevant accomplice did not actually foresee the unplanned offense, or if no reasonable person in the same condition could have foreseen it, he is not criminally liable for the unplanned offense.
99 State v. Lucas, 55 Iowa 321, 7 N.W. 583 (1880); Roy v. United States, 652 A.2d 1098 (D.C.App.1995); People v. Weiss, 256 App.Div. 162, 9 N.Y.S.2d 1 (1939); People v. Little, 41 Cal.App.2d 797, 107 P.2d 634 (1941).
100 State v. Linscott, 520 A.2d 1067 (Me.1987): "a rule allowing for a murder conviction under a theory of accomplice liability based upon an objective standard, despite the absence of evidence that the defendant possessed the culpable subjective mental state that constitutes an element of the crime of murder, does not represent a departure from prior Maine law" (emphasis in original).
101 People v. Prettyman, 14 Cal.4th 248, 58 Cal.Rptr.2d 827, 926 P.2d 1013 (1996); Chance v. State, 685 A.2d 351 (Del.1996); Ingram v. United States, 592 A.2d 992 (D.C.App.1991); Richardson v. State, 697 N.E.2d 462 (Ind.1998); Mitchell v. State, 114 Nev. 1417, 971 P.2d 813 (1998); State v. Carrasco, 122 N.M. 554, 928 P.2d 939 (1996); State v. Jackson, 137 Wash.2d 712, 976 P.2d 1229 (1999).
102 United States v. Powell, 929 F.2d 724 (D.C.Cir.1991).
103 State v. Kaiser, 260 Kan. 235, 918 P.2d 629 (1996); United States v. Andrews, 75 F.3d 552 (9th Cir.1996); State v. Goodall, 407 A.2d 268 (Me.1979). Compare: People v. Kessler, 57 Ill.2d 493, 315 N.E.2d 29 (1974).
104 People v. Cabaltero, 31 Cal.App.2d 52, 87 P.2d 364 (1939); People v. Michalow, 229 N.Y. 325, 128 N.E. 228 (1920).
105 Anderson, [1966] 2 Q.B. 110, [1966] 2 All E.R. 644, [1966] 2 W.L.R. 1195, 50 Cr. App. Rep. 216, 130 J.P. 318:
Put the principle of law to be invoked in this form: that where two persons embark on a joint enterprise, each is liable for the acts done in pursuance of that joint enterprise, that that includes liability for unusual consequences if they arise from the execution of the agreed joint enterprise.
106 English, [1999] A.C. 1, [1997] 4 All E.R. 545, [1997] 3 W.L.R. 959, [1998] 1 Cr. App. Rep. 261, [1998] Crim. L.R. 48, 162 J.P. 1; Webb, [2006] E.W.C.A. Crim. 2496, [2007] All E.R. (D) 406; O'Flaherty, [2004] E.W.C.A. Crim. 526, [2004] 2 Cr. App. Rep. 315.
107 BGH 24, 213; BGH 26, 176; BGH 26, 244.
108 Above at Sect. 4.2.5.
109 Above at Sect. 4.2.4.
would occur, the mental element of general intent is irrelevant for him. The programmer's criminal liability in this type of case is to be examined by standards of negligence, and his criminal liability would be for negligence offenses, at most.
110 See REUVEN YARON, THE LAWS OF ESHNUNNA 264 (2nd ed., 1988).
111 Collatio Mosaicarum et Romanarum Legum, 1.6.1-4, 1.11.3-4; Digesta, 48.8.1.3, 48.19.5.2; Ulpian, 7 de off. Proconsulis. Pauli Sententiae, 1 manual: "magna neglegentia culpa est; magna culpa dolus est".
112 HENRY DE BRACTON, DE LEGIBUS ET CONSUETUDINIBUS ANGLIAE 278 (1260; G. E. Woodbine ed., S. E. Thorne trans., 1968–1977).
113 Hull, (1664) Kel. 40, 84 Eng. Rep. 1072, 1073.
114 Williamson, (1807) 3 Car. & P. 635, 172 Eng. Rep. 579.
4.3 Negligence and Artificial Intelligence Systems 121
Deaths on the roads became more common, and manslaughter was not appropriate for these cases. A lower level of homicide was required, and negligent homicide was considered appropriate.115
When negligence came into common use, confusion began. Negligence was interpreted as requiring unreasonable conduct, and this caused confusion with the lower level of recklessness (rashness), which required taking an unreasonable risk. That confusion led to the creation of the unnecessary terms "gross negligence" and "wicked negligence".116 Many misleading rulings were given on that basis in English law,117 and the House of Lords did not make the distinction clear before 2003.118 American law developed negligence as a mental element in criminal law parallel to and inspired by the English common law.119
Negligence was accepted as an exception to general intent during the nineteenth century, but more precisely than in English law.120 The main distinction between recklessness and negligence has developed around the cognitive aspect of recklessness. Whereas recklessness requires the cognitive aspect of awareness, as part of the general intent requirement, negligence requires none.121 Both reckless and negligent offenders take unreasonable risks. However, the reckless offender is required to be aware of the factual element components, whereas the negligent offender is not.122 Negligence functions as an omission of awareness, and it creates a social standard of conduct.
The individual is required to take only reasonable risks.123 Reasonable risks are measured objectively through the perspective of an abstract reasonable person. The reasonable person is aware of his factual behavior and takes only reasonable risks.124 Of course, reasonability is determined by the court, and this is done
115 Knight, (1828) 1 L.C.C. 168, 168 Eng. Rep. 1000; Grout, (1834) 6 Car. & P. 629, 172 Eng. Rep. 1394; Dalloway, (1847) 2 Cox C.C. 273.
116 Finney, (1874) 12 Cox C.C. 625.
117 Bateman, [1925] All E.R. Rep. 45, 94 L.J.K.B. 791, 133 L.T. 730, 89 J.P. 162, 41 T.L.R. 557, 69 Sol. Jo. 622, 28 Cox. C.C. 33, 19 Cr. App. Rep. 8; Leach, [1937] 1 All E.R. 319; Caldwell, [1982] A.C. 341, [1981] 1 All E.R. 961, [1981] 2 W.L.R. 509, 73 Cr. App. Rep. 13, 145 J.P. 211.
118 G., [2003] U.K.H.L. 50, [2003] 4 All E.R. 765, [2004] 1 Cr. App. Rep. 237, 167 J.P. 621, [2004] Crim. L.R. 369, [2004] 1 A.C. 1034.
119 JEROME HALL, GENERAL PRINCIPLES OF CRIMINAL LAW 126 (2nd ed., 1960, 2005).
120 Commonwealth v. Thompson, 6 Mass. 134, 6 Tyng 134 (1809); United States v. Freeman, 25 Fed. Cas. 1208 (1827); Rice v. State, 8 Mo. 403 (1844); United States v. Warner, 28 Fed. Cas. 404, 6 W.L.J. 255, 4 McLean 463 (1848); Ann v. State, 30 Tenn. 159, 11 Hum. 159 (1850); State v. Schulz, 55 Ia. 628 (1881).
121 Lee v. State, 41 Tenn. 62, 1 Cold. 62 (1860); Chrystal v. Commonwealth, 72 Ky. 669, 9 Bush. 669 (1873).
122 Commonwealth v. Pierce, 138 Mass. 165 (1884); Abrams v. United States, 250 U.S. 616, 63 L.Ed. 1173, 40 S.Ct. 17 (1919).
123 Commonwealth v. Walensky, 316 Mass. 383, 55 N.E.2d 902 (1944).
124 See, e.g., People v. Haney, 30 N.Y.2d 328, 333 N.Y.S.2d 403, 284 N.E.2d 564 (1972); Leet v. State, 595 So.2d 959 (1991); Minor v. State, 326 Md. 436, 605 A.2d 138 (1992); United States v. Hanousek, 176 F.3d 1116 (9th Cir.1999).
125 See, e.g., State v. Foster, 91 Wash.2d 466, 589 P.2d 789 (1979); State v. Wilchinski, 242 Conn. 211, 700 A.2d 1 (1997); United States v. Dominguez-Ochoa, 386 F.3d 639 (2004).
126 See, e.g., Jerome Hall, Negligent Behaviour Should Be Excluded from Penal Liability, 63 COLUM. L. REV. 632 (1963); Robert P. Fine and Gary M. Cohen, Is Criminal Negligence a Defensible Basis for Criminal Liability?, 16 BUFF. L. REV. 749 (1966).
127 Above at Sect. 2.1.1.
our ancestors would still have been staring at the burning branch after it was struck by lightning, afraid of taking the risk of getting closer to it, grabbing it, and using it for our needs. People are constantly pushed by society to take risks, but reasonable risks. The question is how modern society can identify the unreasonable risk and distinguish it from reasonable risks, which are legitimate.
For instance, scientists propose an advanced device that would significantly ease our daily lives. It is comfortable, fast, elegant and accessible. However, using it may cause the death of about 30,000 people per year in the US alone. Would using this device be considered a reasonable risk or an unreasonable one? It may be thought that 30,000 victims each year is an enormous number, and that this makes the use of the device completely unreasonable. However, that device is commonly called a "car".128 Driving a car is not considered unreasonable in most countries of the world today, although in the late nineteenth century it was. So it is with trains, planes, ships, and many other of our daily instruments. The reasonability of a risk is relative by its nature, and it is determined relative to time, place, society, culture and other circumstances. Different courts in different countries determine different reasonable persons in this context of negligence. The reasonable person must be measured not only as a general abstract person; the standard should be adapted to the relevant circumstances of the specific offender.
For example, it is not enough to compare the medical malpractice of a physician to the behavior of an abstract reasonable person. This behavior should be compared to that of a reasonable physician of the same expertise, the same experience, the same circumstances of treatment (emergency treatment or other), the same resources, etc. This may focus the standard of reasonability and make it a more subjective standard rather than a purely objective one. This process in relation to artificial intelligence systems is discussed in detail below. Most negligence offenses are result-offenses, since society prefers to use negligence to protect against the factual harms involved in unreasonable risk-taking. However, negligence may be required for conduct-offenses as well.
The general structure of negligence includes no volitive aspect, but only a cognitive one. Since volition is supported by cognition, and since negligence does not require awareness, it cannot require components of volition. The cognitive aspect of negligence consists of an omission of awareness in relation to all factual element components. The negligence requirements in relation to conduct and circumstances are identical. Both require unawareness of the component (conduct/circumstances) in spite of the capability to form awareness, when a reasonable person could and should have been aware of that component.
The reasonability in these components of negligence is examined in relation to the capability and duty to form awareness, although no awareness has actually been formed by the offender. The negligence requirement in relation to results requires unawareness of the possibility of the results' occurrence in spite of the capability to form awareness, when a reasonable person could and should have been aware of that possibility.
128 For car accident statistics in the US see, e.g., http://www.cdc.gov/motorvehiclesafety/.
129 E.g., in France. See article 121-3 of the French penal code.
awareness process has existed, but the process has not been fully accomplished, the person is regarded as unaware of the relevant factual data. This is true for both human and artificial intelligence offenders.
However, for the unawareness to be considered an omission of awareness, and not mere unawareness, it should exist in spite of the capability to form awareness, when a reasonable person could and should have been aware. These are, in fact, two conditions:
The first condition deals with physical capability. If the offender lacks the capabilities of forming awareness, regardless of the offense, no criminal liability for negligence may be imposed. It would be no different from punishing the blind for not seeing. Consequently, negligent offenders are only those who possess the capabilities of forming awareness. That is true for both human and artificial intelligence offenders. Thus, for the imposition of criminal liability for a negligence offense upon an artificial intelligence technology, the artificial intelligence system must possess the capabilities of forming awareness. An artificial intelligence system which lacks these capabilities cannot be considered an offender of negligence offenses in this manner. Of course, it cannot be considered an offender of general intent offenses either.
These capabilities are proven through the general features of the artificial intelligence system, regardless of the specific case. At this point, it is known that the offender was not aware of the relevant factual data, and it is also known that he has the capabilities of being aware. For that to become negligence, it should be proven that a reasonable person could and should have been aware of the factual data. The "reasonable person" is a mental standard to be compared to. Although in some other legal spheres the reasonable person refers to a standard higher than the average person, in criminal law it refers to the average person.130
The reasonable person is filled with different content by different societies and cultures at different times and places. The reasonable person is supposed to reflect the existing relevant situation in the specific society and not to be used by courts to change the current situation. This standard relates to the cognitive processes that should have occurred.
The reasonable person is measured through two cumulative paths of cognitive activity:
130 In Hall v. Brooklands Auto Racing Club, [1932] All E.R. 208, [1933] 1 K.B. 205, 101 L.J.K.B. 679, 147 L.T. 404, 48 T.L.R. 546 the "reasonable person" has been described as:
The person concerned is sometimes described as 'the man in the street', or 'the man in the Clapham omnibus', or, as I recently read in an American author, 'the man who takes the magazines at home, and in the evening pushes the lawn mower in his shirt sleeves'.
131 State v. Bunkley, 202 Conn. 629, 522 A.2d 795 (1987); State v. Evans, 134 N.H. 378, 594 A.2d 154 (1991).
possibility of the results occurring should be an unreasonable risk, i.e., the occurrence of the results is a risk, and taking this risk is unreasonable in this situation.132
Reasonable and unreasonable risks are measured in the same way as reasonable and unreasonable persons, as described above. For a risk to be considered reasonable, the individual should take into consideration all relevant considerations and assign them their proper weight. If taking that risk is among the courses of action that follow accordingly, the risk is considered reasonable. If not, it is unreasonable. For the fulfillment of negligence by an artificial intelligence system, it should make unreasonable decisions. The ultimate question here is whether a machine can be reasonable, or rather whether a machine can be unreasonable.
Analytically speaking, the reasonability of a machine is no different from that of a human. Both should take into consideration the relevant considerations and assign them their proper weight. This can easily be a matter of calculation. The relevant considerations are no more than factors in an equation, and their proper weight is the combination of these factors. The equation may be constant if programmed to be so. However, the machine learning feature changes that. Machine learning is a process of generalization through induction from many specific cases. The machine learning feature enables the artificial intelligence system to change the equation from time to time.
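The weighted-factor "equation" and its rephrasing through induction can be illustrated with a minimal sketch. All factor names, weights, and the update rule below are invented assumptions for illustration only; real artificial intelligence systems use far more elaborate models.

```python
# Hypothetical sketch of the "equation" described above. A decision is a
# weighted sum of factors; machine learning "rephrases the equation" by
# adjusting the weights after each analyzed case. All names and numbers
# are illustrative assumptions.

def decide(factors, weights, threshold=0.2):
    """Weighted-factor equation: take the risk only if the score clears the threshold."""
    score = sum(weights[name] * value for name, value in factors.items())
    return score >= threshold

def learn(weights, factors, outcome_was_correct, rate=0.1):
    """Naive induction step: shift each weight toward or away from the
    analyzed case's factors, depending on whether the decision proved
    correct ex post."""
    sign = 1.0 if outcome_was_correct else -1.0
    return {name: w + sign * rate * factors.get(name, 0.0)
            for name, w in weights.items()}

# The initial equation, as fixed by the human programmer.
weights = {"benefit": 0.6, "risk_of_harm": -0.8}
case = {"benefit": 0.9, "risk_of_harm": 0.4}
print(decide(case, weights))   # True: under the original equation the risk is taken

# An ex post error triggers the induction step and rephrases the equation.
weights = learn(weights, case, outcome_was_correct=False)
print(decide(case, weights))   # False: the same case is now decided differently
```

The point of the sketch is only that the "equation" is not fixed: the same factual case may be decided differently once the learning process has rephrased it.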
In fact, effective machine learning should cause changes in the equation almost every time a specific case is analyzed. This is what happens to our image of the world as our life-experience becomes richer and wider. If the equation remains constant, the machine learning is considered absolutely ineffective. Expert systems that lack machine learning are no different from a human expert who insists on not being updated while continuing to practice his expertise. Machine learning is essential for the artificial intelligence system to keep developing and not be blocked in stagnation.
When the artificial intelligence system is activated for the first time, the equation and its factors are programmed by human programmers. The human programmer determines what the reasonable course of conduct in the relevant cases is. Afterwards, having analyzed a few cases, the system identifies exceptions, wider or narrower definitions, newer connections between existing factors, new factors, etc. The artificial intelligence system's way of generalizing the knowledge absorbed from the particular cases is to rephrase the relevant equation. The term "equation" is used here to describe the relevant algorithm, but, of course, it is not necessarily an equation in its mathematical sense.
Changing the relevant equation by rephrasing it creates the possibility of making different decisions than those made in the past. This process of induction is at the core of machine learning, and the changes in the equation form, in fact, a
132 People v. Decina, 2 N.Y.2d 133, 157 N.Y.S.2d 558, 138 N.E.2d 799 (1956); Government of the Virgin Islands v. Smith, 278 F.2d 169 (3rd Cir.1960); People v. Howk, 56 Cal.2d 687, 16 Cal.Rptr. 370, 365 P.2d 426 (1961); State v. Torres, 495 N.W.2d 678 (1993).
different sphere of right decisions. For instance, a medical expert artificial intelligence system is given lists of symptoms of the common cold and influenza. When activated for the first time, it diagnoses them according to the given symptoms. However, after some more cases, the system learns to notice more symptoms as crucial for distinguishing between the common cold and influenza, such as the exact temperature of the patient.
If the system is required to recommend medical treatments, different treatments are recommended for different diagnoses. Sometimes the expert system is not "sure": the symptoms may match two different diseases. The system can assess the probabilities according to the factors measured and analyzed.
For instance, the expert system may determine that there is a probability of 38 % that the patient has a common cold, and 62 % that it is influenza. Processing these probabilities may be the cause of the particular negligence in artificial intelligence systems. Mistakes in the conclusions may occur in both sure and unsure conclusions. The system may be sure of a mistaken conclusion, and it can also assess probabilities mistakenly. The mistakes may be caused by wrong changes to the equation, wrong factors being considered, wrongly ignoring certain factors, or wrong weight assigned to certain factors.
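How a single wrong weight can shift such probabilities may be sketched as follows. The diseases, symptoms, and weights are invented for illustration and do not reproduce any real diagnostic system.

```python
# Illustrative sketch of the probability assessment described above: each
# disease is scored by its weighted symptoms, and the scores are normalized
# into probabilities. All weights are invented assumptions.

def assess(symptoms, disease_weights):
    """Return a probability per disease by normalizing weighted symptom scores."""
    scores = {disease: sum(weights.get(s, 0.0) for s in symptoms)
              for disease, weights in disease_weights.items()}
    total = sum(scores.values())
    return {disease: score / total for disease, score in scores.items()}

disease_weights = {
    "common cold": {"cough": 1.0, "runny nose": 2.0, "fever": 0.5},
    "influenza":   {"cough": 1.0, "runny nose": 0.5, "fever": 3.0},
}
patient = ["cough", "fever"]
print(assess(patient, disease_weights))   # influenza clearly more probable

# A wrong weight for "fever", an ex post error of the machine learning
# process, changes the assessment of the very same patient:
disease_weights["influenza"]["fever"] = 0.5
print(assess(patient, disease_weights))   # now an even 50/50 split
```

The sketch illustrates the claim in the text: a wrong weight assigned to a single factor is enough to produce a mistaken probability assessment, even though the factual inputs are unchanged.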
These mistakes are a byproduct of errors in the machine learning process. More precisely, they are ex post errors, i.e., errors which are recognized as errors only after the decision has been made and according to the consequences of the decision. Humans tend to learn empirically through trial and error. Analytically, machine and human errors, in this context, are of the same type. Understanding the error, its causes, and the ways to avoid it is part of the learning process, both human and machine. On this ground, it should be asked what is to be considered a reasonable decision in this context.
The major question is: given the starting point of the system in relation to its basic factual data, and given its specific experience through machine learning, could a reasonable person have been aware of the relevant factual data? The derivative question would be, of course, who this reasonable person is: human or machine. If the general concept of easing objectivity by adding some subjective characteristics to it is accepted, the reasonable person should have attributes similar to those of the offender. Only then may the reasonability of the offender's decision be measured, and no injustice done. Therefore, if the offender has the capability of machine learning, so should the reasonable person under that concept. Thus, the reasonable person in the context of measuring the reasonability of an artificial intelligence system's decisions would be the reasonable artificial intelligence system of the same type. That might seem a convenient trick for the human programmers, operators and users to escape criminal liability and leave it to the mistaken machine. What would be easier for a medical staff than to use an expert artificial intelligence system and follow its recommendations, so that if the system is wrong, the system is the only one criminally liable? However, the legal situation is not that simple.
The very decisions of placing the specific artificial intelligence system in its position, using it, following its recommendations, etc. are subject to negligence offenses as well. The artificial intelligence system has the capability of fulfilling the
negligence requirements, but that is not an exemption from criminal liability for the other persons involved in the particular situation. The very decision to use the artificial intelligence system may, by itself, be subject to criminal liability. For example, if the decision has been made with awareness of the relevant mistakes, and these mistakes caused death, the human decision may lead to a murder charge. However, if there was no awareness, but a reasonable person in this situation could and should have been aware, it may lead to a charge of negligent homicide.
Analysis of the reasonable machine relates, in fact, to the feature of machine learning. The imposition of criminal liability in negligence offenses must relate to and analyze the machine learning process that produced the mistaken decision. Access to this process is based on the records of the artificial intelligence system itself. However, the reasonability of the decision-making process within the machine learning may be established through expert opinions. This is the very same way negligence is proven in courts in relation to human offenders. It is not rare to prove or refute a human offender's negligence through expert opinions.
For instance, when the medical expert artificial intelligence system produces probabilities of 38 % common cold and 62 % influenza, a medical expert may explain to the court why these probabilities are reasonable or unreasonable under the specific conditions of the case, and a computer scientist may explain to the court the process of producing these probabilities based on the artificial intelligence system's particular machine learning process and existing database.
Accordingly, the court has to decide three questions:
(a) Was the artificial intelligence system unaware of the factual component?
(b) Does the artificial intelligence system have the general capability of
consolidating awareness of the factual component?
(c) Could a reasonable person have been aware of the factual component?
If the answer is positive for all three questions, and that is proven beyond any
reasonable doubt, the artificial intelligence system has fulfilled the requirements of
the particular negligence offense. Artificial intelligence systems which are capable
of forming awareness for general intent offenses, as discussed above,133 have
neither a technological nor a legal problem forming negligence for negligence
offenses, for negligence is a lower level of mental element than general intent.
Thus, negligence is relevant to artificial intelligence technology and its proof in
court is possible.
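As a sketch, the court's three-question test above can be read as a simple conjunctive decision procedure, under the assumption that each question has been reduced to a yes/no finding. The function and parameter names below are illustrative assumptions, not terms taken from the text.

```python
# A minimal sketch of the three-question negligence test described above.
# All names here are illustrative, not the book's terminology.

def fulfills_negligence(was_aware: bool,
                        capable_of_awareness: bool,
                        reasonable_person_aware: bool) -> bool:
    """Return True only if all three questions are answered positively:
    (a) the system was unaware of the factual component,
    (b) it had the general capability of consolidating such awareness,
    (c) a reasonable person could have been aware of it."""
    return (not was_aware
            and capable_of_awareness
            and reasonable_person_aware)

# Unaware, capable, and a reasonable person would have noticed:
print(fulfills_negligence(False, True, True))  # True
# Actually aware: this is general intent territory, not negligence:
print(fulfills_negligence(True, True, True))   # False
```

The conjunction mirrors the text's requirement that all three answers be positive, and proven beyond any reasonable doubt, before the negligence requirements are considered fulfilled.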
Accordingly, the question is who is to be criminally liable for the commission of
this kind of offense. In general, imposition of criminal liability for negligence
offenses requires the fulfillment of both the factual and mental elements of these
offenses. Humans are involved in the creation of artificial intelligence technology
and systems, their design, programming and operation. Consequently, when the
factual and mental elements of the offense are fulfilled by artificial intelligence
systems, criminal liability may be imposed upon them.
133. Above at Sect. 4.2.4.
130 4 Positive Fault Element Involving Artificial Intelligence Systems
When negligence is examined, if the offender fulfils both the factual and mental
element requirements of the particular offense, criminal liability is to be imposed,
the same way as with general intent. In addition, in the very same way as in general
intent offenses, the court is not supposed to check whether the offender was
“evil”, “immoral” etc. That is true for all types of offenders: humans, corporations
and artificial intelligence technology. Therefore, the same justifications for the
imposition of criminal liability upon artificial intelligence technology in general
intent offenses are relevant here as well.
As long as the narrow fulfillment of these requirements exists, criminal
liability should be imposed. However, negligence offenses also differ from
general intent offenses in their social purpose. The relevant question is
whether this different social purpose is relevant not only for humans and
corporations, but for artificial intelligence technology as well. From the outset,
negligence offenses were not designed to deal with “evil” persons, but with
individuals who made mistakes in their discretion. Therefore, the debate upon
evil in criminal law is not relevant for negligence offenses, as it may be
for general intent offenses.
The criminal law in this context functions as an educator, designing the
outlines of individual discretion. The boundaries and borderlines of that discretion
are drawn up by negligence offenses. Sometimes a person may exercise discretion
in a wrong way. In most cases that does not contradict the norms of criminal law.
For instance, people may choose the wrong husband or wife, since they may have
exercised their subjective individual discretion in a wrong way, but that does not
concern the criminal law. So is the situation with exercising our inner discretion in
a wrong way when choosing cars, employers, houses and even faith.
However, in some cases it does contradict a norm of criminal law, for
instance, when a person's wrong inner discretion leads to someone's death (negligent
homicide).134 As long as our wrong discretion does not contradict the
criminal law, society expects us to learn our lesson on our own. The next time
a person chooses a car, house, employer, etc., he will be much more careful in
examining the details relevant to the purchase. This is how human life experience
is gained in general. However, society takes the risk that its members
will not learn the lesson, and it still does not intervene through criminal law.
At some point, however, when criminal offenses are committed, society
does not take the risk of letting the individual learn the lesson as an autodidact. The
134. See, e.g., Rollins v. State, 2009 Ark. 484, 347 S.W.3d 20 (2009); People v. Larkins, 2010 Mich. App. Lexis 1891 (2010); Driver v. State, 2011 Tex. Crim. App. Lexis 4413 (2011).
social harm in these cases is too grave to be left under such risk. In this type of case
society intervenes, and that is done through negligence offenses. The purpose is
to make it more certain that the specific individual will learn the relevant
lesson. Prospectively, it is assumed that after the lesson is taught, the probability of
re-commission of the offense will be much lower.
Thus, for instance, society educates its physicians to be much more careful in
performing surgeries, its employers in protecting their employees' lives, its construction
companies in using more secure constructions, its factories in creating less pollution,
etc. Human and corporate offenders are supposed to learn their lesson
through the criminal law. Would this be relevant for artificial intelligence
technology as well? The answer is positive. As for the educative purpose of criminal
negligence, it is true that there is not much use or utility in the imposition of criminal
liability unless the offender has the ability to learn.
If society wants to make the offender learn the lesson from his mistakes, it must
assume that the offender has the capability to learn. If such capabilities exist
and are exercised, criminal liability for negligence offenses is necessary. However, if
no such capabilities are exercised, it is completely unnecessary, for no
prospective value is expected here: using and not using criminal liability for
negligence offenses would lead to the same results. For artificial intelligence
systems which are equipped with the relevant capabilities of machine learning,
criminal liability for negligence offenses is no less than necessary.
In the very same way as for humans, negligence offenses may draw up the
boundaries and borderlines of discretion for artificial intelligence systems.
Humans, corporations and artificial intelligence systems are all supposed to learn from
their mistakes and improve their decisions prospectively. When the mistakes
fall within the scope of the criminal law, the criminal law intervenes in shaping the
decision-maker's discretion. For the artificial intelligence system, criminal liability
for negligence offenses is a chance to reconsider the decision-making process due
to the external limitations dictated by the criminal law.
If society has learned through the years that the human process of decision-making
requires criminal liability for negligence in order to be improved, this logic is no
less relevant for artificial intelligence systems using machine learning methods.
Society may say that artificial intelligence systems can simply be reprogrammed, but
then their precious experience, gained through machine learning, would be lost. Society
may say that artificial intelligence systems are capable of learning their boundaries
and fixing their discretion on their own, but the very same may be said of humans,
and still society imposes criminal liability upon humans for negligence
offenses.
Consequently, if artificial intelligence technology has the required capabilities of
fulfilling both the factual and mental elements of criminal liability for negligence
offenses, and if the rationale for the imposition of criminal liability for these offenses
is relevant for both humans and artificial intelligence systems, there is no reason to
avoid criminal liability in these cases. However, this is not the only way artificial
intelligence may be involved in criminal liability for negligence offenses.
As described above in the context of general intent, the most common way to deal
with the instrumental use of individuals for the commission of offenses is the general
form of perpetration-through-another.135 In order to impose criminal liability for
perpetration-through-another of a particular offense, it is necessary to prove
awareness of that instrumental use. Consequently, perpetration-through-another is
applicable only in general intent offenses. In most cases, the other person being
instrumentally used by the perpetrator is considered an “innocent agent”, and no
criminal liability is imposed upon him.
The analysis of perpetration-through-another in the context of general intent
offenses has been discussed above. However, the person who is instrumentally
used may also be considered a “semi-innocent agent”, who is criminally liable for
negligence, although the perpetrator is criminally liable for a general intent offense.
This is the case where negligence may be relevant for perpetration-through-another,
and it completes the discussion of it.
For example, a nurse in a surgery room learns that a person who
attacked her in the past is about to undergo surgery. She decides that he deserves to
die. She infects the surgical instruments with lethal bacteria, and when the surgeon
comes and checks that the surgical instruments are sterilized, she tells him that
she sterilized them and that this is her responsibility. The surgery begins, with no one
aware of the infected instruments, and the patient is infected by the bacteria. A few
hours after the surgery ends, the patient dies of the infection.
Legal analysis of the case would find the nurse to be a perpetrator-through-another
of murder, as she instrumentally used the surgeon to commit the patient's
murder. The surgeon's criminal liability in this case depends on his mental
state. If he were an innocent agent, he would be exempt from criminal liability.
However, if the surgeon has a legal duty to make sure the instruments are
sterilized, he is not a completely innocent agent, since he failed to fulfill his
legal duties. On the other hand, he was not aware of the infection. This is the
case for negligence: when the agent is not aware of crucial elements of the offense,
but a reasonable person in his position could and should have been aware, this agent
is negligent. This is the “semi-innocent agent”.136
Thus, when one person instrumentally uses another person who is negligent
as to the commission of the offense, it is perpetration-through-another, but both
persons are criminally liable: the perpetrator for the general intent offense (e.g.,
murder) and the other person for the negligence offense (e.g., negligent homicide).
Since artificial intelligence systems have the capability of forming negligence as a
mental element, the question is whether they may function as semi-innocent agents.
The case for an artificial intelligence system semi-innocent agent is where the perpetrator
(human, corporation or artificial intelligence system) instrumentally uses an
135. Above at Sect. 4.2.5.
136. See, e.g., Peter Alldridge, The Doctrine of Innocent Agency, 2 CRIM. L. F. 45 (1990).
artificial intelligence system for the commission of the offense, but the artificial
intelligence system, although instrumentally used, was negligent as to the commission
of that very offense.
Only artificial intelligence systems which have the capability of fulfilling the
mental element requirement of negligence offenses may be considered, and function
as, semi-innocent agents. However, not in every case in which the artificial intelligence
system has the capability of negligence would it automatically function as a semi-innocent
agent. This capability is necessary for that function, but it is certainly not
sufficient.
The semi-innocent agent, whether human, corporation or machine, should be examined
ad hoc in the particular case. Only if the agent acted negligently towards the
commission of the offense may it be considered a semi-innocent agent. Thus, if the
instrumentally used artificial intelligence system did not consolidate awareness of
the relevant factual data, but it had the capability to do so and a reasonable person
could have consolidated such awareness, the artificial intelligence system is to be
considered a semi-innocent agent within the perpetration-through-it.
The perpetrator's criminal liability is not affected by the agent's criminal liability,
if any. The perpetrator-through-another's criminal liability is for the relevant
general intent offense, whether the instrumentally used artificial intelligence system
has no criminal liability (i.e., it is an innocent agent or lacks the relevant capabilities)
or has criminal liability for negligence (i.e., it is a semi-innocent agent). As a result,
using the legal construction of perpetration-through-another for the instrumental use
of artificial intelligence systems has the same consequences for the perpetrator as
for a general intent offender, regardless of the artificial intelligence system's criminal
liability.
The agent's criminal liability in these cases is not directly affected by the
perpetrator's criminal liability. If the artificial intelligence system was negligent
(i.e., it fulfilled both the factual and mental element requirements of the negligence
offense), criminal liability for negligence would be imposed upon it. Such a system
is also to be classified as a semi-innocent agent within the context of the particular
perpetration-through-another. If the artificial intelligence system was not negligent,
due to its incapability or for any other reason, no criminal liability is imposed
upon it. Such a system is also to be classified as an innocent agent within the context
of the particular perpetration-through-another. To make the picture clearer, if the
artificial intelligence system is neither an innocent agent nor a semi-innocent agent,
then it has fulfilled the requirements of the general intent offense in full.
This is no longer a case of perpetration-through-another, but of principal perpetration
by the artificial intelligence system. If the artificial intelligence system has the
capability to be criminally liable for general intent offenses as a sole offender, there
is nothing to prevent it from committing the offense jointly with others:
humans, corporations or other artificial intelligence systems. Complicity in which
artificial intelligence systems participate requires at least general intent, not
negligence, since it requires awareness of the very complicity and of the delinquent
association. This situation is not substantively different from the fulfillment of any
other general intent offense.
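The classification of the instrumentally used system described in the last paragraphs can be sketched as follows; this is an illustrative reading only, and every name in it is my own assumption rather than a term from the text.

```python
# Illustrative sketch: classifying the instrumentally used artificial
# intelligence system as innocent agent, semi-innocent agent, or principal
# perpetrator. The predicate names are assumptions, not the book's terms.

def classify_agent(capable_of_negligence: bool,
                   was_aware: bool,
                   reasonable_person_aware: bool) -> str:
    if was_aware:
        # Fulfilled the general intent offense in full: no longer an
        # instrument, but a (possibly joint) principal perpetrator.
        return "principal perpetrator"
    if capable_of_negligence and reasonable_person_aware:
        # Negligent: criminally liable for the negligence offense only.
        return "semi-innocent agent"
    # Not negligent (incapable, or no reasonable-person awareness):
    # no criminal liability for the instrument.
    return "innocent agent"
```

In every branch the perpetrator-through-another remains liable for the general intent offense; only the agent's own liability changes with its classification.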
137. Above at Sect. 4.2.6.
for imposition of criminal liability for that offense. However, in legal systems that
require subjective foreseeability, the programmer should be at least aware of the
possibility of the commission of the negligence offense by the artificial intelligence
system for imposition of criminal liability for that offense.
However, if the programmer had neither subjective nor objective foreseeability
of the commission of the offense, probable consequence liability would be
irrelevant. In this type of case no criminal liability would be imposed upon the
programmer, and the artificial intelligence system's criminal liability for the
negligence offense would not affect the programmer's liability.

4.4 Strict Liability and Artificial Intelligence Systems
In general, strict liability has been accepted as a form of mental element requirement
in criminal law as a development from absolute liability. Since the eighteenth
century it has been determined by English common law that some particular offenses
require neither general intent nor negligence. These particular offenses
were referred to as public welfare offenses.138 These offenses were inspired by tort
law, which accepted absolute liability as legitimate.
Consequently, these particular offenses were criminal offenses of absolute
liability, and the imposition of criminal liability for them required proof of the factual
element alone.139 These absolute liability offenses were considered exceptional, for
no mental element is required. In some cases parliament intervened and required a
mental element,140 and in some other cases court rulings added mental element
requirements.141 By the mid-nineteenth century English courts began to consider
efficiency considerations as part of criminal law in various contexts. That gave rise
to the development of convictions on the basis of public inconvenience.
138. Francis Bowes Sayre, Public Welfare Offenses, 33 COLUM. L. REV. 55, 56 (1933).
139. See, e.g., Nutt, (1728) 1 Barn. K.B. 306, 94 Eng. Rep. 208; Dodd, (1736) Sess. Cas. 135, 93 Eng. Rep. 136; Almon, (1770) 5 Burr. 2686, 98 Eng. Rep. 411; Walter, (1799) 3 Esp. 21, 170 Eng. Rep. 524.
140. See, e.g., 6 & 7 Vict. c.96.
141. Dixon, (1814) 3 M. & S. 11, 105 Eng. Rep. 516; Vantandillo, (1815) 4 M. & S. 73, 105 Eng. Rep. 762; Burnett, (1815) 4 M. & S. 272, 105 Eng. Rep. 835.
Offenders were indicted for particular offenses and were convicted although
no mental element was proven, due to the public inconvenience caused by the
commission of the offense.142 These convictions created, in fact, an upper threshold
of negligence, a kind of increased negligence. Accordingly, the individual must be
strict and make sure that no offense is committed. This standard of behavior is
higher than in negligence, which requires merely behaving reasonably.
These offenses require more than reasonability: one must make sure that no
offense is committed whatsoever. Such offenses clearly prefer the public
welfare over strict justice towards the potential offender. Since these offenses were
not considered grave and severe, they were widened “for the good of all”.143
This development was considered necessary due to the legal and social
developments of the first industrial revolution. For instance, the increasing number
of workers in the cities led employers to degrade the workers' social
conditions.
Parliament intervened through social welfare legislation, and the efficient
enforcement of this legislation was through absolute liability offenses.144 It was
insignificant whether the employer knew what the proper social conditions for the
workers were; he had to make sure that no violation of these conditions
occurred.145 In the twentieth century this type of criminal liability spread to
other spheres of law, including traffic law.146 American criminal law accepted
absolute liability as a basis for criminal liability in the mid-nineteenth century,147
while ignoring previous rulings that did not accept it.148
This acceptance was restricted to petty offenses, whose violation was
punished through fines, and not very severe fines. A similar acceptance occurred at
142. Woodrow, (1846) 15 M. & W. 404, 153 Eng. Rep. 907.
143. Stephens, [1866] 1 Q.B. 702; Fitzpatrick v. Kelly, [1873] 8 Q.B. 337; Dyke v. Gower, [1892] 1 Q.B. 220; Blaker v. Tillstone, [1894] 1 Q.B. 345; Spiers & Pond v. Bennett, [1896] 2 Q.B. 65; Hobbs v. Winchester Corporation, [1910] 2 K.B. 471; Provincial Motor Cab Company Ltd. v. Dunning, [1909] 2 K.B. 599, 602.
144. W. G. Carson, Some Sociological Aspects of Strict Liability and the Enforcement of Factory Legislation, 33 MOD. L. REV. 396 (1970); W. G. Carson, The Conventionalisation of Early Factory Crime, 7 INT'L J. OF SOCIOLOGY OF LAW 37 (1979).
145. AUSTIN TURK, CRIMINALITY AND LEGAL ORDER (1969).
146. NICOLA LACEY, CELIA WELLS AND OLIVER QUICK, RECONSTRUCTING CRIMINAL LAW 638–639 (3rd ed., 2003, 2006).
147. Barnes v. State, 19 Conn. 398 (1849); Commonwealth v. Boynton, 84 Mass. 160, 2 Allen 160 (1861); Commonwealth v. Goodman, 97 Mass. 117 (1867); Farmer v. People, 77 Ill. 322 (1875); State v. Sasse, 6 S.D. 212, 60 N.W. 853 (1894); State v. Cain, 9 W. Va. 559 (1874); Redmond v. State, 36 Ark. 58 (1880); State v. Clottu, 33 Ind. 409 (1870); State v. Lawrence, 97 N.C. 492, 2 S.E. 367 (1887).
148. Myers v. State, 1 Conn. 502 (1816); Birney v. State, 8 Ohio Rep. 230 (1837); Miller v. State, 3 Ohio St. Rep. 475 (1854); Hunter v. State, 30 Tenn. 160, 1 Head 160 (1858); Stein v. State, 37 Ala. 123 (1861).
the same time in the European Continental legal systems.149 Consequently, absolute
liability in criminal law became a global phenomenon. However, in the meanwhile
the fault element in criminal law became much more important due to internal
developments in criminal law, and general intent became the major and
dominant requirement for the mental element in criminal law.
Thus, the criminal law had to make changes in absolute liability for it
to meet modern understandings of fault. That was the trigger for moving
from absolute liability to strict liability. The core of the change lies in the move
from an absolute legal presumption (praesumptio juris et de jure) to a relative legal
presumption (praesumptio juris tantum), so that the offender has the opportunity to
refute the criminal liability. The presumption was a presumption of negligence, either
refutable or not.150 The move from absolute liability towards strict liability eased
the acceptance of presumed negligence as another, third, form of mental
element in criminal law.
Since that wide acceptance of strict liability around the world, legal systems have
justified it both from the perspective of fault in criminal law151 and constitutionally.
The European Court of Human Rights justified the use of strict liability in criminal
law in 1988.152 Accordingly, strict liability was considered as not contradicting the
presumption of innocence, protected by the 1950 European Human Rights
Convention,153 and that ruling has been embraced in Europe and Britain.154 The federal
Supreme Court of the United States has ruled consistently that strict liability does not
contradict the US constitution.155 So have the supreme courts of the states.156
149. John R. Spencer and Antje Pedain, Approaches to Strict and Constructive Liability in Continental Criminal Law, APPRAISING STRICT LIABILITY 237 (A. P. Simester ed., 2005).
150. Gammon (Hong Kong) Ltd. v. Attorney-General of Hong Kong, [1985] 1 A.C. 1, [1984] 2 All E.R. 503, [1984] 3 W.L.R. 437, 80 Cr. App. Rep. 194, 26 Build L.R. 159.
151. G., [2003] U.K.H.L. 50, [2003] 4 All E.R. 765, [2004] 1 Cr. App. Rep. 237, 167 J.P. 621, [2004] Crim. L.R. 369; Kumar, [2004] E.W.C.A. Crim. 3207, [2005] 1 Cr. App. Rep. 566, [2005] Crim. L.R. 470; Matudi, [2004] E.W.C.A. Crim. 697.
152. Salabiaku v. France, (1988) E.H.R.R. 379.
153. The 1950 European Human Rights Convention, sec. 6(2) provides: “Everyone charged with a criminal offence shall be presumed innocent until proved guilty according to law”.
154. G., [2008] U.K.H.L. 37, [2009] A.C. 92; Barnfather v. Islington London Borough Council, [2003] E.W.H.C. 418 (Admin), [2003] 1 W.L.R. 2318, [2003] E.L.R. 263; G. R. Sullivan, Strict Liability for Criminal Offences in England and Wales Following Incorporation into English Law of the European Convention on Human Rights, APPRAISING STRICT LIABILITY 195 (A. P. Simester ed., 2005).
155. Smith v. California, 361 U.S. 147, 80 S.Ct. 215, 4 L.Ed.2d 205 (1959); Lambert v. California, 355 U.S. 225, 78 S.Ct. 240, 2 L.Ed.2d 228 (1957); Texaco Inc. v. Short, 454 U.S. 516, 102 S.Ct. 781, 70 L.Ed.2d 738 (1982); Carter v. United States, 530 U.S. 255, 120 S.Ct. 2159, 147 L.Ed.2d 203 (2000); Alan C. Michaels, Imposing Constitutional Limits on Strict Liability: Lessons from the American Experience, APPRAISING STRICT LIABILITY 218, 222–223 (A. P. Simester ed., 2005).
156. State v. Stepniewski, 105 Wis.2d 261, 314 N.W.2d 98 (1982); State v. McDowell, 312 N.W.2d 301 (N.D. 1981); State v. Campbell, 536 P.2d 105 (Alaska 1975); Kimoktoak v. State, 584 P.2d 25 (Alaska 1978); Hentzner v. State, 613 P.2d 821 (Alaska 1980); State v. Brown, 389 So.2d 48 (La. 1980).
To refute the presumption of negligence embodied in strict liability, the offender
must prove two cumulative conditions:
(a) no general intent or negligence actually existed in the offender; and
(b) all reasonable measures to prevent the offense were taken.
The first condition deals with the actual mental state of the offender. According to
the presumption, the commission of the factual element presumes that the offender
is at least negligent; that is, the offender's mental state is one of negligence or
general intent. Thus, at first, the conclusion of the presumption should be refuted, so
that the presumption is proven incorrect in this case. The offender should prove
that he was not aware of the relevant facts, and that no other reasonable person could
have been aware of them under the particular circumstances of the case.
This proof resembles refuting general intent in general intent offenses and
negligence in negligence offenses. However, strict liability offenses are not general
intent or negligence offenses, so refuting general intent and negligence is not
adequate to prevent the imposition of criminal liability. The social and behavioral
purpose of these offenses is to make individuals conduct themselves strictly and make
sure that the offense is not committed. That should be proven as well. Consequently,
the offender should prove that he has taken all reasonable measures to prevent the
offense.157
The difference between strict liability and negligence is sharp. To refute negligence
it is adequate to prove that the offender has taken a reasonable measure, but to
refute strict liability it is required to prove that all reasonable measures were actually
taken. In order to refute the negligence presumption of strict liability, the defendant
should positively prove each of these two conditions by a preponderance of the
evidence, as in civil law cases. The defendant is not required to prove these conditions
beyond reasonable doubt, but in general it is not sufficient to merely raise a reasonable
doubt. This burden of proof is higher than the general burden placed on the defendant.
157. B. v. Director of Public Prosecutions, [2000] 2 A.C. 428, [2000] 1 All E.R. 833, [2000] 2 W.L.R. 452, [2000] 2 Cr. App. Rep. 65, [2000] Crim. L.R. 403; Richards, [2004] E.W.C.A. Crim. 192.
The possibility for the offender to refute the presumption becomes part of the
strict liability requirement, since it relates to the offender's mental state. The
modern structure of strict liability continues the concept of a minimal requirement.
It contains both inner and external aspects. Inwardly, strict liability is the minimal
requirement of mental element for each of the factual element components.
Consequently, if strict liability is proven in relation to the circumstances and
results, but negligence is proven in relation to the conduct, that satisfies the
requirement of strict liability. It means that for each of the factual element
components at least strict liability is required, but not exclusively strict liability.
Outwardly, the mental element requirement of strict liability offenses is satisfied
through at least strict liability, but not exclusively. It means that criminal liability
for strict liability offenses may be imposed through proving general intent or
negligence as well as strict liability.
Since strict liability is still considered an exception to the general requirement of
general intent, strict liability has been accepted as an adequate mental element only
for relatively lenient offenses. In some legal systems around the world, strict liability
has been restricted ex ante or ex post to lenient offenses.158 This general structure of
strict liability is a template which contains terms from the mental terminology.
Accordingly, criminal liability for a strict liability offense is imposed unless the
offender:
(a) was neither aware nor negligent towards the factual element components;
and
(b) has taken all reasonable measures to prevent the commission of the offense.
158. In re Welfare of C.R.M., 611 N.W.2d 802 (Minn. 2000); State v. Strong, 294 N.W.2d 319 (Minn. 1980); Thompson v. State, 44 S.W.3d 171 (Tex. App. 2001); State v. Anderson, 141 Wash.2d 357, 5 P.3d 1247 (2000).
159. Above at Sect. 4.4.1.
When an artificial intelligence system is examined, the court has to decide three
questions:
(a) Has the factual element of the offense been fulfilled by the artificial
intelligence system?
(b) Does the artificial intelligence system have the general capability of
consolidating awareness or negligence?
(c) Does the artificial intelligence system have the general capability of
reasonability?
If the answer is positive for all three questions, and that is proven beyond any
reasonable doubt, the artificial intelligence system has fulfilled the requirements of
the particular strict liability offense. Consequently, the artificial intelligence system
is presumed to be at least negligent.
At this point, the defense has the opportunity to refute the negligence presumption
through positive evidence. After the evidence is presented, the court has to
decide two questions:
(a) Had the artificial intelligence system actually formed general intent or
negligence towards the factual element components of the strict liability
offense?
(b) Had the artificial intelligence system failed to take all reasonable measures to
prevent the actual commission of the offense?
If the answer to even one of these questions is positive, the negligence presumption
is not refuted, and criminal liability for the strict liability offense is imposed. Only if
the answers to both questions are negative is the negligence presumption refuted, and
no criminal liability for the strict liability offense is to be imposed upon the artificial
intelligence system. In general, artificial intelligence systems which are capable of
forming awareness and negligence have neither a technological nor a legal problem
forming the inner requirements of strict liability offenses, for strict liability is a
lower level of mental element than general intent or negligence.
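The two-stage examination above, three questions establishing the negligence presumption and then two questions through which the defense may refute it, can be sketched as a single procedure; all names below are illustrative assumptions, not terms from the text.

```python
# Illustrative sketch of the two-stage strict liability examination
# described above. Parameter names are assumptions, not the book's terms.

def strict_liability_imposed(factual_element_fulfilled: bool,
                             capable_of_awareness_or_negligence: bool,
                             capable_of_reasonability: bool,
                             formed_intent_or_negligence: bool,
                             took_all_reasonable_measures: bool) -> bool:
    # Stage 1: the negligence presumption arises only if all three
    # threshold questions are answered positively.
    presumption = (factual_element_fulfilled
                   and capable_of_awareness_or_negligence
                   and capable_of_reasonability)
    if not presumption:
        return False
    # Stage 2: the presumption is refuted only if BOTH refutation
    # questions are answered negatively, i.e. no general intent or
    # negligence was actually formed AND all reasonable preventive
    # measures were taken.
    refuted = (not formed_intent_or_negligence
               and took_all_reasonable_measures)
    return not refuted

# Formed no fault, but failed to take all reasonable measures:
# the presumption stands and liability is imposed.
print(strict_liability_imposed(True, True, True, False, False))  # True
# Formed no fault AND took all reasonable measures: refuted.
print(strict_liability_imposed(True, True, True, False, True))   # False
```

Note how the refutation stage mirrors the text: a single positive answer to either refutation question leaves the presumption, and hence the liability, in place.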
Thus, strict liability is relevant to artificial intelligence technology, and its proof
in court is possible. Accordingly, the question is who is to be criminally liable for
the commission of this kind of offense. In general, imposition of criminal liability
for strict liability offenses requires the fulfillment of both the factual and mental
elements of these offenses. Humans are involved in the creation of artificial
intelligence technology and systems, their design, programming and operation.
Consequently, when the factual and mental elements of the offense are fulfilled by
artificial intelligence systems, criminal liability may be imposed upon them.
In strict liability offenses, as in general intent and negligence offenses, when the
offender fulfils both the factual and mental element requirements of the particular
offense, criminal liability is to be imposed. In the very same way as in general intent
and negligence offenses, the court is not supposed to check whether the offender
was “evil”, “immoral” etc. That is true for all types of offenders: humans,
corporations and artificial intelligence technology.
Therefore, the same justifications for the imposition of criminal liability upon
artificial intelligence technology in general intent and negligence offenses are
relevant here as well. As long as the narrow fulfillment of these requirements
exists, criminal liability should be imposed. However, strict liability and negligence
offenses also differ from general intent offenses in their social purpose. The
relevant question is whether this different social purpose is relevant not
only for humans and corporations, but for artificial intelligence technology as well.
From their inception, strict liability offenses were not designed to deal with "evil" persons, but with individuals who did not make every effort to prevent the commission of the offense. Therefore, the debate over evil in criminal law is not relevant for strict liability offenses, as it may be for general intent offenses. The criminal law in this context functions as an educator, ensuring that no offense is committed. Accordingly, it is supposed to shape the outlines of individual discretion.
The boundaries of that discretion are drawn by strict liability offenses. Any person may sometimes exercise discretion in a wrong way and make some effort, but not every effort, to prevent the commission of an offense. Most of the time, that does not contradict criminal law norms. For instance, people may take the risk of investing their money in doubtful stock, and thereby fail to make every effort to prevent damage to their investments, but that does not amount to a matter of criminal law.
However, in some cases such conduct does contradict a norm of criminal law, which is designed to educate us to make sure that no offense is committed. As long as people's wrong discretion and lack of effort to prevent offenses do not contradict the criminal law, society expects people to learn their lesson on their own. The next time people invest their money, they will be much more careful in examining the details relevant to the investment. This is how human life experience is gained.
However, society takes the risk that people will not learn their lesson, and it still does not intervene through the criminal law. At some point, when criminal offenses are committed, society does not take the risk of letting the individual learn the lesson as an autodidact. The social harm in these cases is too grave to be left to such a risk. In such cases society intervenes, and that is done through
4.4 Strict Liability and Artificial Intelligence Systems 143
both strict liability and negligence offenses. In strict liability offenses the purpose is to educate individuals to make every possible effort to prevent the occurrence of the offense.
For instance, when driving on the road, every driver is expected (and educated) to drive so carefully that any traffic offense is prevented. The purpose is to make it more certain that the specific individual will learn how to behave carefully, even extra-carefully. Prospectively, it is assumed that after the individual is convicted of the strict liability offense, the probability of re-commission of the offense will be much lower. Thus, for instance, society educates its drivers to drive extra-carefully and its employers to adhere to social security regulations regarding the payment of wages, etc.
Human and corporate offenders are supposed to learn how to behave carefully through the criminal law. Is this relevant for artificial intelligence technology as well? The answer is positive. Given the educative purpose of strict liability, there is little use or utility in the imposition of criminal liability unless the offender has the ability to learn and to change behavior accordingly.
If society wants to make the offender learn to behave very carefully, society must assume that the offender has the capability to learn and to implement that knowledge. If such capabilities exist and are exercised, criminal liability for strict liability offenses is necessary. However, if no such capabilities are exercised, it is completely unnecessary, for no prospective value can be expected; using or not using criminal liability for a strict liability offense would lead to the same results.
For artificial intelligence systems that are equipped with the relevant capabilities of machine learning, criminal liability for strict liability offenses is no less necessary, provided these capabilities are applied in the relevant situations involving duties to behave extra-carefully. Just as for humans, strict liability offenses may draw the boundaries of discretion for artificial intelligence systems. Humans, corporations and artificial intelligence systems are all supposed to learn from their experience and improve their decisions prospectively, including their standards of carefulness.
When carelessness falls within the scope of the criminal law, the criminal law intervenes in shaping discretion towards careful behavior. For the artificial intelligence system, criminal liability for strict liability offenses is a chance to reconsider the decision-making process in light of the external limitations dictated by the criminal law, which require extra-careful conduct and decision-making. If society has learned over the years that the human process of decision-making requires criminal liability for strict liability offenses in order to be improved, this logic is no less relevant for artificial intelligence systems using machine learning methods.
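The educative rationale described above can be illustrated with a minimal, hypothetical sketch of a machine-learning driver agent: a penalty following conviction raises the agent's caution threshold, lowering the probability of re-commission. The class, its parameters and the update rule are illustrative assumptions, not a description of any real system.

```python
# Hypothetical sketch: a learning agent that becomes more careful after
# being penalized, mirroring the educative purpose of strict liability.
class LearningDriverAgent:
    """A toy agent whose 'discretion' can be reshaped by penalties."""

    def __init__(self, caution: float = 0.2):
        self.caution = caution  # in [0, 1]; higher = more careful

    def commits_offense(self, risk: float) -> bool:
        # The agent takes the risky action (an offense) only when the
        # situational risk exceeds its current caution threshold.
        return risk > self.caution

    def penalize(self, step: float = 0.3) -> None:
        # The educative effect of conviction: raise the caution threshold,
        # making re-commission of the offense prospectively less likely.
        self.caution = min(1.0, self.caution + step)

agent = LearningDriverAgent()
before = agent.commits_offense(0.4)   # True: risk 0.4 > caution 0.2
agent.penalize()                      # conviction "educates" the agent
after = agent.commits_offense(0.4)    # False: risk 0.4 <= caution 0.5
```

A constant system without such an update rule would behave identically before and after the penalty, which is exactly the case in which, as argued above, criminal liability has no prospective value.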
It may be argued that artificial intelligence systems can simply be reprogrammed, but then their valuable experience, gained through machine learning, would be lost. It may also be argued that artificial intelligence systems are capable of learning their boundaries and correcting their discretion on their own, but the very same may be said of humans, and still society imposes criminal liability upon humans for strict liability offenses.
Consequently, if artificial intelligence technology has the required capabilities of fulfilling both the factual and mental elements of criminal liability for strict liability offenses, and if the rationale for the imposition of criminal liability for these offenses is relevant for both humans and artificial intelligence systems, there is no reason to avoid criminal liability in these cases. However, this is not the only way artificial intelligence may be involved in criminal liability for strict liability offenses.
In most legal systems, the useful way to deal with the instrumental use of individuals for the commission of offenses is the general form of perpetration-through-another. In order to impose criminal liability for perpetration-through-another of a particular offense, it is necessary to prove awareness of that instrumental use. Consequently, perpetration-through-another is applicable only in general intent offenses. In most cases, the other person being instrumentally used by the perpetrator is considered an "innocent agent" and no criminal liability is imposed upon him.
The analysis of perpetration-through-another in the context of general intent offenses has been discussed above. However, the person who is instrumentally used may also be considered a "semi-innocent agent", who is criminally liable for negligence, although the perpetrator is criminally liable for a general intent offense. Negligence is the lowest level of mental element required for the instrumentally used person to be considered a "semi-innocent agent". In this context, strict liability is too low a level of mental element for that person to be considered a "semi-innocent agent".
If the person who is instrumentally used by another is in a mental state of strict liability, that person is to be considered an innocent agent, as if that person had no criminal mental state at all. As a result, in perpetration-through-another of a particular offense, the other person (the instrumentally used person) may be in one of four possible mental states, each matched with its legal consequence (accomplice in general intent, semi-innocent agent in negligence, innocent agent in strict liability, and innocent agent in the absence of any mental element).
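The four possible mental states of the instrumentally used person, and the legal classifications matched to them in the paragraph above, can be summarized in a small illustrative mapping (a sketch for exposition only; the key labels are hypothetical):

```python
# Illustrative mapping: the instrumentally used person's mental state
# to the legal classification described in the text.
USED_PERSON_STATUS = {
    "general_intent": "accomplice",           # aware participant
    "negligence": "semi-innocent agent",      # liable for negligence only
    "strict_liability": "innocent agent",     # treated as lacking any mental state
    "no_mental_element": "innocent agent",    # no criminal liability at all
}

def classify_used_person(mental_state: str) -> str:
    """Return the legal status of the instrumentally used person."""
    return USED_PERSON_STATUS[mental_state]
```

Note that strict liability and the complete absence of a mental element map to the same status, which is the point made in the text: strict liability is too low a level to create a semi-innocent agent.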
When the other person is aware of the delinquent enterprise and still continues to participate under no pressure, he becomes an accomplice to the commission of the offense. Negligence reduces that person's legal status to that of a semi-innocent agent.160 However, whether the mental state of that person is one of strict liability or of no criminal mental state, that person is to be considered an innocent agent, so that no criminal liability is imposed upon him, and the full criminal liability for the relevant offense is imposed upon the perpetrator who instrumentally used that person. That is correct for both humans and artificial intelligence technology.
160
See, e.g., Glanville Williams, Innocent Agency and Causation, 3 CRIM. L. F. 289 (1992).
relevant. The mental condition for probable consequence liability requires the unplanned offense to be "probable" for the party who did not actually commit it. It is necessary that the party could have reasonably foreseen the commission of the offense. Some legal systems prefer to examine actual, subjective foreseeability (the party actually and subjectively foresaw the occurrence of the unplanned offense), whereas others prefer to evaluate the ability to foresee through an objective standard of reasonableness (the party did not actually foresee the occurrence of the unplanned offense, but any reasonable person in his position could have). Actual foreseeability parallels subjective general intent, whereas objective foreseeability parallels objective negligence.
A lower level of foreseeability is not adequate for probable consequence liability. Consequently, the question concerns the level of foreseeability of the users (the robbers). If they actually foresaw the commission of that offense by the drone, criminal liability for that offense would be imposed upon them in addition to the criminal liability for robbery. If the users formed objective foreseeability, criminal liability for the additional offense would be imposed only in legal systems where probable consequence liability may be satisfied through objective foreseeability. However, if the mental state of the users towards the additional offense is strict liability, it is not adequate for the imposition of criminal liability through probable consequence liability.
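The role of the foreseeability level in probable consequence liability, as described above, can be sketched as a simple decision function (hypothetical names; whether objective foreseeability suffices varies across legal systems):

```python
def probable_consequence_liability(foreseeability: str,
                                   objective_suffices: bool) -> bool:
    """Decide liability for the unplanned offense under probable consequence
    liability. 'subjective' = actual foreseeability; 'objective' = the
    reasonable-person standard, accepted only in some legal systems;
    anything lower (e.g. strict liability) is never adequate."""
    if foreseeability == "subjective":
        return True
    if foreseeability == "objective":
        return objective_suffices
    return False
```

On this sketch, a user with strict liability towards the additional offense escapes probable consequence liability in every legal system, which is the conclusion drawn in the text.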
Even though the additional offense itself requires only strict liability for the imposition of criminal liability, that holds only for the actual perpetrator of that offense and not for the imposition of criminal liability through probable consequence liability. Thus, if the users had neither subjective nor objective foreseeability of the commission of the offense, probable consequence liability would be irrelevant. In such cases no criminal liability would be imposed upon the users, and the artificial intelligence system's criminal liability for the strict liability offense would not affect the users' liability.
Negative Fault Elements and Artificial
Intelligence Systems 5
Contents
5.1 Relevance and Structure of Negative Fault Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5.2 Negative Fault Elements by Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . 150
5.2.1 In Personam Negative Fault Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
5.2.2 In Rem Negative Fault Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Negative fault elements are defenses, which the court is bound to consider, if claimed, when imposing criminal liability upon the defendant. Defenses in criminal law are complementary to the mental element requirement. Both deal with the offender's fault concerning the commission of the offense. The mental element requirement is the positive aspect of fault (what should be in the offender's mind during the commission of the offense), whereas the general defenses are the negative aspect of fault (what should not be in the offender's mind during the commission of the offense).1
For instance, awareness is part of the mental element requirement (general intent), and insanity is a general defense. Therefore, in general intent offenses the offender must be aware and must not be insane. Thus, the fault requirement in criminal law consists of both the mental element requirement and the general defenses. The general defenses were developed in the ancient world in order to prevent injustice in certain types of cases. For instance, a person who killed another in self-defense was not criminally liable for the homicide, since he lacked the required fault to cause death. An authentic factual mistake by the offender as to the commission of an intentional offense was considered then as negating the required fault for the imposition of
1
ANDREW ASHWORTH, PRINCIPLES OF CRIMINAL LAW 157–158, 202 (5th ed., 2006).
criminal liability.2 In the modern era the general defenses became wider and more conclusive. However, the common factor of all general defenses has remained the same: all general defenses in criminal law are part of the negative aspect of the fault requirement, as they are meant to negate the offender's fault.
The deep abstract question behind the general defenses is whether the commission of the offense was, in effect, imposed upon the offender. Thus, when a person genuinely acts in self-defense, the act is considered as imposed upon him. To save his life, which is considered a legitimate purpose, that person had no choice but to act in self-defense. Of course, such a person could have given up his life, but that is not considered a legitimate requirement, as it goes against the natural instinct of every living creature.
All general defenses may be divided into two main types: in personam and in rem defenses.3 In personam defenses are general defenses related to the personal characteristics of the offender (exempts), whereas in rem defenses are related to the characteristics of the factual event (justifications). The personal characteristics of the offender may negate the fault towards the commission of the offense, regardless of the factual characteristics of the event or the exact identity of the particular offense. In in personam defenses the personal characteristics of the offender are adequate to prevent the imposition of criminal liability for any offense.
For instance, a child under the age of legal maturity is not criminally liable for any offense factually committed by him. The same holds for insane individuals who committed the offense while they were considered insane. The exact identity of the particular offense committed by the individual is completely insignificant for the question of the imposition of criminal liability. It may perhaps be relevant for further steps of treatment or rehabilitation triggered by the commission of the offense, but not for the imposition of criminal liability. The personal in personam defenses, as general defenses, are complemented by the impersonal in rem defenses.
In rem defenses are impersonal general defenses. As such, they do not depend on the identity of the offender, but only on the factual event that actually occurred. The personal characteristics of the individual are insignificant for in rem defenses. For instance, suppose an individual is attacked by another person and faces real danger to his life, and the only way out of this danger is to push the attacker away. Pushing a person is considered assault unless done with consent. In this case, the pushing person would argue self-defense, regardless of his identity, the attacker's identity or any other personal attributes of theirs, since self-defense relates only to the factual event itself.
2
REUVEN YARON, THE LAWS OF ESHNUNNA 265, 283 (2nd ed., 1988).
3
Compare Kent Greenawalt, Distinguishing Justifications from Excuses, 49 LAW & CONTEMP.
PROBS. 89 (1986); Kent Greenawalt, The Perplexing Borders of Justification and Excuse,
84 COLUM. L. REV. 1897 (1984); GEORGE P. FLETCHER, RETHINKING CRIMINAL LAW 759–817
(1978, 2000).
Since in rem defenses are impersonal, they also have a prospective value. Not only is the individual not criminally liable if he acted under an in rem defense, but he should also have acted this way. In rem defenses define not only types of general defenses, but also proper behavior.4 Thus, individuals should defend themselves under the conditions of self-defense, even though that may apparently seem to be the commission of an offense. This is not true for in personam defenses. An infant under the age of maturity is not supposed to commit offenses, although no criminal liability would be imposed; nor are insane individuals.
The prospective behavioral value of in rem defenses expresses the social values of the relevant society. If self-defense, for instance, is accepted as a legitimate in rem defense, it means that society prefers individuals to protect themselves when the authorities are unable to protect them. Society prefers to reduce the state's monopoly on power by legitimizing self-assistance rather than leave individuals vulnerable and helpless. Society does not force individuals to act in self-defense, but if they do so, it does not consider them criminally liable for the offense committed through that self-defense.
Both in personam defenses and in rem defenses are general defenses. The phrase "general defenses" refers to defenses that may be attributed to any offense, and not to a particular group of offenses. For instance, infancy may be attributed to any offense, as long as it has been committed by an infant. By contrast, there are some specific defenses, which may be attributed only to specific offenses or specific types of offenses. For instance, in some countries, in the offense of statutory rape, it is a defense for the defendant if the age gap is under 3 years. This defense is unique to statutory rape, and it is irrelevant for any other offense. In personam defenses and in rem defenses are classified as general defenses.
As defense arguments, general defenses should be positively argued by the defense. If the defense chooses not to raise these arguments, they are not discussed in court, even if all participants in the trial understand that such an argument may be relevant. It is not enough to argue the general defense; its elements should be proven by the defendant. In some legal systems they should be proven by raising a reasonable doubt that the elements of the defense actually existed in the case, and in other legal systems they should be proven by a preponderance of the evidence. Accordingly, the prosecution has the opportunity to refute the general defense.
General defenses of the in personam type include infancy, loss of self-control, insanity, intoxication, factual mistake, legal mistake and substantive immunity. General defenses of the in rem type include self-defense (including defense of dwelling), necessity, duress, superior orders, and the de minimis defense. All these general defenses may negate the offender's fault. The question is whether
4
Compare Paul H. Robinson, A Theory of Justification: Societal Harm as a Prerequisite for
Criminal Liability, 23 U.C.L.A. L. REV. 266 (1975); Paul H. Robinson, Testing Competing
Theories of Justification, 76 N.C. L. REV. 1095 (1998); George P. Fletcher, The Nature of
Justification, ACTION AND VALUE IN CRIMINAL LAW 175 (Stephen Shute, John Gardner and Jeremy
Horder eds., 2003).
these general defenses are applicable to artificial intelligence technology in the context of criminal law. These general defenses and their applicability to artificial intelligence criminal liability are discussed below, divided into in personam defenses and in rem defenses.
5.2.1 In Personam Negative Fault Elements
In personam negative fault elements are general defenses related to the personal characteristics of the offender (in personam), as noted above.5 The applicability of in personam defenses to artificial intelligence criminal liability raises the question of the capability of artificial intelligence systems to form the personal characteristics required for these general defenses. The question whether an artificial intelligence system could be insane, for instance, translates first into the question whether it has the mental capability of forming the elements of insanity in criminal law. This question, mutatis mutandis, is relevant for all in personam defenses, as discussed below.
5.2.1.1 Infancy
Could an artificial intelligence system be considered an infant for the question of criminal liability? Since ancient times, infants under a certain biological age have not been considered criminally liable (doli incapax). The differences between legal systems concerned the exact age of maturity. For instance, Roman law set it at the age of 7.6 This defense is determined through legislation7 and case-law.8 It was unquestioned that the relevant age is the biological and not the mental age, mainly for evidential reasons.9 Biological age is much easier to prove. However, it was presumed that the biological age matches the mental age. If the infant's biological age is above the lower threshold, but under the age of full maturity, the mental age of the infant is examined through evidence (e.g., expert testimony).10 The conclusive examination is whether the infant understands the
5
Above at Sect. 5.1.
6
RUDOLPH SOHM, THE INSTITUTES OF ROMAN LAW 219 (3rd ed., 1907).
7
See, e.g., MINN. STAT. }9913 (1927); MONT. REV. CODE }10729 (1935); N.Y. PENAL CODE }816
(1935); OKLA. STAT. }152 (1937); UTAH REV. STAT. 103-I-40 (1933).
8
State v. George, 20 Del. 57, 54 A. 745 (1902); Heilman v. Commonwealth, 84 Ky.
457, 1 S.W. 731 (1886); State v. Aaron, 4 N.J.L. 269 (1818).
9
State v. Dillon, 93 Idaho 698, 471 P.2d 553 (1970); State v. Jackson, 346 Mo. 474, 142 S.W.2d
45 (1940).
10
See Godfrey v. State, 31 Ala. 323 (1858); Martin v. State, 90 Ala. 602, 8 So. 858 (1891); State
v. J.P.S., 135 Wash.2d 34, 954 P.2d 894 (1998); Beason v. State, 96 Miss. 165, 50 So. 488 (1909);
State v. Nickelson, 45 La.Ann. 1172, 14 So. 134 (1893); Commonwealth v. Mead, 92 Mass.
way he behaves and whether he understands the wrong character of that behavior.11 If the infant understands, he may be criminally liable for the offense, as if he were mature.
However, there may be some procedural changes in the criminal process in comparison to the standard process (e.g., juvenile court, presence of parents, lenient punishments, etc.). The rationale behind this general defense is that infants under a certain age (biological or mental) are presumed to be incapable of forming the relevant fault required for criminal liability.12 The mental capacity of the infant is incapable of containing the required fault and of understanding the full social and individual meanings of criminal liability. In this case, the imposition of criminal liability is irrelevant, unnecessary and vicious.
Consequently, infants are not held criminally liable, but are rather educated, rehabilitated and treated.13 Accordingly, the question is whether this rationale is relevant only for humans, or whether it may be relevant for other legal entities as well. The general defense of infancy is not considered applicable to corporations. There are no "infant corporations", and from the moment the corporation is registered (and legally exists), criminal liability may be imposed upon it. The reason is that the rationale of this general defense is irrelevant for corporations. An infant has no mental capability to form the required fault due to the underdevelopment of consciousness at this age. When the infant becomes older, the mental capacity develops gradually until it possesses the capability of understanding right and wrong.
At this point, criminal liability becomes relevant. The mental capacity of corporations does not depend on their chronological "age" (the date of registration); it is considered to be constant. Moreover, the mental capacity of a corporation is derived from its human organs, who are mature entities. Consequently, there is no legitimate rationale for the general defense of infancy to be applicable to corporations. At this point the question is whether artificial intelligence systems resemble humans or corporations in this context.
The answer differs in relation to different types of artificial intelligence systems. A distinction should be drawn between constant artificial intelligence systems and dynamically developing artificial intelligence systems. Constant artificial intelligence systems begin their activity with the same capacities that accompany them during their entire activity over the years. Such systems would not experience any
398 (1865); Willet v. Commonwealth, 76 Ky. 230 (1877); Scott v. State, 71 Tex.Crim.R. 41, 158
S.W. 814 (1913); Price v. State, 50 Tex.Crim.R. 71, 94 S.W. 901 (1906).
11
Adams v. State, 8 Md.App. 684, 262 A.2d 69 (1970):
the most modern definition of the test is simply that the surrounding circumstances must
demonstrate, beyond a reasonable doubt, that the individual knew what he was doing and
that it was wrong.
12
A.W.G. Kean, The History of the Criminal Liability of Children, 53 L. Q. REV. 364 (1937).
13
Andrew Walkover, The Infancy Defense in the New Juvenile Court, 31 U.C.L.A. L. REV.
503 (1984); Keith Foren, Casenote: In Re Tyvonne M. Revisited: The Criminal Infancy Defense
in Connecticut, 18 Q. L. REV. 733 (1999).
14
Frederick J. Ludwig, Rationale of Responsibility for Young Offenders, 29 NEB. L. REV.
521 (1950); In re Tyvonne, 211 Conn. 151, 558 A.2d 661 (1989).
defense of infancy may be relevant for that type of artificial intelligence system under the relevant circumstances.
Thus, in this example, the patient did not control his behavior (the reflex), but he surely controlled the conditions for its occurrence (knocking the knee); therefore the general defense of loss of self-control would not be applicable to him. Many types of situations have been recognized as loss of self-control.
15
In Bratty v. Attorney-General for Northern Ireland, [1963] A.C. 386, 409, [1961] 3 All E.R. 523,
[1961] 3 W.L.R. 965, 46 Cr. App. Rep 1, Lord Denning noted:
The requirement that it should be a voluntary act is essential, not only in a murder case, but
also in every criminal case. No act is punishable if it is done involuntarily.
State v. Mishne, 427 A.2d 450 (Me.1981); State v. Case, 672 A.2d 586 (Me.1996).
16
See, e.g., People v. Newton, 8 Cal.App.3d 359, 87 Cal.Rptr. 394 (1970).
These situations include automatism (acting with no aware central control over the body),17 convulsions, post-epileptic states,18 post-stroke states,19 organic brain diseases, central nervous system diseases, hypoglycemia, hyperglycemia,20 somnambulism (sleep-walking),21 extreme sleep deprivation,22 side effects of bodily23 or mental traumas,24 blackout situations,25 side effects of amnesia26 and brainwashing,27 and many other situations.28 For the fulfillment of the first condition, the cause of the loss of self-control is insignificant. As long as the offender is actually incapable of controlling his behavior, the first condition is fulfilled.
The second condition relates to the cause of entering the state described in the first condition. If that cause was controlled by the offender, he is not considered to have lost his self-control. Controlling the conditions for losing self-control is controlling the behavior. Therefore, when the offender controls the conditions for becoming in or out of control, he may not be considered an individual who lost his self-control. In Europe, the second condition is a doctrine (actio libera in causa), which dictates
17
Kenneth L. Campbell, Psychological Blow Automatism: A Narrow Defence, 23 CRIM. L. Q.
342 (1981); Winifred H. Holland, Automatism and Criminal Responsibility, 25 CRIM. L. Q.
95 (1982).
18
People v. Higgins, 5 N.Y.2d 607, 186 N.Y.S.2d 623, 159 N.E.2d 179 (1959); State v. Welsh,
8 Wash.App. 719, 508 P.2d 1041 (1973).
19
Reed v. State, 693 N.E.2d 988 (Ind.App.1998).
20
Quick, [1973] Q.B. 910, [1973] 3 All E.R. 347, [1973] 3 W.L.R. 26, 57 Cr. App. Rep. 722, 137
J.P. 763; C, [2007] E.W.C.A. Crim. 1862, [2007] All E.R. (D) 91.
21
Fain v. Commonwealth, 78 Ky. 183 (1879); Bradley v. State, 102 Tex.Crim.R. 41, 277 S.W. 147
(1926); Norval Morris, Somnambulistic Homicide: Ghosts, Spiders, and North Koreans, 5 RES
JUDICATAE 29 (1951).
22
McClain v. State, 678 N.E.2d 104 (Ind.1997).
23
People v. Newton, 8 Cal.App.3d 359, 87 Cal.Rptr. 394 (1970); Read v. People, 119 Colo.
506, 205 P.2d 233 (1949); Carter v. State, 376 P.2d 351 (Okl.Crim.App.1962).
24
People v. Wilson, 66 Cal.2d 749, 59 Cal.Rptr. 156, 427 P.2d 820 (1967); People v. Lisnow,
88 Cal.App.3d Supp. 21, 151 Cal.Rptr. 621 (1978); Lawrence Taylor and Katharina Dalton,
Premenstrual Syndrome: A New Criminal Defense?, 19 CAL. W. L. REV. 269 (1983); Michael
J. Davidson, Feminine Hormonal Defenses: Premenstrual Syndrome and Postpartum Psychosis,
2000 ARMY LAWYER 5 (2000).
25
Government of the Virgin Islands v. Smith, 278 F.2d 169 (3rd Cir.1960); People v. Freeman,
61 Cal.App.2d 110, 142 P.2d 435 (1943); State v. Hinkle, 200 W.Va. 280, 489 S.E.2d 257 (1996).
26
State v. Gish, 17 Idaho 341, 393 P.2d 342 (1964); Evans v. State, 322 Md. 24, 585 A.2d
204 (1991); State v. Jenner, 451 N.W.2d 710 (S.D.1990); Lester v. State, 212 Tenn. 338, 370 S.
W.2d 405 (1963); Polston v. State, 685 P.2d 1 (Wyo.1984).
27
Richard Delgado, Ascription of Criminal States of Mind: Toward a Defense Theory for the
Coercively Persuaded (“Brainwashed”) Defendant, 63 MINN. L. REV. 1 (1978); Joshua Dressler,
Professor Delgado’s “Brainwashing” Defense: Courting a Determinist Legal System, 63 MINN.
L. REV. 335 (1978).
28
FRANCIS ANTONY WHITLOCK, CRIMINAL RESPONSIBILITY AND MENTAL ILLNESS 119–120 (1963).
that if the entrance into the uncontrolled situation was itself controlled, the general defense of loss of self-control is rejected.29
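The cumulative structure of the defense — actual loss of control, plus no control over the cause of entering that state (actio libera in causa) — can be sketched as follows (an expository formalization with hypothetical parameter names):

```python
def loss_of_self_control_applies(lost_control: bool,
                                 controlled_the_cause: bool) -> bool:
    """First condition: the offender was actually unable to control the conduct.
    Second condition (actio libera in causa): the offender did not control the
    entry into the uncontrolled state; otherwise the defense is rejected."""
    return lost_control and not controlled_the_cause

# The knee-reflex patient who induced the reflex himself: defense rejected.
patient = loss_of_self_control_applies(lost_control=True, controlled_the_cause=True)
# A system pushed by someone else onto a third person: defense applies.
pushed = loss_of_self_control_applies(lost_control=True, controlled_the_cause=False)
```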
Accordingly, artificial intelligence systems may experience loss of self-control in the context of criminal law. The loss of self-control may have external or internal causes. For example, a human pushes an artificial intelligence system onto another person. The pushed artificial intelligence system has no control over that movement. This is an example of an external cause of loss of self-control. If the pushed artificial intelligence system makes non-consensual physical contact with the other person, that contact may be considered assault. The mental element required for assault is awareness.
If the artificial intelligence system is aware of that physical contact, both the factual and mental element requirements of assault are fulfilled. If the artificial intelligence system were human, it would probably have argued the general defense of loss of self-control. Consequently, although both the mental and factual elements of the assault are fulfilled, no criminal liability for assault should be imposed, since the commission of the offense was involuntary, that is, due to loss of self-control. This general defense would have prevented the imposition of criminal liability upon a human offender; it should likewise prevent the imposition of criminal liability upon artificial intelligence technology.
If the artificial intelligence system had no capability of consolidating awareness, there would be no need for this defense, since the artificial intelligence system would be functioning as no more than a screwdriver. However, the artificial intelligence system is aware of the assault, and a screwdriver is not. The capability of the artificial intelligence system to fulfill the mental element requirement of the offense creates the need to apply this general defense. This general defense functions for humans and artificial intelligence systems in the same way.
An example of an internal cause of loss of self-control is an internal malfunction
or technical failure of the movement system that causes uncontrolled movements of
the artificial intelligence system. The artificial intelligence system may be aware of
the malfunction, but still be unable to control or fix it. This is also a case for the
general defense of loss of self-control. Thus, the cause of the loss of self-control,
whether external or internal, is relevant to the applicability of this general
defense.
However, if the artificial intelligence system controlled these causes, the defense
would be inapplicable. For instance, if the artificial intelligence system physically
caused a person to push it onto another person (external cause), or if the artificial
intelligence system caused the malfunction while knowing the probable
consequences for its movement mechanism (internal cause), the second condition of
the defense is not fulfilled, and the defense is inapplicable. This is the same
29
RG 60, 29; RG 73, 177; VRS 23, 212; VRS 46, 440; VRS 61, 339; VRS 64, 189; DAR 1985,
387; BGH 2, 14; BGH 17, 259; BGH 21, 381.
156 5 Negative Fault Elements and Artificial Intelligence Systems
situation with humans. Consequently, it seems that the general defense of loss of
self-control may be applicable to artificial intelligence systems.
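The two-condition analysis described above can be sketched, for illustration only, as a simple decision procedure. All names and facts below are hypothetical, not drawn from any statute or case:

```python
from dataclasses import dataclass

@dataclass
class SelfControlFacts:
    """Illustrative facts relevant to the loss-of-self-control defense."""
    conduct_was_controllable: bool   # could the system control the movement itself?
    cause_is_internal: bool          # e.g., internal malfunction vs. external push
    entry_was_self_caused: bool      # did the system bring about the uncontrolled situation?

def loss_of_self_control_applies(facts: SelfControlFacts) -> bool:
    # First condition: the conduct itself must have been uncontrollable.
    if facts.conduct_was_controllable:
        return False
    # Second condition: the defense is rejected if the system itself
    # controlled the entry into the uncontrolled situation, regardless
    # of whether the cause of the loss of control was internal or external.
    if facts.entry_was_self_caused:
        return False
    return True

# External cause: the system was pushed onto another person.
pushed = SelfControlFacts(conduct_was_controllable=False,
                          cause_is_internal=False,
                          entry_was_self_caused=False)
print(loss_of_self_control_applies(pushed))      # True: defense applicable

# The system engineered the push itself: the defense is rejected.
engineered = SelfControlFacts(conduct_was_controllable=False,
                              cause_is_internal=False,
                              entry_was_self_caused=True)
print(loss_of_self_control_applies(engineered))  # False
```

The sketch treats the cause of the loss of self-control, internal or external, as irrelevant to applicability, exactly as the text argues; only self-caused entry into the situation defeats the defense.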
5.2.1.3 Insanity
Could an artificial intelligence system be considered insane for the purposes of
criminal liability? Insanity has been known to humanity since the fourth millennium BC.30
However, it was then considered to be the serving of a sentence for religious sins.31
Insanity was considered the punishment itself, so there was no need
to research it or to find cures for it.32 Only since the middle of the eighteenth century
has insanity been explored as a mental disease, together with its legal aspects.33 In the
nineteenth century the terms "insanity" and "moral insanity" described situations
in which the individual has no moral orientation or has a defective concept of morality,
although he is aware of common moral values.34
Insanity was diagnosed as such only through major deviations from common
behavior, especially sexual behavior.35 Since the end of the nineteenth century it
has been understood that insanity is a mental malfunction, which sometimes may
not be expressed through behavioral deviations from common behavior. This
approach shaped the understanding of insanity in criminal law and criminology.36
Mental diseases and defects were categorized by their symptoms and, along
with the medical treatments for them, their effect on criminal liability was explored and
recorded. However, the different needs of psychiatry and criminal law created
different definitions of insanity.
30
KARL MENNINGER, MARTIN MAYMAN AND PAUL PRUYSER, THE VITAL BALANCE 420–489 (1963);
George Mora, Historical and Theoretical Trends in Psychiatry, 1 COMPREHENSIVE TEXTBOOK OF
PSYCHIATRY 1, 8–19 (Alfred M. Freedman, Harold Kaplan and Benjamin J. Sadock eds., 2nd
ed., 1975).
31
MICHAEL MOORE, LAW AND PSYCHIATRY: RETHINKING THE RELATIONSHIP 64–65 (1984); Anthony
Platt and Bernard L. Diamond, The Origins of the “Right and Wrong” Test of Criminal Responsi-
bility and Its Subsequent Development in the United States: An Historical Survey, 54 CAL. L. REV.
1227 (1966).
32
SANDER L. GILMAN, SEEING THE INSANE (1982); JOHN BIGGS, THE GUILTY MIND 26 (1955).
33
WALTER BROMBERG, FROM SHAMAN TO PSYCHOTHERAPIST: A HISTORY OF THE TREATMENT OF MENTAL
ILLNESS 63 (1975); GEORGE ROSEN, MADNESS IN SOCIETY: CHAPTERS IN THE HISTORICAL SOCIOLOGY OF
MENTAL ILLNESS 33, 82 (1969); EDWARD NORBECK, RELIGION IN PRIMITIVE SOCIETY 215 (1961).
34
JAMES COWLES PRICHARD, A TREATISE ON INSANITY AND OTHER DISORDERS AFFECTING THE MIND
(1835); ARTHUR E. FINK, CAUSES OF CRIME: BIOLOGICAL THEORIES IN THE UNITED STATES, 1800–1915
48–76 (1938); Janet A. Tighe, Francis Wharton and the Nineteenth Century Insanity Defense: The
Origins of a Reform Tradition, 27 AM. J. LEGAL HIST. 223 (1983).
35
Peter McCandless, Liberty and Lunacy: The Victorians and Wrongful Confinement, MADHOUSES,
MAD-DOCTORS, AND MADMEN: THE SOCIAL HISTORY OF PSYCHIATRY IN THE VICTORIAN ERA 339, 354
(Scull ed., 1981); VIEDA SKULTANS, ENGLISH MADNESS: IDEAS ON INSANITY, 1580–1890 69–97 (1979);
MICHEL FOUCAULT, MADNESS AND CIVILIZATION 24 (1965).
36
Seymour L. Halleck, The Historical and Ethical Antecedents of Psychiatric Criminology,
PSYCHIATRIC ASPECTS OF CRIMINOLOGY 8 (Halleck and Bromberg eds., 1968); FRANZ ALEXANDER
AND HUGO STAUB, THE CRIMINAL, THE JUDGE, AND THE PUBLIC 24–25 (1931); FRANZ ALEXANDER, OUR
AGE OF UNREASON: A STUDY OF THE IRRATIONAL FORCES IN SOCIAL LIFE (rev. ed., 1971).
For instance, the early English legal definition of an insane person ("idiot") was
that he is not able to count to twenty, whereas psychiatry does not consider such a
person insane.37 The criminal law needed a bright, clear and conclusive definition
of insanity, whereas psychiatry had no such need. The modern legal definition
of insanity in most modern legal systems is inspired by two nineteenth-century
English tests. One is the M'Naghten rules of 1843,38 and the second is the
irresistible impulse test of 1840.39 The combination of both tests creates correspondence
between the general defense of insanity and the structure of general intent.
The legal definition of insanity has both cognitive and volitive aspects. The cognitive
aspect of insanity concerns the capability to understand the criminality of the
conduct, whereas the volitive aspect concerns the capability to control the will.
Thus, if a mental disease or defect causes cognitive malfunction (difficulty in
understanding the factual reality and the criminality of the conduct) or volitive
malfunction (irresistible impulse), it is considered insanity in the legal sense.40
This is the conclusive common test for insanity.41 It fits the structure of general
intent, which also contains both cognitive and volitive aspects, and it is comple-
mentary to the general intent requirement.42
This definition of insanity is functional, not categorical. It is not necessary
to suffer from a disease on a certain list of mental diseases in order to be considered insane.
Any mental defect, of any kind, may be the basis for insanity as long as it causes
cognitive or volitive malfunctions. The malfunctions are examined functionally,
not necessarily medically through a certain list of mental diseases.43 As a result, a
person may be considered insane for criminal law and perfectly sane for psychiatry
37
Homer D. Crotty, The History of Insanity as a Defence to Crime in English Common Law,
12 CAL. L. REV. 105, 107–108 (1924).
38
M’Naghten, (1843) 10 Cl. & Fin. 200, 8 E.R. 718.
39
Oxford, (1840) 9 Car. & P. 525, 173 E.R. 941.
40
United States v. Freeman, 357 F.2d 606 (2nd Cir.1966); United States v. Currens, 290 F.2d
751 (3rd Cir.1961); United States v. Chandler, 393 F.2d 920 (4th Cir.1968); Blake v. United States,
407 F.2d 908 (5th Cir.1969); United States v. Smith, 404 F.2d 720 (6th Cir.1968); United States
v. Shapiro, 383 F.2d 680 (7th Cir.1967); Pope v. United States, 372 F.2d 710 (8th Cir.1970).
41
Commonwealth v. Herd, 413 Mass. 834, 604 N.E.2d 1294 (1992); State v. Curry, 45 Ohio St.3d
109, 543 N.E.2d 1228 (1989); State v. Barrett, 768 A.2d 929 (R.I.2001); State v. Lockhart, 208 W.
Va. 622, 542 S.E.2d 443 (2000). See also 18 U.S.C.A. § 17.
42
THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT AND EXPLANATORY NOTES 61–
62 (1962, 1985):
(1) A person is not responsible for criminal conduct if at the time of such conduct as a result
of mental disease or defect he lacks substantial capacity either to appreciate the criminality
[wrongfulness] of his conduct or to conform his conduct to the requirements of law;
(2) As used in this Article, the terms ‘mental disease or defect’ do not include an
abnormality manifested only by repeated criminal or otherwise antisocial conduct.
43
State v. Elsea, 251 S.W.2d 650 (Mo.1952); State v. Johnson, 233 Wis. 668, 290 N.W. 159
(1940); State v. Hadley, 65 Utah 109, 234 P. 940 (1925); HENRY WEIHOFEN, MENTAL DISORDER AS A
CRIMINAL DEFENSE 119 (1954); K. W. M. Fulford, Value, Action, Mental Illness, and the Law,
(e.g., a cognitive malfunction that is not categorized as a mental disease). The opposite
possibility (sane for criminal law, but insane for psychiatry) is feasible as well (e.g., a
mental disease that does not cause any cognitive or volitive malfunction).
An insane person is presumed incapable of forming the relevant fault
required for criminal liability. On that basis, the question is whether the general
defense of insanity is applicable to artificial intelligence systems. The general
defense of insanity requires a mental, or inner, defect that causes cognitive or
volitive malfunction. No particular type of mental disease is required; any
mental defect suffices. The question is how the existence of that
"mental defect" can be ascertained.
Since the mental defect is examined functionally and not through fixed
categories, the symptoms of the defect are crucial to its identification.
What matters is whether the inner defect causes cognitive or volitive malfunction,
whether or not that inner defect is classified as a "mental disease", a chemical imbalance
in the brain, an electrical imbalance in the brain, etc. The inner cause is examined
through its functional effect on the human mind. This is the legal situation with humans,
and so it may be with artificial intelligence systems. The more complicated and advanced
the artificial intelligence system, the higher the probability of inner defects.
The defects may be mainly in software, but also in hardware. Some inner defects
cause no malfunction of the artificial intelligence system; others do. If the inner
defect causes a cognitive or volitive malfunction of the artificial intelligence
system, it matches the criminal law definition of insanity. Since strong artificial
intelligence systems are capable of forming all general intent components, and
these components comprise cognitive and volitive elements under the structure of
general intent, it is most probable that some inner defects may cause malfunction
of these capabilities.
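The functional, non-categorical test described above may be sketched as follows; the function and its parameters are illustrative assumptions, not legal terms of art:

```python
def insanity_defense_applies(has_inner_defect: bool,
                             cognitive_malfunction: bool,
                             volitive_malfunction: bool) -> bool:
    """Functional test: any inner defect (software or hardware, classified
    as a 'mental disease' or not) qualifies, provided it actually causes
    a cognitive or a volitive malfunction."""
    return has_inner_defect and (cognitive_malfunction or volitive_malfunction)

# A hardware fault that impairs the awareness process (cognitive malfunction):
print(insanity_defense_applies(True, True, False))   # True
# An inner defect with no cognitive or volitive effect does not qualify:
print(insanity_defense_applies(True, False, False))  # False
```

Notably, no list of recognized defects appears anywhere in the test; only the functional consequences of the defect matter.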
When an inner defect causes such a malfunction, it matches the definition of
insanity in criminal law. Partial insanity would be applicable when the cognitive
or volitive malfunctions are not complete. Temporary insanity would be applicable
when these malfunctions affect the offender (human or artificial intelligence sys-
tem) for a limited period.44
One may argue that this is not the typical character of the insane person, as it
does not match the popular concept of insanity as drawn by psychiatry,
culture, folklore, literature and even movies. However, it is still insanity for
criminal law. First, the criminal law definition of insanity differs from its
definitions in psychiatry, culture, etc., and this definition is the one used for human insanity;
why should another definition be used just for artificial intelligence systems?
Second, criminal law does not require a mental disease for human insanity; why
should one be required for artificial intelligence system insanity?
ACTION AND VALUE IN CRIMINAL LAW 279 (Stephen Shute, John Gardner and Jeremy Horder
eds., 2003).
44
People v. Sommers, 200 P.3d 1089 (2008); McNeil v. United States, 933 A.2d 354 (2007);
Rangel v. State, 2009 Tex.App. 1555 (2009); Commonwealth v. Shumway, 72 Va.Cir. 481 (2007).
Criminal law definitions may seem too technical, but, technical or not, if
fulfilled by the offender, they are applied. If both human offenders and artificial
intelligence system offenders have the capability of fulfilling the insanity require-
ments in criminal law, there is no legitimate reason to make the general defense of
insanity applicable to just one type of offender. Consequently, it seems that the
general defense of insanity may be applicable to artificial intelligence systems.
5.2.1.4 Intoxication
Could an artificial intelligence system be considered intoxicated for the purposes of
criminal liability? The effects of intoxicating materials have been known to humanity
since prehistory. In early ancient law, the term "intoxication" referred to
drunkenness as a result of exposure to alcohol. Later, when the intoxicating effects
of other materials became known to humanity, the term was expanded.45 Until
the beginning of the nineteenth century intoxication was not accepted as a general
defense. The Archbishop of Canterbury wrote in the seventh century that imposing
criminal liability upon a drunk person who committed homicide is justified for two
reasons: first, the very drunkenness, and second, the homicide of a Christian
person.46
Drunkenness was conceptualized as a religious and moral sin; therefore, it was
considered immoral to grant offenders an in personam defense from criminal liability
for being drunk.47 Only in the nineteenth century did courts engage in a serious
legal discussion of intoxication. This discussion was enabled by legal and
scientific developments, which created the understanding that an
intoxicated person is not necessarily mentally competent for criminal liability (non
compos mentis). From the very beginning of the legal evaluation of intoxication in
the nineteenth century, the courts distinguished between cases of voluntary and involuntary
intoxication.48
Voluntary intoxication was considered part of the offender's fault; therefore, it
could not be the basis for an in personam defense from criminal liability. However,
voluntary intoxication could be considered a relevant circumstance for the
imposition of a more lenient punishment.49 In addition, voluntary intoxication could
refute premeditation in first degree murder cases.50 Courts have
45
R. U. Singh, History of the Defence of Drunkenness in English Criminal Law, 49 LAW Q. REV.
528 (1933).
46
THEODORI LIBER POENITENTIALIS, III, 13 (668–690).
47
Francis Bowes Sayre, Mens Rea, 45 HARV. L. REV. 974, 1014–1015 (1932).
48
WILLIAM OLDNALL RUSSELL, A TREATISE ON CRIMES AND MISDEMEANORS 8 (1843, 1964).
49
Marshall, (1830) 1 Lewin 76, 168 E.R. 965.
50
Pearson, (1835) 2 Lewin 144, 168 E.R. 1108; Thomas, (1837) 7 Car. & P. 817, 173 E.R. 356:
Drunkenness may be taken into consideration in cases where what the law deems sufficient
provocation has been given, because the question is, in such cases, whether the fatal act is to
be attributed to the passion of anger excited by the previous provocation, and that passion is
more easily excitable in a person when in a state of intoxication than when he is sober.
distinguished cases by the reasons for entering into intoxication.
Entering intoxication voluntarily with the intent to commit an offense was treated
differently from entering intoxication voluntarily for no criminal reason.51
However, involuntary intoxication has been recognized and accepted as an in
personam defense from criminal liability.52 Involuntary intoxication is a situation
imposed upon the individual; therefore, it is neither just nor fair to impose criminal
liability in such situations. Thus, the general defense of intoxication has two main
functions. When the intoxication is involuntary, it prevents the imposition of criminal
liability. When it is voluntary, but not aimed at the commission of an offense, it
may be considered grounds for a more lenient punishment.
The modern understanding of intoxication includes any mental effect caused
by an external material (e.g., chemicals). The required mental effect matches
the structure of general intent, discussed above. Consequently, the effect may be
cognitive or volitive.53 The intoxicating effect may relate to the offender's percep-
tion, understanding of the factual reality or awareness (cognitive effect), or it may
relate to the offender's will, up to an irresistible impulse (volitive effect). Intoxication
is initiated by an external material. There is no closed list of materials, and they
may be illegal (e.g., heroin, cocaine, etc.) or perfectly legal (e.g., alcohol, sugar,
pure water, etc.).
The effect of external materials on the individual is subjective. One person
may be affected very differently than another by the same material in the same
quantity. Sugar may cause hyperglycemia in one person, whereas another is barely
affected. Pure water may disturb the electrolyte balance in one person, whereas another
is barely affected, etc. Cases of addiction have raised the question whether the absence of
the external material may be considered a cause of intoxication. For instance,
when a drug addict is undergoing withdrawal, he may experience
cognitive and volitive malfunctions due to the absence of the drug.
Consequently, the cognitive and volitive effects of narcotic addiction were considered
intoxication for the purposes of criminal law.54 As to the question of voluntary and involuntary
intoxication in these cases (where the addict wanted to begin the withdrawal procedure),
51
Meakin, (1836) 7 Car. & P. 297, 173 E.R. 131; Meade, [1909] 1 K.B. 895; Pigman v. State,
14 Ohio 555 (1846); People v. Harris, 29 Cal. 678 (1866); People v. Townsend, 214 Mich.
267, 183 N.W. 177 (1921).
52
Derrick Augustus Carter, Bifurcations of Consciousness: The Elimination of the Self-Induced
Intoxication Excuse, 64 MO. L. REV. 383 (1999); Jerome Hall, Intoxication and Criminal Respon-
sibility, 57 HARV. L. REV. 1045 (1944); Monrad G. Paulsen, Intoxication as a Defense to Crime,
1961 U. ILL. L. F. 1 (1961).
53
State v. Cameron, 104 N.J. 42, 514 A.2d 1302 (1986); State v. Smith, 260 Or. 349, 490 P.2d
1262 (1971); People v. Leonardi, 143 N.Y. 360, 38 N.E. 372 (1894); Tate v. Commonwealth,
258 Ky. 685, 80 S.W.2d 817 (1935); Roberts v. People, 19 Mich. 401 (1870); People v. Kirst,
168 N.Y. 19, 60 N.E. 1057 (1901); State v. Robinson, 20 W.Va. 713, 43 Am.Rep. 799 (1882).
54
Addison M. Bowman, Narcotic Addiction and Criminal Responsibility under Durham, 53 GEO.
L. J. 1017 (1965); Herbert Fingarette, Addiction and Criminal Responsibility, 84 YALE L. J.
413 (1975); Lionel H. Frankel, Narcotic Addiction, Criminal Responsibility and Civil Commit-
ment, 1966 UTAH L. REV. 581 (1966); Peter Barton Hutt and Richard A. Merrill, Criminal
it is the cause of the addiction that is examined as voluntary or involuntary, not the
cause of the withdrawal.55 Thus, intoxication is examined through a functional examina-
tion of its cognitive and volitive effects upon the specific individual, regardless of the
exact identity of the external material that initiated those effects.
On that basis, the question is whether the general defense of intoxication is applica-
ble to artificial intelligence systems.
As aforesaid, the general defense of intoxication requires an external material
(e.g., the presence or absence of a certain chemical), which affects the inner
process of consciousness through cognitive or volitive effects. For example, suppose
a manufacturer of artificial intelligence systems wanted to reduce production
expenses and therefore used cheap materials. After a few months, a process of
corrosion began in some of the internal components of the artificial intelligence
systems. Consequently, the transmission of information was impaired in a way that
affected the awareness process. Technically, this process is very similar to the effect of
alcohol on human neurons.
Another example: a military artificial intelligence system is designed to function
in civilian zones after a chemical weapon attack. When such an attack occurred (during
training or as a real attack), the artificial intelligence system was activated. After
exposure to the gas, parts of its hardware were damaged and consequently began to
malfunction. This malfunction affected the identification process of the artificial
intelligence system, and it therefore began to attack innocent civilians. Analysis of the
artificial intelligence system's records afterwards showed that the exposure to the
gas was the only reason for attacking civilians. If this artificial intelligence system
had been human, any court would have accepted the general defense of
intoxication and exonerated the defendant.
Why should it not be the same for an artificial intelligence system? Examined
functionally, when there is no difference between the effects of external materials
on humans and on artificial intelligence systems, there is no legitimate reason
to restrict the applicability of the general defense of intoxication to only one of them.
Strong artificial intelligence systems may possess both cognitive and volitive inner
processes. These processes may be affected by various factors. When they are
affected by external materials, as demonstrated above, the requirements of
intoxication as a general defense are fulfilled.
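The two functions of the intoxication defense described above, exculpation for involuntary intoxication and mitigation for voluntary intoxication not aimed at an offense, can be sketched as follows; function and parameter names are illustrative only:

```python
def intoxication_outcome(external_material: bool,
                         cognitive_or_volitive_effect: bool,
                         voluntary: bool,
                         aimed_at_offense: bool) -> str:
    # Threshold: an external material must actually cause a cognitive
    # or volitive effect, whatever the material's identity.
    if not (external_material and cognitive_or_volitive_effect):
        return "intoxication defense inapplicable"
    if not voluntary:
        return "no criminal liability"  # involuntary intoxication exculpates
    if not aimed_at_offense:
        return "liable, but a more lenient punishment may be considered"
    return "fully liable"  # intoxication entered in order to offend

# The corrosion example: the system never chose the impairing material.
print(intoxication_outcome(True, True, voluntary=False, aimed_at_offense=False))
# → no criminal liability
```

The sketch deliberately ignores what the external material is, mirroring the text's point that only the functional cognitive or volitive effect matters.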
As a result, if exposure to certain materials affects the cognitive and volitive
processes of an artificial intelligence system in a way that causes the system to
commit an offense, there is no reason to prevent the applicability of intoxica-
tion as a general defense. It may be true that artificial intelligence systems cannot
become drunk from alcohol or suffer illusions and delusions from drugs, but these effects
Responsibility and the Right to Treatment for Intoxication and Alcoholism, 57 GEO. L. J.
835 (1969).
55
Powell v. Texas, 392 U.S. 514, 88 S.Ct. 2145, 20 L.Ed.2d 1254 (1968); United States v. Moore,
486 F.2d 1139 (D.C.Cir.1973); State v. Herro, 120 Ariz. 604, 587 P.2d 1181 (1978); State v. Smith,
219 N.W.2d 655 (Iowa 1974); People v. Davis, 33 N.Y.2d 221, 351 N.Y.S.2d 663, 306 N.E.2d
787 (1973).
are not the only possible effects related to intoxication. If a human soldier attacks
his comrades due to exposure to a chemical gas attack, his argument for intoxication is
accepted. If this exposure has the same substantive and functional effects upon both
humans and artificial intelligence systems, there is no legitimate reason to make the
general defense of intoxication applicable to just one type of offender. Conse-
quently, it seems that the general defense of intoxication may be applicable to
artificial intelligence systems.
56
William G. Lycan, Introduction, MIND AND COGNITION 3–13 (William G. Lycan ed., 1990).
If people already begin to doubt their awareness of the factual reality, they must
consider, in addition, the problem of perspective. Even if people assume that they
experience the factual reality, they may experience it only
through their subjective perspective, which is not necessarily the only perspective
of that very factual reality. For instance, if one cuts a triangular shape from cardboard,
it may be seen as a triangle when viewed from its flat side. However, it may also
be seen as a narrow strip if it is rotated 90° about the main axis crossing its flat surface.
The problem of perspective may be crucial if different interpretations are added to
different perspectives.
For instance, two persons see and hear a man, holding a long knife, telling a woman
that he is about to kill her. One person would understand this as a serious
threat to the woman's life and call the police or attempt to save her by attacking the
man, whereas the other would understand it as part of a show (e.g., street theatre)
requiring no intervention on his part. On that basis, the deep question for criminal law
in this context is what should be the factual basis of criminality: the factual reality
as it actually occurred, or what the offender believed the factual reality to be,
although it may not have actually occurred.
For example, a defendant in a rape case admits that he and the complainant had full
sexual intercourse. The defendant proves that he truly believed that it was
consensual, and the prosecution proves that in fact the complainant did not
consent. The court believes both of them, and both of them are telling the truth.
Should the court acquit or convict the defendant? Modern criminal law since the
seventeenth century has preferred the subjective perspective of the individual
(as defendant) on the factual reality over the factual reality itself.57 The general
concept is that the individual can legitimately be held criminally liable only for
"facts" he believed he knew, whether or not they actually occurred in the factual
reality.58
The limitations on this concept in most legal systems were evidentiary, intended to
ensure that the defendant's argument is true and authentic. If the
argument is considered true and authentic, it becomes the basis for assessing
criminal liability. Thus, in the above example of rape, the defendant is to be
exonerated, since he truly believed the intercourse was consensual. If the defendant's
perspective negates the mental element requirement, the factual mistake works as a general
defense that leads to the defendant's acquittal.59
57
Edwin R. Keedy, Ignorance and Mistake in the Criminal Law, 22 HARV. L. REV. 75, 78 (1909);
Levett, (1638) Cro. Car. 538.
58
State v. Silveira, 198 Conn. 454, 503 A.2d 599 (1986); State v. Molin, 288 N.W.2d
232 (Minn.1979); State v. Sexton, 160 N.J. 93, 733 A.2d 1125 (1999).
59
State v. Sawyer, 95 Conn. 34, 110 A. 461 (1920); State v. Cude, 14 Utah 2d 287, 383 P.2d
399 (1963); Ratzlaf v. United States, 510 U.S. 135, 114 S.Ct. 655, 126 L.Ed.2d 615 (1994); Cheek
v. United States, 498 U.S. 192, 111 S.Ct. 604, 112 L.Ed.2d 617 (1991); Richard H. S. Tur,
Subjectivism and Objectivism: Towards Synthesis, ACTION AND VALUE IN CRIMINAL LAW
213 (Stephen Shute, John Gardner and Jeremy Horder eds., 2003).
The general defense of factual mistake is applicable not only in general intent
offenses, but also in negligence and strict liability offenses. The only difference is
in the required type of mistake. In general intent offenses, any authentic mistake
negates awareness of the factual reality and is considered adequate for the general
defense. In negligence offenses, the mistake must also be reasonable, for the
defendant to be considered to have acted reasonably.60 In strict liability offenses, the
mistake must be inevitable, occurring although the defendant has taken all reasonable
measures to prevent it.61 On that basis, the question is whether the general defense
of factual mistake is applicable to artificial intelligence systems.
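The graduated standard of mistake described in this paragraph can be sketched as follows; the offense-type labels and function name are illustrative:

```python
def factual_mistake_excuses(offense_type: str,
                            authentic: bool,
                            reasonable: bool,
                            inevitable: bool) -> bool:
    """The stricter the form of liability, the stronger the mistake must be."""
    if not authentic:
        return False  # a feigned mistake never excuses
    if offense_type == "general_intent":
        return True            # any authentic mistake negates awareness
    if offense_type == "negligence":
        return reasonable      # the mistake must also be reasonable
    if offense_type == "strict_liability":
        return inevitable      # inevitable despite all reasonable measures
    raise ValueError("unknown offense type")

# An authentic but unreasonable mistake excuses general intent offenses only:
print(factual_mistake_excuses("general_intent", True, False, False))  # True
print(factual_mistake_excuses("negligence", True, False, False))      # False
```

The soldier example below, an authentic, reasonable and inevitable mistake, would satisfy the test for every offense type.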
Both humans and artificial intelligence systems may experience difficulties,
errors and malfunctions in the process of becoming aware of the factual reality. These
difficulties may arise both in the process of absorbing factual data through the senses
and in the process of forming a relevant general image of that data. In most cases the
result of such a malfunctioning process is the creation of an inner factual image that
differs from the factual reality, as the court understands it. This is a factual mistake
concerning the factual reality. Factual mistakes are part of everyday human life, and
they form a wide basis for human behavior.
In some cases, the factual mistake of either a human or an artificial intelligence
system may lead to the commission of an offense. According to the
factual reality, the act is considered an offense, but not according to the subjective
inner factual image of the individual, which involves a factual mistake. For instance,
a human soldier mistakenly identifies his friend as an enemy soldier and shoots him.
The shot soldier, for unknown reasons, wore the enemy's uniform, spoke the enemy's
language, and looked as if he intended to attack the shooting soldier. Although he was
called upon to identify himself, he ignored the demand. In this case, the mistake is
authentic, reasonable and inevitable.
If the shooting soldier is human and argues factual mistake, he would
probably be exonerated (if indicted at all). Now, let us assume that the soldier is not
human, but a strong artificial intelligence system. Why should the criminal law treat
the artificial intelligence system soldier differently than the human soldier? The
error for both the human and the artificial intelligence system soldier is substantively
and functionally identical. The factual mistake of both humans and artificial intelligence
systems causes the same substantive and functional effects on cognition and on the
perception of factual reality. As a result, there is no reason to prevent the
applicability of factual mistake as a general defense to artificial intelligence systems,
in the very same way it applies to humans.
Although computers may have the appeal of not making mistakes, they do make
them. The probability of mistake in the mathematical calculations of a computer may be
low, but if the computer absorbs mistaken factual data, the final figures and calculations
may be wrong. Their calculations of the required, possible and
60
United States v. Lampkins, 4 U.S.C.M.A. 31, 15 C.M.R. 31 (1954).
61
People v. Vogel, 46 Cal.2d 798, 299 P.2d 850 (1956); Long v. State, 44 Del. 262, 65 A.2d
489 (1949).
impossible courses of conduct are affected accordingly, in the same way as with humans.62
This is also the case for the general defense of factual mistake. If factual mistakes have
the same substantive and functional effects upon both humans and artificial intelli-
gence systems, there is no legitimate reason to make the general defense of factual
mistake applicable to just one type of offender. Consequently, it seems that the
general defense of factual mistake may be applicable to artificial intelligence
systems.
62
Fernand N. Dutile and Harold F. Moore, Mistake and Impossibility: Arranging Marriage
Between Two Difficult Partners, 74 NW. U. L. REV. 166 (1980).
63
Douglas Husak and Andrew von Hirsch, Culpability and Mistake of Law, ACTION AND VALUE IN
CRIMINAL LAW 157, 161–167 (Stephen Shute, John Gardner and Jeremy Horder eds., 2003).
64
Digesta, 22.6.9: “juris quidam ignorantiam cuique nocere, facti vero ignorantiam non nocere”.
65
See, e.g., Brett v. Rigden, (1568) 1 Plowd. 340, 75 E.R. 516; Mildmay, (1584) 1 Co. Rep. 175a,
76 E.R. 379; Manser, (1584) 2 Co. Rep. 3, 76 E.R. 392; Vaux, (1613) 1 Blustrode
197, 80 E.R. 885; Bailey, (1818) Russ. & Ry. 341, 168 E.R. 835; Esop, (1836) 7 Car. & P. 456,
173 E.R. 203; Crawshaw, (1860) Bell. 303, 169 E.R. 1271; Schuster v. State, 48 Ala. 199 (1872).
In the nineteenth century, when the culpability requirement in criminal law devel-
oped dramatically, a balance was required. Consequently, the legal
mistake was required to be made in good faith (bona fide),66 and to meet the highest
standard of mental element, i.e., strict liability. According to this standard, the
required mistake is an inevitable legal mistake, one made although all reasonable measures
have been taken to prevent it.67
This high standard of mistake is required in relation to all types of offenses,
regardless of their mental element requirement. Thus, the general standard for legal
mistake is higher than that for factual mistake. The main debate in courts in this
context is whether the offender has indeed taken all reasonable measures to prevent
the legal mistake. That includes questions of reasonable reliance upon statutes,
judicial decisions,68 official interpretations of the law (including pre-rulings),69 and
the advice of private counsel.70 On that basis, the question is whether the general
defense of legal mistake is applicable to artificial intelligence systems.
Technically, if the relevant entity, human or artificial intelligence system, has
the capability of fulfilling the mental element requirement of strict liability
offenses, that entity is capable of arguing legal mistake as a general defense.
Since strong artificial intelligence systems have the capability of fulfilling the
mental element requirement of strict liability offenses, they possess the capabilities
that make the general defense of legal mistake relevant to them. The absence of
legal knowledge on a specific issue may be proven through the artificial intelligence
system's knowledge records, and thus the good faith requirement is fulfilled as well.
The basic meaning of the applicability of the legal mistake defense to artificial
intelligence systems is that the system has not been restricted by any formal legal
restriction, and it acted accordingly. If the artificial intelligence system has a
software mechanism that searches for such restrictions, and although the mechanism
was activated no such legal restriction was found, this general defense would be relevant.
However, the system’s in personam defense from criminal liability does not
function as an in personam defense from criminal liability for the programmers
66
Forbes, (1835) 7 Car. & P. 224, 173 E.R. 99; Parish, (1837) 8 Car. & P. 94, 173 E.R. 413; Allday,
(1837) 8 Car. & P. 136, 173 E.R. 431; Dotson v. State, 6 Cold. 545 (1869); Cutter v. State,
36 N.J.L. 125 (1873); Squire v. State, 46 Ind. 459 (1874).
67
State v. Goodenow, 65 Me. 30 (1876); State v. Whitoomb, 52 Iowa 85, 2 N.W. 970 (1879).
68
Lutwin v. State, 97 N.J.L. 67, 117 A. 164 (1922); State v. Whitman, 116 Fla. 196, 156
So. 705 (1934); United States v. Mancuso, 139 F.2d 90 (3rd Cir.1943); State v. Chicago, M. &
St.P.R. Co., 130 Minn. 144, 153 N.W. 320 (1915); Coal & C.R. v. Conley, 67 W.Va.
129, 67 S.E. 613 (1910); State v. Striggles, 202 Iowa 1318, 210 N.W. 137 (1926); United States
v. Albertini, 830 F.2d 985 (9th Cir.1987).
69
State v. Sheedy, 125 N.H. 108, 480 A.2d 887 (1984); People v. Ferguson, 134 Cal.App. 41, 24
P.2d 965 (1933); Andrew Ashworth, Testing Fidelity to Legal Values: Official Involvement and
Criminal Justice, 63 MOD. L. REV. 663 (2000); Glanville Williams, The Draft Code and Reliance
upon Official Statements, 9 LEGAL STUD. 177 (1989).
70
Rollin M. Perkins, Ignorance and Mistake in Criminal Law, 88 U. PA. L. REV. 35 (1940).
or users of the system. If these persons could have restricted the system to legal
activity, but have not done so, they may be criminally liable for the offense through
perpetration-through-another liability or probable consequence liability.
For instance, an artificial intelligence system absorbs factual data about certain
persons, and it is required to analyze their personalities accordingly and publish the
analysis in a certain way. In one case the publication is considered criminal libel. If
the records of the system show that the system was not bound by any restriction on
libelous publications, and that it either had no mechanism for searching for such
restrictions or, having such a mechanism, found no such restriction, the system
would not be criminally liable for the libel. However, the manufacturer,
programmers and users may be criminally liable for the libel as perpetrators-
through-another or through probable consequence liability.
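The decision rule described above can be sketched as a minimal check over the system's records. The record structure and the function name below are illustrative assumptions, not terminology from the text:

```python
from dataclasses import dataclass

@dataclass
class ActionRecord:
    """Hypothetical knowledge-record entry for one action by an AI system."""
    search_mechanism_activated: bool  # did the system search for legal restrictions?
    restriction_found: bool           # did that search return any restriction?

def legal_mistake_defense_relevant(record: ActionRecord) -> bool:
    # The defense is relevant only when the restriction-search mechanism was
    # activated and no legal restriction was found; the records themselves
    # then evidence the good-faith requirement.
    return record.search_mechanism_activated and not record.restriction_found

# Libel example: the system searched for restrictions on libelous
# publications and found none, so the defense would be relevant to it,
# though not to its programmers or users.
print(legal_mistake_defense_relevant(ActionRecord(True, False)))   # True
print(legal_mistake_defense_relevant(ActionRecord(True, True)))    # False
```

The sketch deliberately separates the system's own in personam defense from the liability of programmers and users, which the check says nothing about.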
An artificial intelligence system may have very wide knowledge of many kinds of
issues, but not every system necessarily contains legal knowledge on every legal
issue. The system may search for legal restrictions, if designed to do so, but will
not necessarily find them. This is the case for the general defense of legal mistake.
If legal mistakes have the same substantive and functional effects upon both
humans and artificial intelligence systems, there is no legitimate reason to make the
general defense of legal mistake applicable to only one type of offender. Conse-
quently, it seems that the general defense of legal mistake may be applicable to
artificial intelligence systems.
committed the relevant offense during their official duty, for the fulfillment of their
official duty, and in good faith (bona fide), i.e., not exploiting the immunity for the
deliberate commission of other criminal offenses. On that basis, the question is
whether the general defense of substantive immunity is applicable to artificial
intelligence systems.
Let us assume, in the above example (a fireman saving a young woman from
her burning apartment), that the fireman is human. That fireman, if indicted for
causing property damage, would probably have argued for substantive immunity.
The court would probably have accepted this argument and acquitted him immediately.
It may be assumed that the fire was too heavy to risk human life, and therefore an
artificial intelligence system was sent to save the woman's life. If the human
fireman would have been granted such immunity, why would it not be granted to the
artificial intelligence fireman? At the point of breaking the window, the
artificial intelligence system, if equipped with strong artificial intelligence software,
makes the very same decision the human fireman does. Why would they be different
as to their criminal liability?
If all conditions for granting this general defense are met, there is no reason to use
different standards for humans and artificial intelligence systems. Artificial intelli-
gence systems are already in use for official duties (e.g., as guards), and inevitably
they sometimes have to commit offenses in fulfilling those duties. For instance,
prison guards might physically assault escaping prisoners to prevent the
escape. If such situations have the same substantive and functional effects upon
both humans and artificial intelligence systems, there is no legitimate reason to
make the general defense of substantive immunity applicable to only one type of
offender. Consequently, it seems that the general defense of substantive immunity
may be applicable to artificial intelligence systems.
In rem negative fault elements are general defenses that relate to the characteristics
of the factual event (in rem), as noted above.71 The applicability of in rem defenses
to artificial intelligence criminal liability raises the question of the capability of
artificial intelligence systems to be part of such situations as self-defense, necessity
or duress. This raises deep questions. For instance, would it be legitimate to enable
an artificial intelligence system to defend itself from an attack? What if the attack is
driven by humans—would it be legitimate to let an artificial intelligence system
attack humans for the artificial intelligence system's sake?
In general, since in rem defenses are in rem general defenses, the personal
characteristics of the individual (human or artificial intelligence system) should
be considered insignificant. However, these general defenses were designed for
humans, with awareness of human weaknesses and in response to those weaknesses.
For instance, self-defense was designed to protect the human instinct for life. Is this
instinct relevant to artificial intelligence systems, which are machines? If not, why
would self-defense be relevant to machines? The applicability of in rem defenses
as general defenses to artificial intelligence systems is explored below.
71
Above at Sect. 5.1.
5.2.2.1 Self-Defense
Self-defense is one of the most ancient defenses in human culture. Its basic essence
is to partly reduce the thorough applicability of the general concept of society's
monopoly over power.72 According to this concept, only society (i.e., the state as
such) has the authority to use force upon individuals. No individual is authorized
to do that. Consequently, when one individual has a dispute with another, he may
not use force, but must apply to the state (e.g., through the courts, police, etc.) so
that the state solves the problem and uses force. This concept keeps power out of
the individuals' hands.
However, for this concept to be effective, the state's representatives must be
present at all times in all places. If one individual is attacked by another in a dark
corner of the street, he may not retaliate, but must wait for the state's representatives.
They may come, but they may also be unavailable at that point in time. In order to
enable individuals to protect themselves from attackers in these kinds of situations,
society must retreat, partly, from that concept. One retreat is through the acceptance
of self-defense as a general defense in criminal law. Self-defense enables the
individual to protect certain values by using force outside the concept of society's
monopoly over power.
Being in a situation that requires self-defense is considered to negate the
individual's fault required for the imposition of criminal liability. This concept
has been accepted by legal systems around the world since ancient times.73 In time
this defense became wider and more accurate. Its modern basis is to enable the individ-
ual to repel a forthcoming attack upon a legitimate interest. Consequently, there are
several conditions for entering the sphere of this general defense:
72
Chas E. George, Limitation of Police Powers, 12 LAW. & BANKER & S. BENCH & B. REV.
740 (1919); Kam C. Wong, Police Powers and Control in the People’s Republic of China: The
History of Shoushen, 10 COLUM. J. ASIAN L. 367 (1996); John S. Baker Jr., State Police Powers and
the Federalization of Local Crime, 72 TEMP. L. REV. 673 (1999).
73
Dolores A. Donovan and Stephanie M. Wildman, Is the Reasonable Man Obsolete? A Critical
Perspective on Self-Defense and Provocation, 14 LOY. L. A. L. REV. 435, 441 (1981); Joshua
Dressler, Rethinking Heat of Passion: A Defense in Search of a Rationale, 73 J. CRIM. L. &
CRIMINOLOGY 421, 444–450 (1982); Kent Greenawalt, The Perplexing Borders of Justification and
Excuse, 84 COLUM. L. REV. 1897, 1898, 1915–1919 (1984).
(a) The protected interest should be legitimate. Legitimate interests are life,
freedom, body and property74—of the individual or of other individuals.75
No previous introduction between them is required.76 Thus, self-defense is
not entirely “self”;
(b) The protected interest should be attacked illegitimately.77 When a police-
man attacks the individual to arrest him by a warrant, this is a legitimate
attack78;
(c) The protected interest should be in an immediate and actual danger79;
(d) The act (self-defense) should repel the attack, be proportional to it,80
and immediate81; and
(e) The defender did not control the attack or the conditions for its occurrence
(actio libera in causa).82
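The five conditions can be read as a single conjunctive test: failing any one of them takes the act outside the defense. A minimal sketch under that reading (the parameter names are illustrative, not the text's terminology):

```python
def self_defense_applies(
    interest_legitimate: bool,                    # (a) life, freedom, body or property
    attack_illegitimate: bool,                    # (b) the attack itself is unlawful
    danger_immediate_and_actual: bool,            # (c)
    act_repelling_proportional_immediate: bool,   # (d)
    no_actio_libera_in_causa: bool,               # (e) defender did not engineer the attack
) -> bool:
    # All five conditions must hold together; any single failure
    # leaves the repelling act outside the sphere of the defense.
    return all([
        interest_legitimate,
        attack_illegitimate,
        danger_immediate_and_actual,
        act_repelling_proportional_immediate,
        no_actio_libera_in_causa,
    ])

# A lawful arrest under a warrant fails condition (b), so repelling it
# is not self-defense even if every other condition holds.
print(self_defense_applies(True, False, True, True, True))  # False
```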
If all these conditions are fulfilled, the individual is considered to be acting under
self-defense, and accordingly no criminal liability is imposed upon him for the
commission of the offense. Thus, not every time an attack is repelled may it be
considered self-defense, but only when the repelling act fulfills the above
conditions in full. On that basis, the question is whether the general defense of
self-defense is applicable to artificial intelligence systems. The answer for this
74
State v. Brosnan, 221 Conn. 788, 608 A.2d 49 (1992); State v. Gallagher, 191 Conn. 433, 465
A.2d 323 (1983); State v. Nelson, 329 N.W.2d 643 (Iowa 1983); State v. Farley, 225 Kan. 127, 587
P.2d 337 (1978).
75
Commonwealth v. Monico, 373 Mass. 298, 366 N.E.2d 1241 (1977); Commonwealth
v. Johnson, 412 Mass. 368, 589 N.E.2d 311 (1992); Duckett v. State, 966 P.2d 941 (Wyo.1998);
People v. Young, 11 N.Y.2d 274, 229 N.Y.S.2d 1, 183 N.E.2d 319 (1962); Batson v. State,
113 Nev. 669, 941 P.2d 478 (1997); State v. Wenger, 58 Ohio St.2d 336, 390 N.E.2d
801 (1979); Moore v. State, 25 Okl.Crim. 118, 218 P. 1102 (1923).
76
Williams v. State, 70 Ga.App. 10, 27 S.E.2d 109 (1943); State v. Totman, 80 Mo.App.
125 (1899).
77
Lawson, [1986] V.R. 515; Daniel v. State, 187 Ga. 411, 1 S.E.2d 6 (1939).
78
John Barker Waite, The Law of Arrest, 24 TEX. L. REV. 279 (1946).
79
People v. Williams, 56 Ill.App.2d 159, 205 N.E.2d 749 (1965); People v. Minifie, 13 Cal.4th
1055, 56 Cal.Rptr.2d 133, 920 P.2d 1337 (1996); State v. Coffin, 128 N.M. 192, 991 P.2d
477 (1999).
80
State v. Philbrick, 402 A.2d 59 (Me.1979); State v. Havican, 213 Conn. 593, 569 A.2d 1089
(1990); State v. Harris, 222 N.W.2d 462 (Iowa 1974); Judith Fabricant, Homicide in Response to a
Threat of Rape: A Theoretical Examination of the Rule of Justification, 11 GOLDEN GATE U. L. REV.
945 (1981).
81
Celia Wells, Battered Woman Syndrome and Defences to Homicide: Where Now?, 14 LEGAL
STUD. 266 (1994); Aileen McColgan, In Defence of Battered Women who Kill, 13 OXFORD J. LEGAL
STUD. 508 (1993); Joshua Dressler, Battered Women Who Kill Their Sleeping Tormenters:
Reflections on Maintaining Respect for Human Life while Killing Moral Monsters, CRIMINAL
LAW THEORY – DOCTRINES OF THE GENERAL PART 259 (Stephen Shute and A. P. Simester eds., 2005).
82
State v. Moore, 158 N.J. 292, 729 A.2d 1021 (1999); State v. Robinson, 132 Ohio App.3d
830, 726 N.E.2d 581 (1999).
83
Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231, 1255–
1258 (1992).
84
ISAAC ASIMOV, I, ROBOT 40 (1950).
85
See, e.g., United States v. Allegheny Bottling Company, 695 F.Supp. 856 (1988); John
C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry into the Problem
of Corporate Punishment, 79 MICH. L. REV. 386 (1981).
86
JUDITH JARVIS THOMSON, RIGHTS, RESTITUTION AND RISK: ESSAYS IN MORAL THEORY 33–48 (1986);
Sanford Kadish, Respect for Life and Regard for Rights in the Criminal Law, 64 CAL. L. REV.
871 (1976); Patrick Montague, Self-Defense and Choosing Between Lives, 40 PHIL. STUD.
207 (1981); Cheyney C. Ryan, Self-Defense, Pacificism, and the Possibility of Killing, 93 ETHICS
508 (1983).
5.2.2.2 Necessity
Could an artificial intelligence system be considered as acting under necessity in the
criminal law context? Necessity is an in rem defense from the same “family” as self-
defense. Both partly reduce the thorough applicability of the general concept
of society's monopoly over power, discussed above.87 The major difference
87
Above at Sect. 5.2.2.1.
between self-defense and necessity is in the identity of the reaction’s object. In self-
defense the defender’s reaction is against the attacker, whereas in necessity it is
against an innocent object (innocent person, property, etc.). The innocent object is
not necessarily connected to the cause of the reaction.
For instance, two persons are sailing in a boat on the high seas. The boat crashes
into an iceberg and sinks. Both persons survive on an improvised raft,
but with no water or food. After a few days, one of them eats the other in order to
survive.88 The eaten person did not attack the eater and was not to blame for the
crash; he was completely innocent. So was the eater, but he knew that if he did not
eat the other person, he would certainly die. If the eater is indicted for the murder of
the other person, he may argue for necessity. Self-defense is not relevant in this case,
since the eaten person did not perform any attack against the eater.
The traditional approach towards necessity is that under the right circumstances
it may justify the commission of offenses (quod necessitas non habet legem).89 The
traditional reason is the criminal law's understanding of human nature's
weaknesses. The individual who acts under necessity is considered to be choosing
the lesser of two evils, from his own point of view.90 In the above example, if the
eater chooses not to eat the other person, they both would die. If he chooses to eat,
only one of them would die. Both situations are “evil”, but the lesser “evil” of the
two is the one in which one person survives. The victim of necessity is not considered
to blame for anything, but innocent, and still the act would be justified.91
Since the act of necessity is a self-help act of the individual, performed when the
authorities are not available, the general defense of necessity partly reduces the
thorough applicability of the general concept of society's monopoly over power,
discussed above. Being in a situation that requires an act of necessity is considered
to negate the individual's fault required for the imposition of criminal liability. This
concept has been accepted by legal systems around the world since ancient times.92
In time this defense became wider and more accurate. Its modern basis is to enable the
individual to protect legitimate interests by choosing the lesser of two evils, as
88
See, e.g., United States v. Holmes, 26 F. Cas. 360, 1 Wall. Jr. 1 (1842); Dudley and Stephens,
[1884] 14 Q.B. D. 273.
89
W. H. Hitchler, Necessity as a Defence in Criminal Cases, 33 DICK. L. REV. 138 (1929).
90
Edward B. Arnolds and Norman F. Garland, The Defense of Necessity in Criminal Law: The
Right to Choose the Lesser Evil, 65 J. CRIM. L. & CRIMINOLOGY 289 (1974); Lawrence P. Tiffany
and Carl A. Anderson, Legislating the Necessity Defense in Criminal Law, 52 DENV. L. J.
839 (1975); Rollin M. Perkins, Impelled Perpetration Restated, 33 HASTINGS L. J. 403 (1981).
91
Long v. Commonwealth, 23 Va.App. 537, 478 S.E.2d 324 (1996); State v. Crocker, 506 A.2d
209 (Me.1986); Humphrey v. Commonwealth, 37 Va.App. 36, 553 S.E.2d 546 (2001); United
States v. Oakland Cannabis Buyers’ Cooperative, 532 U.S. 483, 121 S.Ct. 1711, 149 L.Ed.2d
722 (2001); United States v. Kabat, 797 F.2d 580 (8th Cir.1986); McMillan v. City of Jackson,
701 So.2d 1105 (Miss.1997).
92
BENJAMIN THORPE, ANCIENT LAWS AND INSTITUTES OF ENGLAND 47–49 (1840, 2004); Reniger
v. Fogossa, (1551) 1 Plowd. 1, 75 E.R. 1, 18; Mouse, (1608) 12 Co. Rep. 63, 77 E.R. 1341;
MICHAEL DALTON, THE COUNTREY JUSTICE ch. 150 (1618, 2003).
aforesaid. Consequently, there are several conditions to enter the sphere of this
general defense:
(a) The protected interest should be legitimate. Legitimate interests are life,
freedom, body and property—of the individual or of other individuals. No
previous introduction between them is required93;
(b) The protected interest should be in an immediate and actual danger94;
(c) The act (of necessity) is directed towards an external or innocent interest95;
(d) The act (of necessity) should neutralize the danger, be proportional to
it,96 and immediate97; and
(e) The defender did not control the causes of the danger or the conditions for
its occurrence (actio libera in causa).
If all these conditions are fulfilled, the individual is considered to be acting under
the necessity defense, and accordingly no criminal liability is imposed upon him for
the commission of the offense. Thus, not every time a danger is neutralized by
causing harm to an innocent interest may it be considered necessity, but only when
the act fulfills the above conditions in full. On that basis, the question is whether the
general defense of necessity is applicable to artificial intelligence systems. The
answer to this question depends on the artificial intelligence system's capability
of fulfilling the above conditions in the relevant situations.
Four of these conditions [(a), (b), (d) and (e)] are identical to the self-defense
conditions, mutatis mutandis. Instead of an attack on the legitimate interest, it
would be an actual danger to that very interest. The main difference between self-
93
United States v. Randall, 104 Wash.D.C.Rep. 2249 (D.C.Super.1976); State v. Hastings,
118 Idaho 854, 801 P.2d 563 (1990); People v. Whipple, 100 Cal.App. 261, 279 P. 1008 (1929);
United States v. Paolello, 951 F.2d 537 (3rd Cir.1991).
94
Commonwealth v. Weaver, 400 Mass. 612, 511 N.E.2d 545 (1987); Nelson v. State, 597 P.2d
977 (Alaska 1979); City of Chicago v. Mayer, 56 Ill.2d 366, 308 N.E.2d 601 (1974); State v. Kee,
398 A.2d 384 (Me.1979); State v. Caswell, 771 A.2d 375 (Me.2001); State v. Jacobs, 371 So.2d
801 (La.1979); Anthony M. Dillof, Unraveling Unknowing Justification, 77 NOTRE DAME L. REV.
1547 (2002).
95
United States v. Contento-Pachon, 723 F.2d 691 (9th Cir.1984); United States v. Bailey,
444 U.S. 394, 100 S.Ct. 624, 62 L.Ed.2d 575 (1980); Hunt v. State, 753 So.2d 609 (Fla.
App.2000); State v. Anthuber, 201 Wis.2d 512, 549 N.W.2d 477 (App.1996).
96
State v. Fee, 126 N.H. 78, 489 A.2d 606 (1985); United States v. Sued-Jimenez, 275 F.3d 1 (1st
Cir.2001); United States v. Dorrell, 758 F.2d 427 (9th Cir.1985); State v. Marley, 54 Haw.
450, 509 P.2d 1095 (1973); State v. Dansinger, 521 A.2d 685 (Me.1987); State v. Champa,
494 A.2d 102 (R.I.1985); Wilson v. State, 777 S.W.2d 823 (Tex.App.1989); State v. Cram,
157 Vt. 466, 600 A.2d 733 (1991).
97
United States v. Maxwell, 254 F.3d 21 (1st Cir.2001); Andrews v. People, 800 P.2d
607 (Colo.1990); State v. Howley, 128 Idaho 874, 920 P.2d 391 (1996); State v. Dansinger,
521 A.2d 685 (Me.1987); Commonwealth v. Leno, 415 Mass. 835, 616 N.E.2d 453 (1993);
Commonwealth v. Lindsey, 396 Mass. 840, 489 N.E.2d 666 (1986); People v. Craig, 78 N.Y.2d
616, 578 N.Y.S.2d 471, 585 N.E.2d 783 (1991); State v. Warshow, 138 Vt. 22, 410 A.2d 1000
(1979).
defense and necessity lies within one condition. Whereas in self-defense the act is
directed towards the attacker, in necessity the act is directed towards an external or
innocent interest. In necessity the defender must choose the lesser of two
evils, one of which is causing harm to an external interest, which may belong to an
innocent person who has nothing to do with the danger.
The question towards artificial intelligence systems in this context is whether
they possess the capability of choosing the “lesser of two evils”. For instance, an
artificial intelligence locomotive drone is transporting 20 passengers. The
drone arrives at a junction of two rails. On one rail a child is playing, but
the second rail ends at a nearby cliff. If the drone chooses the first rail, the child
would certainly die, but the 20 passengers would survive. However, if the drone
chooses the second rail, the child would survive, but due to its velocity and the
distance from the cliff, the train would certainly fall from the 200 ft cliff and no
passenger would survive.
If the locomotive were driven by a human driver, and this driver had
chosen the first rail (with the child on it), no criminal liability would have been
imposed upon him due to the general defense of necessity. An artificial intelligence
system may calculate the probabilities for each possibility and choose the possibil-
ity with the minimum casualties. Strong artificial intelligence systems are already
used for the prediction of very complicated events (e.g., climate computers), and
calculating the probabilities in the above example is considered much simpler. The
artificial intelligence system's analysis of the case would probably reveal the same
two possibilities open to the human driver.
If the artificial intelligence system takes into consideration the number of
probable casualties, it would probably choose to run over the child. This may be
taken into consideration due to the basic programming of the system or due to
relevant machine learning. In such a choice, all conditions of necessity are fulfilled.
Therefore, if the system were human, no criminal liability would be imposed due to
the general defense of necessity. Why would the artificial intelligence system be
treated differently? Moreover, if the artificial intelligence system chooses the other
possibility and causes not the lesser but the greater of two evils, society would
probably want to impose criminal liability (upon the programmer, the user or the
artificial intelligence system), exactly the same way as if the artificial intelligence
system were human.
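The casualty comparison in the locomotive example can be illustrated as a minimal "lesser of two evils" calculation. The option labels and casualty figures simply restate the example; the function name is an assumption:

```python
def lesser_of_two_evils(options: dict) -> str:
    # Choose the course of conduct with the minimum expected casualties,
    # mirroring the choice the necessity defense permits.
    return min(options, key=options.get)

# The two courses open to the drone, with their expected casualties.
options = {
    "first rail (run over the child)": 1,     # 20 passengers survive
    "second rail (fall from the cliff)": 20,  # the child survives
}
print(lesser_of_two_evils(options))  # first rail (run over the child)
```

The same comparison works for any number of options, which matters because a system that picks the greater of the two evils would, on the text's account, expose the programmer, the user or the system itself to liability.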
Of course, some moral dilemmas may arise in such choices and decisions,
e.g., is it legitimate for an artificial intelligence system to decide upon human life, or is
it legitimate for an artificial intelligence system to cause human death or severe injury.
However, these dilemmas are no different from the moral dilemmas of self-
defense, discussed above.98 Moreover, the moral questions are not to be taken into
consideration in relation to the criminal liability question. Consequently, it seems
that the general defense of necessity may be applicable to artificial intelligence
systems in a similar way to self-defense.
98
Above at Sect. 5.2.2.1.
5.2.2.3 Duress
Could an artificial intelligence system be considered as acting under duress in the
criminal law context? Duress is an in rem defense from the same “family” as self-
defense and necessity. All partly reduce the thorough applicability of the
general concept of society's monopoly over power, discussed above.99 The
major difference between self-defense, necessity and duress is in the course of
conduct. In self-defense the defender's reaction repels the attacker, in neces-
sity it is a reaction against an external innocent object, and in duress it is a
surrender to a threat through the commission of an offense.
For instance, a retired criminal with expertise in breaking into safes is no longer
active. He is invited by ex-friends to participate in another robbery, where his
expertise is required. He says no. They try to convince him, but he still refuses.
Therefore, they kidnap his son and threaten him that if he does not participate in the
robbery, they will kill his son. He knows them very well and knows that they are
serious. He also knows that if the police are involved, they will kill his son. As a result,
he surrenders to the threat, participates in the robbery and uses his expertise. If
captured, he may argue for duress. Self-defense and necessity are irrelevant here,
since he surrendered to the threat rather than facing it.
The traditional approach towards duress is that under the right circumstances it
may justify the commission of offenses.100 The traditional reason is the criminal
law's understanding of human nature's weaknesses. Sometimes the individual
would rather commit an offense under threat than face the threat and pay the
price of causing harm to precious interests. Until the eighteenth century the general
defense of duress was applicable to all offenses.101 Later, the Anglo-American
legal systems narrowed its applicability, and it does not include severe
homicide offenses which require general intent.102
Thus, for instance, in the above example, if it were not robbery but murder, the
general defense of duress would not have been applicable against the imposition of
criminal liability, but only as a consideration in punishment. The reason for the
narrow applicability is the sanctity of human life.103 However, this approach has
99
Ibid.
100
John Lawrence Hill, A Utilitarian Theory of Duress, 84 IOWA L. REV. 275 (1999); Rollin
M. Perkins, Impelled Perpetration Restated, 33 HASTINGS L. J. 403 (1981); United States
v. Johnson, 956 F.2d 894 (9th Cir.1992); Sanders v. State, 466 N.E.2d 424 (Ind.1984); State
v. Daoud, 141 N.H. 142, 679 A.2d 577 (1996); Alford v. State, 866 S.W.2d 619 (Tex.Crim.
App.1993).
101
McGrowther, (1746) 18 How. St. Tr. 394.
102
United States v. LaFleur, 971 F.2d 200 (9th Cir.1991); Hunt v. State, 753 So.2d 609 (Fla.
App.2000); Taylor v. State, 158 Miss. 505, 130 So. 502 (1930); State v. Finnell, 101 N.M. 732,
688 P.2d 769 (1984); State v. Nargashian, 26 R.I. 299, 58 A. 953 (1904); State v. Rocheville,
310 S.C. 20, 425 S.E.2d 32 (1993); Arp v. State, 97 Ala. 5, 12 So. 301 (1893).
103
State v. Nargashian, 26 R.I. 299, 58 A. 953 (1904).
(a) The protected interest should be legitimate. Legitimate interests are life,
freedom, body and property—of the individual or of other individuals, and
no previous introduction between them is required106;
(b) The protected interest should be in an immediate and actual danger107;
(c) The act (of duress) is a surrender to the threat;
(d) The act (of duress) should be proportional to the danger108; and
(e) The defender did not control the causes of the danger or the conditions for
its occurrence (actio libera in causa).109
104
People v. Merhige, 212 Mich. 601, 180 N.W. 418 (1920); People v. Pantano, 239 N.Y. 416,
146 N.E. 646 (1925); Tully v. State, 730 P.2d 1206 (Okl.Crim.App.1986); Pugliese
v. Commonwealth, 16 Va.App. 82, 428 S.E.2d 16 (1993).
105
United States v. Bakhtiari, 913 F.2d 1053 (2nd Cir.1990); R.I. Recreation Center v. Aetna Cas.
& Surety Co., 177 F.2d 603 (1st Cir.1949); Sam v. Commonwealth, 13 Va.App. 312, 411 S.E.2d
832 (1991).
106
Commonwealth v. Perl, 50 Mass.App.Ct. 445, 737 N.E.2d 937 (2000); United States v. -
Contento-Pachon, 723 F.2d 691 (9th Cir.1984); State v. Ellis, 232 Or. 70, 374 P.2d 461 (1962);
State v. Torphy, 78 Mo.App. 206 (1899).
107
People v. Richards, 269 Cal.App.2d 768, 75 Cal.Rptr. 597 (1969); United States v. Bailey,
444 U.S. 394, 100 S.Ct. 624, 62 L.Ed.2d 575 (1980); United States v. Gomez, 81 F.3d 846 (9th
Cir.1996); United States v. Arthurs, 73 F.3d 444 (1st Cir.1996); United States v. Lee, 694 F.2d
649 (11th Cir.1983); United States v. Campbell, 675 F.2d 815 (6th Cir.1982); State v. Daoud,
141 N.H. 142, 679 A.2d 577 (1996).
108
United States v. Bailey, 444 U.S. 394, 100 S.Ct. 624, 62 L.Ed.2d 575 (1980); People v. Handy,
198 Colo. 556, 603 P.2d 941 (1979); State v. Reese, 272 N.W.2d 863 (Iowa 1978); State v. Reed,
205 Neb. 45, 286 N.W.2d 111 (1979).
109
Fitzpatrick, [1977] N.I. 20; Hasan, [2005] U.K.H.L. 22, [2005] 4 All E.R. 685, [2005] 2 Cr.
App. Rep. 314, [2006] Crim. L.R. 142, [2005] All E.R. (D) 299.
If all these conditions are fulfilled, the individual is considered to be acting under
the duress defense, and accordingly no criminal liability is imposed upon him for the
commission of the offense. Thus, not every time a person surrenders to a threat may
it be considered duress, but only when the act fulfills the above conditions in full. On
that basis, the question is whether the general defense of duress is applicable to
artificial intelligence systems. The answer to this question depends on the
artificial intelligence system's capability of fulfilling the above conditions in the
relevant situations.
Four of these conditions [(a), (b), (d) and (e)] are almost identical to the self-
defense and necessity conditions, mutatis mutandis. In most legal systems duress
does not require an immediate act, for the threat and danger to the legitimate interest
may be continuous. However, the main difference between self-defense, necessity
and duress lies within one condition. Whereas in self-defense the act is directed
towards the attacker and in necessity the act is directed towards an external or
innocent interest, the act in duress is a surrender to the relevant threat. The commis-
sion of the offense in duress is the surrender to the threat and the way the individual
faces that threat.
The question regarding artificial intelligence systems in this context is whether they possess the capability of choosing the “lesser of two evils”. For instance, a prison guard artificial intelligence system has captured an escaping prisoner. The prisoner points a loaded gun at a human prison guard and says that if he is not released immediately by the artificial intelligence system, he will shoot the human guard. The artificial intelligence system calculates probabilities and figures out that the danger is real. If the artificial intelligence system surrenders to the threat, the human guard’s life is saved, but an offense is committed (e.g., accessory to escape). If the artificial intelligence system does not surrender, no offense is committed and the escape fails, but the human guard is murdered.
If the prison guard who captured the prisoner were human, no criminal liability would have been imposed upon him due to the general defense of duress, as all conditions of this defense are fulfilled. An artificial intelligence system may calculate the probabilities for each possibility and choose the possibility with the minimum casualties. Strong artificial intelligence systems are already used to predict very complicated events (e.g., climate computers), and calculating the probabilities in the above example is considerably simpler. The artificial intelligence system’s analysis of the case would probably reveal the same two possibilities available to a human prison guard.
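The “choose the possibility with minimum casualties” step can be sketched, purely for illustration, as a probability-weighted comparison of the available options. The function name, the option labels, and all numeric values below are hypothetical assumptions, not features of any real guard system:

```python
# Illustrative sketch: a "lesser of two evils" choice reduced to
# minimizing probability-weighted (expected) casualties.

def least_casualties(options):
    """Return the option whose expected casualty count is lowest.

    `options` maps an action name to a tuple:
    (probability that the threat is carried out, casualties in that case).
    """
    return min(options, key=lambda name: options[name][0] * options[name][1])

# The prison-guard scenario: surrendering frees the prisoner (an offense,
# but no deaths), while refusing risks the human guard's life.
options = {
    "surrender_to_threat": (0.95, 0),  # offense committed, nobody dies
    "refuse_to_release": (0.95, 1),    # one human guard killed
}

print(least_casualties(options))  # -> surrender_to_threat
```

On these assumed numbers the expected casualties are 0 and 0.95 respectively, so the system surrenders to the threat, mirroring the choice the text attributes to a human guard acting under duress.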
If the artificial intelligence system takes into consideration the probability of casualties, it would probably choose to surrender to the threat. This choice may stem from the basic programming of the system or from relevant machine learning. In such a choice, all conditions of duress are fulfilled. Therefore, if the system were human, no criminal liability would be imposed due to the general defense of duress. Why would the artificial intelligence system be treated differently? Moreover, if the artificial intelligence system chooses the other possibility and causes not the lesser but the greater of two evils, society would probably want to impose
180 5 Negative Fault Elements and Artificial Intelligence Systems
criminal liability (upon the programmer, the user, or the artificial intelligence system), exactly as if the artificial intelligence system were human.
Of course, some moral dilemmas may arise in such choices and decisions, e.g., is it legitimate for an artificial intelligence system to decide upon human life, or is it legitimate for an artificial intelligence system to cause, directly or indirectly, human death or severe injury. However, these dilemmas are no different from the moral dilemmas of self-defense and necessity, discussed above.110 Moreover, the moral questions are not to be taken into consideration in relation to the question of criminal liability. Consequently, it seems that the general defense of duress may be applicable to artificial intelligence systems in a similar way to self-defense and necessity.
110. Above at Sects. 5.2.2.1 and 5.2.2.2.
111. Michael A. Musmanno, Are Subordinate Officials Penally Responsible for Obeying Superior Orders which Direct Commission of Crime?, 67 DICK. L. REV. 221 (1963).
112. Axtell, (1660) 84 E.R. 1060; Calley v. Callaway, 519 F.2d 184 (5th Cir.1975); United States v. Calley, 48 C.M.R. 19, 22 U.S.C.M.A. 534 (1973).
If all these conditions are fulfilled, the individual is considered to be acting under the superior orders defense, and accordingly no criminal liability is imposed upon him for the commission of the offense. Thus, not every time an individual obeys a superior order is the general defense applicable, but only when the act fulfills the above conditions in full. On that basis, the question is whether the general defense of superior orders is applicable to artificial intelligence systems. The answer to this question depends on the capability of artificial intelligence systems to fulfill the above conditions in the relevant situations.
The first condition relates to the objective characteristics of the relationship between the individual and the relevant organization.114 It requires hierarchical subordination to an authorized public authority so that the system of hierarchical orders is legitimate and operative. Such systems exist in the army, the police, etc. However, private organizations have no authority to commit offenses. Artificial intelligence systems are in use in many of these organizations: in military use, police use, prison use, etc. These systems are operated under superior orders in their regular activity. The tasks given to these systems are various.
113. A. P. ROGERS, LAW ON THE BATTLEFIELD 143–147 (1996).
114. Jurco v. State, 825 P.2d 909 (Alaska App.1992); State v. Stoehr, 134 Wis.2d 66, 396 N.W.2d 177 (1986).
The second condition relates to the characteristics of the given superior order. The order must require obedience; otherwise it cannot be considered an order. As to its content, the order should not be manifestly illegal. If the order is legal, or illegal but not manifestly illegal, it satisfies this condition. The classification of an order as illegal or manifestly illegal is determined by the court. However, it may be learned inductively from case to case. Artificial intelligence systems equipped with machine learning utilities have the capability of inferring, at least, the general outlines of the manifestly illegal order.
For instance, an aircraft artificial intelligence drone is operated by the Air Force. Its mission is to search for a specific terrorist lab and destroy it. The drone finds the lab, delivers the information to headquarters, and requests orders. According to the information, the lab is populated by a known terrorist and his family. The order is to attack with a heavy bomb. The drone calculates probabilities, and the probability is that all the people in the lab would die. The drone executes the order. After the order is executed, the drone’s records are examined, and it turns out that the drone understood the legality of this order to be situated in a grey area as to the terrorist’s family, since it could have flown lower and destroyed the lab with fewer casualties.
If the drone were human, it would probably have argued for the general defense of superior orders. Since international law accepts such orders under certain circumstances, this order may be either legal or illegal, but not manifestly illegal. Consequently, a human pilot would probably have been acquitted in such a case, for this general defense would have been applicable. Why would the artificial intelligence system be treated differently? If both the human pilot and the artificial intelligence system have the same functional discretion, and both fulfill the relevant conditions of this general defense, then there is no legitimate reason for a double standard in such cases.
Of course, the artificial intelligence system’s criminal liability, if any, does not affect the superiors’ criminal liability, if any (in cases of an illegal order). Some moral dilemmas may also arise in such choices and decisions, e.g., is it legitimate for an artificial intelligence system to decide upon human life, or is it legitimate for an artificial intelligence system to cause, directly or indirectly, human death or severe injury. However, these dilemmas are no different from the moral dilemmas involved in the applicability of other general defenses, discussed above.115 Moreover, the moral questions are not to be taken into consideration in relation to the question of criminal liability. Consequently, it seems that the general defense of superior orders may be applicable to artificial intelligence systems.
115. See, e.g., at Sects. 5.2.2.1, 5.2.2.2, and 5.2.2.3.
criminalization, i.e., cases which are not supposed to be considered criminal are included within the scope of the relevant offenses. Sometimes the criminal proceedings in these cases would be socially harmful rather than useful. For instance, within the scope of the particular offense of theft comes the case of a 14-year-old boy who steals his brother’s basketball, and the question is whether this is a relevant case for criminal proceedings in theft, considering its social consequences.
The key in most legal systems to solving such a problem is the grant of wider discretion to the prosecution and the court. The prosecution may decide not to open criminal proceedings in cases of low public interest. If proceedings are opened, the court may decide to acquit the defendant due to low public interest. When the prosecution exercises this discretion, it is within its administrative discretion. When the court exercises this discretion, it is within its judicial power through the general defense of de minimis. Thus, in general, the de minimis defense enables the court to acquit the defendant for low public interest in the particular case.
This type of judicial discretion has been widely accepted since ancient times. Roman law, for example, determined that criminal law does not extend to minor and petty matters (de minimis non curat lex), and that the judge should not be troubled by such matters (de minimis non curat praetor).116 In modern criminal law the general defense of de minimis is exercised by courts very rarely, due to the wide administrative discretion of the prosecution. However, in the relevant extreme cases, this judicial discretion may be exercised by the court in addition to the administrative discretion of the prosecution.117
The basic examination for the de minimis defense concerns the social endangerment reflected by the commission of the particular offense. The commission of the offense should reflect an extremely low social endangerment for the general defense of de minimis to be applicable.118 Of course, different societies at different times may perceive different social endangerments in the same offenses, since social endangerment is dynamically conceptualized through morality, culture, religion, etc. The relevant social endangerment is determined by the court. On that basis, the question is whether this general defense may be relevant for offenses committed by artificial intelligence systems.
The applicability of the de minimis defense concerns the relevant case, regarding all relevant aspects, and not necessarily the offender as such. The personality of the offender may be taken into consideration, but only as part of assessing the case. For this reason, there is no difference between humans, corporations, or artificial intelligence systems as to the applicability of this defense. The required low social
116. Vashon R. Rogers Jr., De Minimis Non Curat Lex, 21 ALBANY L. J. 186 (1880); Max L. Veech and Charles R. Moon, De Minimis non Curat Lex, 45 MICH. L. REV. 537 (1947).
117. THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT AND EXPLANATORY NOTES 40 (1962, 1985).
118. Stanislaw Pomorski, On Multiculturalism, Concepts of Crime, and the “De Minimis” Defense, 1997 B.Y.U. L. REV. 51 (1997).
endangerment is reflected from the factual event (in rem). For instance, a human driver slips with his car on the road and hits the pavement. No damage is caused to the pavement or other property, and of course there are no casualties. This is a relevant case for de minimis, although it might fall within the scope of several traffic offenses.
Would the case be legally different if the driver were not human but an artificial intelligence system? Would it have been different if the car belonged to a corporation? There is no substantive difference between humans, corporations, or artificial intelligence systems as to the applicability of the de minimis defense, especially when this general defense is directed to the characteristics of the factual event and not necessarily to those of the offender. Consequently, it seems that the general defense of de minimis may be applicable to artificial intelligence systems.
6 Punishibility of Artificial Intelligence Technology
Contents
6.1 General Purposes of Punishments and Sentencing 185
6.1.1 Retribution 186
6.1.2 Deterrence 189
6.1.3 Rehabilitation 198
6.1.4 Incapacitation 203
6.2 Relevance of Sentencing to Artificial Intelligence Systems 210
6.2.1 Relevant Purposes to Artificial Intelligence Technology 210
6.2.2 Outlines for Imposition of Specific Punishments on Artificial Intelligence Technology 212
1. See in general GABRIEL HALLEVY, THE RIGHT TO BE PUNISHED – MODERN DOCTRINAL SENTENCING 15–56 (2013).
6.1.1 Retribution
2. BRONISLAW MALINOWSKI, CRIME AND CUSTOM IN SAVAGE SOCIETY (1959, 1982).
6.1 General Purposes of Punishments and Sentencing 187
because it is the offender who is the object of the suffering caused by the punish-
ment. Retribution, therefore, measures the subjective price of suffering from the
offender’s point of view.3
The equation that defines the subjective price of suffering has two parts. The first
is the suffering caused by the offender, and it includes the suffering caused to
society as well, not only to the individual victim of the offense.4 When a thief steals an object from someone, he causes suffering to the person from whom he stole, as the victim feels the absence of the stolen object. But this is not the only suffering the act causes, nor the most important one. The theft also causes suffering to society through loss of economic security, the need for professional attention to deal with the theft, the necessity to protect individuals from further thefts, and so on. All relevant types of suffering must be taken into consideration when meting out the offender’s punishment.
The second part of the equation is the subjective price of the suffering as viewed
through the offender’s eyes. The suffering caused by the offender to the victim and
to society must be translated into individual suffering imposed on the offender
through punishment. That subjective price determines the type and amount of
punishment. Naturally, such pricing is limited to the legal punishments accepted
in a given legal system. In most legal systems the suffering caused by the offense is
interpreted in terms of imprisonment, fines, public service, etc. Moreover, the rate
at which these punishments can be imposed is limited by the law.
For example, even if the court translates suffering caused by a theft into a
punishment of 10 years of imprisonment, it is not authorized to punish the thief
for more than 3 years of imprisonment if this is the maximum rate determined by
law.5 Based on this approach to retribution, the court must develop an internal
factual image of the offender that is sufficiently broad to allow it to carry out the
process of pricing. Because the process is subjective for each offender, this subjec-
tivity must be filled with relevant factual data that is crucial for applying proper
retribution in the process of sentencing.
Retribution is considered to be the dominant purpose of punishment, but it is not
the only one, and it does not provide solutions to all the needs of modern sentenc-
ing. Retribution is retrospective (it focuses on past events) and causes suffering to
the offender. As such, it lacks a prospective aspect and it does not provide a solution
to the social need of preventing offenses. Furthermore, retribution does not
provide a solution to the social need of rehabilitating the offender through sentenc-
ing. Therefore, retribution must be complemented by other general purposes of
punishment.
3. NIGEL WALKER, WHY PUNISH? (1991).
4. Paul Butler, Retribution, for Liberals, 46 U.C.L.A. L. REV. 1873 (1999); Michele Cotton, Back With a Vengeance: The Resilience of Retribution as an Articulated Purpose of Criminal Punishment, 37 AM. CRIM. L. REV. 1313 (2000); Jean Hampton, Correcting Harms versus Righting Wrongs: The Goal of Retribution, 39 U.C.L.A. L. REV. 1659 (1992).
5. See, e.g., article 242 of the German Penal Code.
188 6 Punishibility of Artificial Intelligence Technology
This does not diminish the status of retribution as a major purpose of punishment among the general purposes of punishment. In most modern legal systems retribution is still considered the dominant purpose of punishment, and the other three purposes (deterrence, rehabilitation, and incapacitation) are auxiliary purposes. Retribution retains the proper connection between the damage caused to society by the offense and the punishment imposed on the offender. This proportional sentencing is achieved through the ex ante prevention of disproportional revenge by society on the offender.
Retribution can ensure a high level of certainty in the expected punishment.
Certainty is one result of focusing on the actual damage caused by the offense rather
than on potential damage, the offender’s will, or his personality. Retribution does
not neglect the offender’s personal character, and it aims to adjust the proper
suffering to the offender’s subjective attributes. The connection that retribution
aims most to retain is the one between the consequences of the offense and the
punishment being imposed. Retribution can thus assuage the thirst for revenge of
the victims and of society.
Nevertheless, the nature of retribution contains some disadvantages as well, in areas in which other general purposes of punishment can offer solutions. As noted above, retribution is a manifestation of the desire to make the offender suffer for his injurious acts (lex talionis), a desire that does not take into consideration prospective social consequences. Retribution may be the basis for punishment even if no direct social benefit is expected to ensue from that punishment. Thus, from the point of view of retribution, the future social consequences of the punishment are entirely immaterial.
Because retribution is completely indifferent to the social benefit of punishment, questions may arise about its efficiency with respect to social values. Proportional punishment may be socially deterring and may deter the offender from reoffending, but from the point of view of retribution this effect is entirely insignificant6; if a punishment has no deterrent value at all, it is still considered proper punishment. In the eighteenth century Immanuel Kant justified retribution and supported the punishment of the last person on earth, even on the eve of the extinction of mankind, in the name of retribution, which is blind to future social benefits.7
Retribution does not distinguish between different types of offenders who may
require different types of social treatment in order to prevent further delinquency on
their part. For example, a recidivist may require different social treatment than a
first offender.8 A thief who commits ten identical thefts, and who each time is captured, convicted, sentenced, imprisoned, and released, after which he immediately commits another theft, would be justifiably punished each time with the same
6. Ledger Wood, Responsibility and Punishment, 28 AM. INST. CRIM. L. & CRIMINOLOGY 630 (1938).
7. IMMANUEL KANT, METAPHYSICAL ELEMENTS OF JUSTICE: PART I – THE METAPHYSICS OF MORALS 102 (trans. John Ladd, 1965).
8. Gabriel Hallevy, Victim’s Complicity in Criminal Law, 2 INT’L J. PUNISHMENT & SENTENCING 74 (2006).
6.1.2 Deterrence
9. ANDREW VON HIRSCH, DOING JUSTICE: THE CHOICE OF PUNISHMENT 50 (1976). Compare United States v. Bergman, 416 F.Supp. 496 (S.D.N.Y.1976); Richard S. Frase, Limiting Retributivism, PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY 135 (Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009).
In sum, what the individual considers is not benefits vs. punishment but the
expected value of the benefits (if not caught) vs. the expected value of the punish-
ment (if caught). For the rational individual, it pays to commit the offense if the
expected value of the benefits is greater than the expected value of the punishment.
This reality can be expressed through the following inequality10:
W · (1 − R) > P · R
Naturally, the situation described in this formula is not acceptable for society.
When it pays to commit offenses in a given society, the social fabric is in danger
and the negative incentive is not sufficient to avoid delinquency. In these cases, to
make legal social control effective, society must increase the value of the right side
of the inequality (P·R) by increasing the level of punishment (P) or the risk of being
caught (R). The question is which option is more effective.
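The inequality above can be made concrete with a short sketch. This is illustrative only; W, P, and R are the symbols defined in the text, while the numeric values are hypothetical assumptions:

```python
# Deterrence inequality: it "pays" to offend when W·(1 − R) > P·R,
# where W is the benefit, R the risk of being caught, and P the punishment.

def pays_to_offend(w, p, r):
    """True if the expected benefit exceeds the expected punishment."""
    return w * (1 - r) > p * r

# With a low risk of capture, even a large punishment may fail to deter:
print(pays_to_offend(w=100, p=1000, r=0.05))  # -> True

# Society can tip the inequality by raising P ...
print(pays_to_offend(w=100, p=3000, r=0.05))  # -> False

# ... or by raising R:
print(pays_to_offend(w=100, p=1000, r=0.20))  # -> False
```

Both interventions restore deterrence on these assumed numbers; which is preferable in practice is exactly the question the text turns to next.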
Increasing the value of punishment (P) may cause difficulties. This is the
cheapest solution for society, as amending the sanction clause of an offense requires
little effort on the part of legislators. If the punishment is a fine, this may increase
the revenues of the state, and if the punishment is imprisonment it may increase the
expenses of the state. In either case, from the point of view of the offender, the real
value of the punishment remains subjective, as noted above.11 Thus, raising the
level of punishment for a given offense is not necessarily effective for any given
offender.
Society may also use secondary means to increase the value of the punishment,
in addition to amending the sanction clause (for example by publicizing the
offender’s suffering and humiliation), but the primary means remains increasing
the rate of punishment. Most states use this means regularly when faced with
delinquency of a certain type. In general, the value of punishment is first determined
according to the presumed preferences of society and the presumed severity of the
offense. Thus, because murder is considered more severe than theft, the punishment
for murder is harsher than the punishment for theft.
Offenses are reexamined when deterrence becomes relevant. If the offense is
committed regularly, the sanction may be interpreted as inadequate to create the
required deterrence, and the legislators are likely to raise the level of the punish-
ment. Before this step is taken, however, the courts may impose harsher
punishments within the limits of the existing offense. But legislators cannot increase the level of punishment indefinitely. Each society has its upper limits for punishment, and harsher punishments are considered illegitimate, illegal, or not feasible.
In societies that accept the capital penalty, the upper limit of punishment is
capital penalty with full confiscation of property. In other societies the upper limit is
10. Where W is the value of the benefit, R is the risk of being caught, and P is the value of the punishment.
11. From a higher point of view, this may prospectively reduce the state’s expenses, if the sanction is effective. If delinquency is prevented or reduced, some of the state’s resources may become available for other social tasks.
lower. The question is how society should act when the punishment has already exceeded the upper limit and the offense is still being committed. Making the punishment harsher is no longer a valid option. Moreover, from the offender’s point of view, the value of punishment is continuously eroding.12 For the recidivist
offender the deterrence of punishment is at its highest when the punishment is
imposed for the first time. Each subsequent time that the punishment is imposed its
deterrent value erodes.
Thus, courts would have to impose increasingly harsher punishments on recidi-
vist offenders in order to achieve deterrence. When the punishment reaches the
upper limit of the offense, no harsher punishment can be imposed in order to
increase deterrence and society has a serious problem with that offender: the
maximum punishment does not deter the offender, who keeps committing the
offense.
Nevertheless, increasing punishment (P) is not the best way of increasing the
expected value of the punishment (P·R). Increasing P has its advantages as it is
inexpensive, focuses on the substantive law, and is a simple method. But it is also
possible to increase the expected value of the punishment by increasing the risk of
being caught (R). Increasing R has to do with the efforts of the authorities to enforce
the law, which are significantly more expensive and require many more means than
increasing P. A common example of such efforts is increasing the number of police
officers and their presence, which naturally requires expending greater resources by
society.
Prima facie, the choice between increasing P or R may be settled simply in favor
of increasing P because it is cheaper, simpler, and does not require many resources.
But modern criminological research points out that increasing the risk factor is
much more effective in preventing delinquency than increasing the punishment.13
Both factors increase the expected value of punishment, but the more significant of the two is the risk factor, which plays the most important role in the offender’s consideration of whether to commit the offense.
There are many examples to substantiate this argument. For instance, when
municipal workers are on strike and do not write tickets for illegal parking, most
drivers park their cars without paying or in prohibited places. Furthermore, if the
factors are compared, it would be found that for most individuals the value of the
punishment is insignificant compared to the value of the risk. Consider the driver
who knows that if he is caught exceeding the speed limit, he will pay a fine of $100
12. Gabriel Hallevy, The Recidivist Wants to Be Punished – Punishment as an Incentive to Re-offend, 5 INT’L J. OF PUNISHMENT & SENTENCING 124 (2009).
13. SUSAN EASTON AND CHRISTINE PIPER, SENTENCING AND PUNISHMENT: THE QUEST FOR JUSTICE 124–126 (2nd ed., 2008); NIGEL WALKER, WHY PUNISH? (1991); ANDREW VON HIRSCH, ANTHONY E. BOTTOMS AND ELIZABETH BURNEY, CRIMINAL DETERRENCE AND SENTENCE SEVERITY (1999); Daniel Nagin, General Deterrence: A Review of the Empirical Evidence, DETERRENCE AND INCAPACITATION: ESTIMATING THE EFFECTS OF CRIMINAL SANCTIONS ON CRIME RATES 95 (Alfred Blumstein, Jacqueline Cohen and Daniel Nagin eds., 1978); MARGERY FRY, ARMS OF THE LAW 76 (1951).
and be on record with the registry of motor vehicles. The authorities examine two
options: (a) increasing P and decreasing R, and (b) increasing R and decreasing P.
In the first option the fine is raised to $1,000, but all policemen, speed traps, and
cameras are removed from the road. It is likely that most drivers will drive faster
because the risk of being caught has become significantly lower. In the second
option the fine is lowered to $10, but at every 100 yards there is a police officer
operating a speed trap. Most likely drivers will slow down because the risk of being
caught has increased significantly.
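The two options can be compared through the expected value of the punishment, P·R. The fines come from the example above; the capture-risk values below are hypothetical assumptions added purely for illustration:

```python
# Expected value of the punishment under each enforcement policy.

def expected_punishment(p, r):
    """P·R: the punishment discounted by the probability of being caught."""
    return p * r

baseline = expected_punishment(p=100, r=0.05)    # $100 fine, modest enforcement
option_a = expected_punishment(p=1000, r=0.001)  # (a) harsher fine, enforcement removed
option_b = expected_punishment(p=10, r=0.90)     # (b) small fine, dense enforcement

# Under these assumptions option (b) produces the larger expected
# punishment, even though its fine is 100 times smaller than option (a)'s.
print(option_a < baseline < option_b)  # -> True
```

Raising R dominates raising P on these assumed numbers, matching the empirical studies the text goes on to cite.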
Historically and empirically it has been shown that there is a sharp increase in
delinquency whenever the risk of being caught is lowered, but no significant
decrease in delinquency when punishments become harsher. This conclusion is
borne out by Wolpin’s research, carried out over 73 years, between 1894 and
1967.14 Other studies have pointed out the same phenomenon in different locations: for example, the policemen’s strike in 1923 in Melbourne, Australia,15 the policemen’s strike in 1919 in Liverpool, England, and the arrest in 1944 of the Danish policemen by the Nazi authorities for assisting the local resistance in enabling Danish Jews to escape to Sweden.16
These studies show that the most dominant factor in increasing the rate of
deterrence is related to law enforcement rather than to severity of punishment.
Law enforcement, in this context, has to do with an increase in the offender’s risk of
being caught, with immediate action on the part of the authorities in activating the
criminal process, and with the certainty that punishment will be imposed.17 At the
same time, punishments that are too lenient decrease the deterrence significantly
because the offender does not experience the value of the negative incentive even if
he is caught by the authorities.
The personal character of the offender naturally plays an important role in
considering deterrence. Even if the expected value of the punishment (P·R) is
lower than the expected value of the benefits, this is not necessarily an adequate
incentive for delinquency. For prudent offenders (risk haters) a significant gap
between the values would be needed to provide them with an incentive to offend.
For other offenders (risk lovers) a situation in which the expected value of the
benefit exceeds that of the punishment would be considered adequate to offend.
It appears, therefore, that the right combination of a proper rate of punishment
and proper risk of capture can form an optimal value for individual deterrence. But
deterrence as a general purpose of punishment focuses on punishment and sentenc-
ing, not on the methods of law enforcement. The punishment factor itself is crucial
for achieving deterrence, but its highest effectiveness is achieved only when it is
combined with a proper risk of the offender being captured.
14. JAMES Q. WILSON, THINKING ABOUT CRIME 123–142 (2nd ed., 1985).
15. Laurence H. Ross, Deterrence Regained: The Cheshire Constabulary’s “Breathalyser Blitz”, 6 J. LEGAL STUD. 241 (1977).
16. STEPHAN HURWITZ, CRIMINOLOGY 303 (1952).
17. Easton and Piper, supra note 13, at pp. 124–126.
18. PAUL JOHANN ANSELM FEUERBACH, LEHRBUCH DES GEMEINEN IN DEUTSCHLAND GÜLTIGEN PEINLICHEN RECHTS 117 (1812, 2007).
19. Johannes Andenaes, The General Preventive Effects of Punishment, 114 U. PA. L. REV. 949, 952 (1966).
20. Johannes Andenaes, The Morality of Deterrence, 37 U. CHI. L. REV. 649 (1970).
21. JEFFRIE G. MURPHY, GETTING EVEN: FORGIVENESS AND ITS LIMITS (2003); Jeffrie G. Murphy, Marxism and Retribution, 2 PHILOSOPHY AND PUBLIC AFFAIRS 43 (1973).
22. Dan M. Kahan, Between the Economics and Sociology: The New Path of Deterrence, 95 MICH. L. REV. 2477 (1997); Neal Kumar Katyal, Deterrence’s Difficulty, 95 MICH. L. REV. 2385 (1997); Jonathan S. Abernethy, The Methodology of Death: Reexamining the Deterrence Rationale, 27 COLUM. HUM. RTS. L. REV. 379 (1996); Craig J. Albert, Challenging Deterrence: New Insights on Capital Punishment Derived from Panel Data, 60 U. PITT. L. REV. 321 (1999); James M. Galliher and John F. Galliher, A “Commonsense” Theory of Deterrence and the “Ideology” of Science: The New York State Death Penalty Debate, 92 J. CRIM. L. & CRIMINOLOGY 307 (2002); Andrew D. Leipold, The War on Drugs and the Puzzle of Deterrence, 6 J. GENDER RACE & JUST. 111 (2002).
196 6 Punishibility of Artificial Intelligence Technology
punishment. The public deterrence is aimed both at potential offenders who have
never been caught and have never been punished, and at those who have already
experienced criminal proceedings. The general concept of public deterrence is that
individuals are capable of learning from the experience of others and not only from
self-experience. Thus, it is assumed that if the media publicizes criminal verdicts,
the public will tend to avoid delinquency out of the fear of being punished. But not
all verdicts are publicized, and not all individuals are capable of understanding the
verdicts or have access to them.
Despite all of the above, the most important difficulty in public deterrence is its
contradiction with the principle of personal liability, which is one of the fundamen-
tal principles of criminal law.23 According to this principle, the offender can be
punished only for his own behavior, never for the behavior of other persons,
including the potential behavior of other persons. Thus, society may impose a
punishment on the individual to inflict suffering on him for what he did (retribu-
tion), to deter him personally from recidivism (deterrence), to rehabilitate him
(rehabilitation), and to disable his delinquent capabilities (incapacitation), but not
in order to deter other persons from committing the same offense.
For example, let us assume that the common punishment for commission of
robbery under certain circumstances is 4 years of imprisonment, and that this would
be the punishment in a specific case if the court did not consider public deterrence.
But if the court were to consider public deterrence, it may impose 8 years of
imprisonment only to deter the public. Is it justified to punish the individual doubly
for the sake of public deterrence when half of the punishment already satisfies the
purposes of punishment, including those of individual deterrence?
This raises the question of the legitimacy of the public deterrence. The question
is an acute one because the public is not necessarily knowledgeable in legal matters
of this type, and even if it were, it may not have sufficient legal knowledge to fully
understand the legal meaning of a given punishment. Moreover, individuals are
required to pay a heavy price in order to produce a short-lived deterrence in the
public. In the example above, in order to provide a deterrent for some individuals
who may spend a few minutes reading a short article in the local newspaper, the
offender must serve four additional years in prison. Is it fair? And is it legitimate?
Public deterrence may be consistent with the principle of personal liability,
however, if it becomes only an incidental consequence of the punishment imposed
on the individual. When the court does not aim the punishment at deterring the
public, but the public is nevertheless deterred by the punishment, the deterrence is
legitimate. When, however, the court aims the punishment ex ante at deterring the
public, it is illegitimate, whether the public is actually deterred or not. It is not
legitimate for the court to use the individual instrumentally merely to deter the
23
For the principle of personal liability in criminal law see GABRIEL HALLEVY, THE MATRIX OF
DERIVATIVE CRIMINAL LIABILITY 1–61 (2012).
public.24 The individual has the right to be punished for his behavior and not for the
purpose of deterring other people.
The individual’s right to be punished cannot tolerate the instrumental use of the
individual for purposes of deterring others. If a deserved punishment is imposed on
the offender, and one of the incidental consequences of the punishment is that the
public is deterred, the individual pays no additional price for the deterrence of the
public, and public deterrence may be considered legitimate under these
circumstances. Therefore, punishment in criminal law must always be personal
and focused on the individual. It may have public consequences, but these cannot be
deliberate or included in the purposes of punishment.
In general, deterrence is the prospective general purpose of punishment. Deter-
rence is not intended to address the offense that has already been committed, only to
prevent the commission of further offenses. The offense already committed serves
deterrence only as the initial trigger for activating the criminal process, including
sentencing and punishment. This trigger may serve as an indication of the required
measures needed to intimidate or deter the offender from committing further
offenses. Consequently, deterrence is not intended to repair the social harm that
has already been caused by the commission of the offense.
The purpose of deterrence is to provide an answer to the potential social
endangerment embodied in the offender’s behavior.25 It is assumed that through
punishment it is possible to prevent the commission of further offenses, although
the already committed offense cannot be changed. Thus, deterrence is aimed at the
future and not at the past, and it focuses on the prevention of recidivism. The major
role of deterrence is the creation of a better future, free of repeated offending.
Focusing on the past is the role of retribution. Deterrence accepts the fact that the
past is beyond change.
With deterrence in view, it is possible to impose identical punishments on two
offenders who have committed offenses of different severity. If the danger of
recidivism to society is identical for both offenders, identical means can serve the
purpose of preventing recidivism regardless of the severity of the already
committed offenses. The social harm caused by the offenses is immaterial for
deterrence (although it is most significant for retribution).
Because deterrence is affected by the personal character of the offender, there is
a chance that the offender is punished for his personal character and not for any
behavior that occurred in the past.26 Punishing a person for his personal character is
problematic in modern criminal law because it represents punishment for personal
24
Antony Robin Duff and David Garland, Introduction: Thinking about Punishment, A READER ON
PUNISHMENT 1, 11 (Antony Robin Duff and David Garland eds., 1994).
25
Easton and Piper, supra note 13, at pp. 124–126.
26
LEON RADZINOWICZ AND ROGER HOOD, A HISTORY OF ENGLISH CRIMINAL LAW AND ITS ADMINISTRA-
TION FROM 1750 VOL. 5: THE EMERGENCE OF PENAL POLICY (1986).
status, regardless of the behavior, which is prohibited.27 Modern criminal law prefers
punishing for behavior (in rem) rather than for personal status (in personam).
Because of all the above-mentioned limitations, deterrence cannot function as
the sole purpose of punishment. To formulate a fair punishment that also provides
an adequate and satisfactory solution to the various problems raised by punishment
and sentencing, deterrence must be balanced and complemented by other purposes of
punishment. Combining deterrence with retribution can provide a solution to
problems both prospectively and retrospectively. But deterrence alone may not
necessarily exhaust all the required prospective aspects of punishment.
Deterrence is indeed a prospective purpose of punishment, but it relates to only
one aspect: the prevention of further delinquency. Deterrence does so by creating
fear and intimidation of expected punishment, including fear of the criminal
process itself, which includes humiliation, loss of time, money, etc. Deterrence
does not address the substantive problems that have led the offender to delinquency,
nor does it pretend to ensure the physical prevention of further delinquency, as it
focuses on mental intimidation. If the substantive problems are acute and remain
unsolved, and mental intimidation is not effective, the result may be that deterrence
is ineffective even prospectively, as it is substantively not different from dressage
through intimidation.28
Thus, deterrence is balanced and complemented by retribution retrospectively, and it
is balanced and complemented by rehabilitation and incapacitation prospectively.
Rehabilitation focuses on the substantive problems that have led the offender to
delinquency, and incapacitation is concerned with the actual physical prevention of
further delinquency.
6.1.3 Rehabilitation
27
MIRKO BAGARIC, PUNISHMENT AND SENTENCING: A RATIONAL APPROACH (2001).
28
Jeffrie G. Murphy, Marxism and Retribution, 2 PHILOSOPHY AND PUBLIC AFFAIRS 43 (1973).
29
Gabriel Hallevy, Therapeutic Victim-Offender Mediation within the Criminal Justice Process –
Sharpening the Evaluation of Personal Potential for Rehabilitation while Righting Wrongs under
the Alternative-Dispute-Resolution (ADR) Philosophy, 16 HARV. NEGOT. L. REV. 65 (2011).
30
DAVID ABRAHAMSEN, CRIME AND THE HUMAN MIND (1945); ELMER H. JOHNSON, CRIME, CORRECTION
AND SOCIETY 44–439 (1968); WILLIAM C. MENNINGER, PSYCHIATRIST TO A TROUBLED WORLD (1967).
31
JOHN LEWIS GILLIN, CRIMINOLOGY AND PENOLOGY 708 (1927); Paul W. Tappan, Sentences for Sex
Criminals, 42 J. CRIM. L. CRIMINOLOGY & POLICE SCI. 332 (1951).
1970s the courts have used it sparingly.32 Probation, another new punishment
created by rehabilitation, is still being used in most developed countries, but
much more carefully than before.
At the beginning of the twenty-first century, the dominant trend in the use of
rehabilitation as a general purpose of punishment is to instill cognitive and social
qualifications in the offenders that would enable them to deal with the external and
internal factors that led them to delinquency.33 These qualifications are internal
tools the offender is expected to use in order to face factual reality without turning
to delinquency and to carry out a conscious internal change with respect to both the
external and internal factors mentioned above. The aim is to change the
rehabilitated offender’s outlook in the aspects relevant to delinquency.34
Rehabilitation can offer an opportunity to the offender to undergo a process of
re-socialization and to reintegrate into society in a way that does not involve
delinquency. It may be difficult for legal practitioners to identify rehabilitation as
a general purpose of punishment because it emphasizes the correction of the
offender and not the suffering involved in the punishment. But rehabilitation is a
general purpose of punishment because the rehabilitation process and the punish-
ment are integrated, and the punishment is the trigger that initiates the rehabilitation
program.35
At times, the involvement of the community and of the social circles close to the
offender (e.g., family, friends, teachers, etc.) is required to complete the
32
Robert W. Kastenmeier and Howard C. Eglit, Parole Release Decision-Making: Rehabilitation,
Expertise and the Demise of Mythology, 22 AM. U. L. REV. 477 (1973); JESSICA MITFORD, KIND AND
USUAL PUNISHMENT: THE PRISON BUSINESS (1974).
33
DAVID P. FARRINGTON AND BRANDON C. WELSH, PREVENTING CRIME: WHAT WORKS FOR CHILDREN,
OFFENDERS, VICTIMS AND PLACES (2006); LAWRENCE W. SHERMAN, DAVID P. FARRINGTON, DORIS
LEYTON MACKENZIE AND BRANDON C. WELSH, EVIDENCE-BASED CRIME PREVENTION (2006); ROSEMARY
SHEEHAN, GILL MCIVOR AND CHRIS TROTTER, WHAT WORKS WITH WOMEN OFFENDERS (2007); Laaman
v. Helgemoe, 437 F.Supp. 269 (1977); Secretary of State for the Home Department, [2003]
E.W.C.A. Civ. 1522, [2003] All E.R. (D) 56; Secretary of State for Justice, [2008]
E.W.C.A. Civ. 30, [2008] All E.R. (D) 15, [2008] 3 All E.R. 104; Anthony E. Bottoms, Empirical
Research Relevant to Sentencing Frameworks: Reform and Rehabilitation, PRINCIPLED SENTENCING:
READINGS ON THEORY AND POLICY 16 (Andrew von Hirsch, Andrew Ashworth and Julian Roberts
eds., 3rd ed., 2009); Peter Raynor, Assessing the Research on ‘What Works’, PRINCIPLED SENTENC-
ING: READINGS ON THEORY AND POLICY 19 (Andrew von Hirsch, Andrew Ashworth and Julian
Roberts eds., 3rd ed., 2009); Francis T. Cullen and Karen E. Gilbert, Reaffirming Rehabilitation,
PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY 28 (Andrew von Hirsch, Andrew
Ashworth and Julian Roberts eds., 3rd ed., 2009); Andrew von Hirsch and Lisa Maher, Should
Penal Rehabilitation Be Revived?, PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY
33 (Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009).
34
Richard P. Seiter and Karen R. Kadela, Prisoner Reentry: What Works, What Does Not, and
What Is Promising, 49 CRIME AND DELINQUENCY 360 (2003); Clive R. Hollin, Treatment Programs
for Offenders, 22 INT’L J. OF LAW & PSYCHIATRY 361 (1999).
35
Francis A. Allen, Legal Values and the Rehabilitative Ideal, 50 J. CRIM. L. CRIMINOLOGY &
POLICE SCI. 226 (1959); LIVINGSTON HALL AND SHELDON GLUECK, CRIMINAL LAW AND ITS ENFORCE-
MENT 18 (2nd ed., 1958); Edward Rubin, Just Say No to Retribution, 7 BUFF. CRIM. L. REV.
17 (2003).
36
Andrew Ashworth, Rehabilitation, PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY 1, 2
(Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009); PETER RAYNOR AND
GWEN ROBINSON, REHABILITATION, CRIME AND JUSTICE 21 (2005); SHADD MARUNA, MAKING GOOD:
HOW CONVICTS REFORM AND BUILD THEIR LIVES (2001); STEPHEN FARRALL, RETHINKING WHAT WORKS
WITH OFFENDERS: PROBATION, SOCIAL CONTEXT, AND DESISTANCE FROM CRIME (2002).
37
LEON RADZINOWICZ AND ROGER HOOD, A HISTORY OF ENGLISH CRIMINAL LAW AND ITS ADMINISTRA-
TION FROM 1750 VOL. 5: THE EMERGENCE OF PENAL POLICY (1986).
38
MIRKO BAGARIC, PUNISHMENT AND SENTENCING: A RATIONAL APPROACH (2001).
Modern criminal law prefers punishing for behavior (in rem) rather than for
personal status (in personam). As a result, rehabilitation cannot serve as the sole
consideration or purpose of punishment, and can only be complementary to the
other purposes of punishment.
Deterrence is also a prospective purpose of punishment, but it relates to another
aspect of delinquency prevention. Deterrence is intended to prevent recidivism
through intimidation. The means that prevents reoffending is the offender’s fear
of the potential punishment. Deterrence does not consider the substantive reasons
and roots of delinquency of the offender, and thus it is not intended to solve these
problems, but only to handle their external symptoms expressed by the commission
of the offense. By contrast, rehabilitation is designed to address these problems.
Nevertheless, rehabilitation does not provide solutions to all prospective
problems of delinquency: it is not intended to eliminate the physical factors that
lead to delinquency or to solve the various types of social risk associated with the
offender. Moreover, the internal cognitive change in the offender is not always
sufficiently powerful to prevent reoffending. Furthermore, the reasons for delin-
quency are not always internal. For example, when the reasons for delinquency are
physical (e.g., chemical imbalance, genetic problems, etc.) or mental (e.g., mental
impairment that cannot be treated without medication), rehabilitation is likely to be
irrelevant and ineffective despite the fact that it is a prospective purpose of
punishment.39
Thus, whereas rehabilitation is balanced and complemented by retribution as a
retrospective purpose of punishment, deterrence and incapacitation balance and
complement rehabilitation as prospective purposes. Deterrence focuses on the social
risk associated with the offender and incapacitation focuses on the physical preven-
tion of further delinquency.
6.1.4 Incapacitation
39
Martin P. Kafka, Sex Offending and Sexual Appetite: The Clinical and Theoretical Relevance of
Hypersexual Desire, 47 INT’L J. OF OFFENDER THERAPY AND COMPARATIVE CRIMINOLOGY 439 (2003);
Matthew Jones, Overcoming the Myth of Free Will in Criminal Law: The True Impact of the
Genetic Revolution, 52 DUKE L. J. 1031 (2003); Sanford H. Kadish, Excusing Crime, 75 CAL.
L. REV. 257 (1987).
assumption is that a sex offender who commits his offenses because of endocrino-
logical problems (hormonal imbalance) can achieve the necessary balance through
chemical treatment, and that a property offender can be prevented from committing
further property offenses if his hands are cut off.
Incapacitation is a prospective purpose of punishment because it relates only to
the future. From the point of view of the offender and of society, incapacitation does
not address the offense already committed, only future offenses. Incapacitation is
irrelevant for the past because no offender can be incapacitated retroactively.
Consequently, the purpose of incapacitating the individual is always to prevent
the commission of further offenses in the future, in other words, to prevent recidi-
vism. The offense that has already been committed serves only as the initial trigger
for initiating the process of incapacitation, but it is not treated by that process.
Although incapacitation, rehabilitation, and deterrence are all prospective
purposes of punishment, and all three are intended to prevent recidivism, they are
substantively different. Rehabilitation and deterrence are designed to create an
internal conscious change within the offender’s mind to prevent the offender
from committing further offenses. Rehabilitation is aimed at achieving the same
end by addressing the roots of the delinquency, and the purpose of deterrence is to
deal with the external symptoms of delinquency, as noted above. By contrast,
incapacitation does not operate through internal conscious changes but by the
physical prevention of further delinquency.
As far as incapacitation is concerned, it is immaterial whether or not the offender
has internally assimilated the social value of avoiding delinquency, has been
deterred from delinquency, has been rehabilitated, or wishes to commit any further
offense. Incapacitation is effective even when the offender feels no solidarity with
the social values of delinquency prevention and even if he still exhibits an extreme
desire to commit further offenses.40 Incapacitation operates at two levels: breaking
the linkage between the offender and the opportunity to commit further offenses,
and disabling the offender’s physical ability to reoffend.
Developments in incapacitation as a general purpose of punishment in the
twentieth century have led to the creation of three general circles of incapacitation:
40
Ledger Wood, Responsibility and Punishment, 28 AM. INST. CRIM. L. & CRIMINOLOGY 630, 639
(1938).
41
GERALD CAPLAN, PRINCIPLES OF PREVENTIVE PSYCHIATRY (1964).
42
MARCUS FELSON, CRIME AND EVERYDAY LIFE: INSIGHTS AND IMPLICATIONS FOR SOCIETY 17, 95,
109, 120 (1994).
43
RONALD V. CLARKE, SITUATIONAL CRIME PREVENTION: SUCCESSFUL CASE STUDIES (1992); Ronald
V. Clarke and Derek B. Cornish, Modeling Offenders’ Decisions: A Framework for Policy and
Research, 6 CRIME AND JUSTICE: AN ANNUAL REVIEW OF RESEARCH 147 (1985).
44
Don M. Gottfredson, Assessment and Prediction Methods in Crime and Delinquency, PRESIDENTS
NATIONAL COMMISSION FOR LAW ENFORCEMENT AND ADMINISTRATION OF JUSTICE, TASK FORCE REPORT:
JUVENILE DELINQUENCY AND YOUTH CRIME (1967); Joan Petersilia and Peter W. Greenwood, Man-
datory Prison Sentences: Their Projected Effects on Crime and Prison Populations, 69 J. CRIM.
L. & CRIMINOLOGY 604 (1978).
45
JOHN W. HINTON, DANGEROUSNESS: PROBLEMS OF ASSESSMENT AND PREDICTION (1983); JOHN
MONAHAN, PREDICTING VIOLENT BEHAVIOR: AN ASSESSMENT OF CLINICAL TECHNIQUES (1981); PETER
GREENWOOD AND ALLAN ABRAHAMSE, SELECTIVE INCAPACITATION (1982).
prevent delinquency, and society must prevent reoffending. The failure of the other
two circles may be the result of ineffectiveness, inefficiency, or inactivity.
In contrast to rehabilitation and deterrence, which are focused on inner changes
in the offender, incapacitation focuses on the physical prevention of recidivism
either by breaking the linkage between the offender and the opportunity to offend
(e.g., through the object of delinquency, location, devices, etc.) or by neutralizing
the offender’s capability to reoffend. Absolute neutralization can take the form of
the capital penalty, and in the case of certain offenses it can take the form of
amputation of limbs, including castration or chemical castration.46
In legal systems in which these punishments are allowed, they are used to
achieve absolute incapacitation of delinquent capabilities.47 In other legal systems
alternative punishments are used for the same purposes, despite their inability to
achieve absolute incapacitation. For example, long-term imprisonment removes the
offender from society and reduces the offender’s opportunities for delinquent
activity, but offenses can also be committed in prison as well as after release,
when the offender has greater experience and perhaps more incentive to reoffend
(e.g., because of the economic difficulties of the family due to the imprisonment,
the loss of certain social qualifications, association with other offenders, etc.).48
Prison authorities may be assisted by a system of release committees in
predicting the chances of recidivism after release,49 but not necessarily in
eliminating them. Long-term imprisonment may reduce the risk of recidivism,
but it cannot ensure the incapacitation of the offender’s delinquent capabilities.50
The choice of the most appropriate means to incapacitate the offender’s delinquent
capabilities is a social choice, based on the values of any given society. There are
difficulties in assessing the chances that the offender will reoffend because the
prediction is based on the offender’s criminal record51 and on other personal
46
JACK P. GIBBS, CRIME, PUNISHMENT AND DETERRENCE 58 (1975).
47
BARBARA HUDSON, UNDERSTANDING JUSTICE: AN INTRODUCTION TO IDEAS, PERSPECTIVES AND
CONTROVERSIES IN MODERN PENAL THEORY 32 (1996, 2003).
48
Joseph Murray, The Effects of Imprisonment on Families and Children of Prisoners, THE
EFFECTS OF IMPRISONMENT 442 (Alison Liebling and Shadd Maruna eds., 2005); Shadd Maruna
and Thomas P. Le Bel, Welcome Home? Examining the “Reentry Court” Concept from a Strength-
Based Perspective, 4 WESTERN CRIMINOLOGY REVIEW 91 (2003).
49
Malcolm M. Feeley and Jonathan Simon, The New Penology: Notes on the Emerging Strategy of
Corrections and Its Implications, 30 CRIMINOLOGY 449 (1992); Andrew von Hirsch, Incapacitation,
PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY 75 (Andrew von Hirsch, Andrew
Ashworth and Julian Roberts eds., 3rd ed., 2009); ANDREW VON HIRSCH, PAST OR FUTURE CRIMES:
DESERVEDNESS AND DANGEROUSNESS IN THE SENTENCING OF CRIMINALS 176–178 (1985).
50
FRANKLIN E. ZIMRING AND GORDON J. HAWKINS, DETERRENCE: THE LEGAL THREAT IN CRIME CONTROL
(1973).
51
MARK H. MOORE, SUSAN R. ESTRICH, DANIEL MCGILLIS AND WILLIAM SPELLMAN, DEALING WITH
DANGEROUS OFFENDERS: THE ELUSIVE TARGET OF JUSTICE (1985).
characteristics.52 These difficulties have to do with the method used to make such
predictions and not with the substantial need for such assessment.53
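The kind of prediction discussed above, scoring an offender's criminal record and other personal characteristics, can be caricatured as a tiny actuarial instrument. The features, weights, and numbers below are invented for illustration and do not correspond to any real assessment tool:

```python
# Toy actuarial risk-assessment sketch (hypothetical features and weights,
# assumed for illustration only). Recidivism prediction of this kind scores
# record-based and personal characteristics against fixed weights.

def recidivism_risk(prior_convictions: int, age_at_first_offense: int,
                    violent_history: bool) -> float:
    """Return a crude risk score in [0, 1] built from record-based features."""
    score = 0.1 * min(prior_convictions, 5)             # criminal record, capped
    score += 0.2 if age_at_first_offense < 18 else 0.0  # early onset of offending
    score += 0.3 if violent_history else 0.0            # history of violence
    return round(min(score, 1.0), 2)

print(recidivism_risk(prior_convictions=4, age_at_first_offense=16,
                      violent_history=True))  # prints 0.9
```

Even this caricature makes the methodological difficulty noted in the text visible: the resulting score depends entirely on which features and weights the designer of the instrument chooses.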
Because in some cases the incapacitation of delinquent capabilities of offenders
may exceed the maximum penalty for a given offense, the penalty maximum
limitation has become more flexible. In some legal systems, it has been permitted
to impose harsher punishments than specified in the offense if the court reaches the
conclusion that in this way it can protect society from recidivism.54 Moreover, in
some legal systems preventive detention is used after the offender finishes serving
his imprisonment term, if the court finds that the offender is still dangerous to
society despite the fact that the punishment has been served in full.55
In some legal systems, the offender is restricted by the court after being released
from prison because he is assessed to be dangerous to society.56 Restrictions may
apply to specific places of residence (as in the case of sex offenders and pedophiles
who may be restricted from living close to their potential victims) or to certain
professions in which the offender may not engage. The offender may also be
required to undergo medical treatment, to meet with relevant professionals, not to
leave a certain territory, to report to the police periodically, and so on.57
At times, the incapacitating measures are not aimed at the offender but at society
at large. For example, the names and photographs of convicted offenders may be
published after their release from prison as a warning to the public to exercise
caution in dealing with these offenders. These preventive measures are used in
52
Anthony E. Bottoms and Roger Brownsword, Incapacitation and “Vivid Danger”, PRINCIPLED
SENTENCING: READINGS ON THEORY AND POLICY 83 (Andrew von Hirsch, Andrew Ashworth and
Julian Roberts eds., 3rd ed., 2009); Andrew von Hirsch and Andrew Ashworth, Extending
Sentences for Dangerousness: Reflections on the Bottoms-Brownsword Model, PRINCIPLED SEN-
TENCING: READINGS ON THEORY AND POLICY 85 (Andrew von Hirsch, Andrew Ashworth and Julian
Roberts eds., 3rd ed., 2009).
53
Andrew von Hirsch and Lila Kazemian, Predictive Sentencing and Selective Incapacitation,
PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY 95 (Andrew von Hirsch, Andrew
Ashworth and Julian Roberts eds., 3rd ed., 2009); Lila Kazemian and David P. Farrington,
Exploring Residual Career Length and Residual Number of Offenses for Two Generations of
Repeat Offenders, 43 J. OF RESEARCH IN CRIME AND DELINQUENCY 89 (2006).
54
ARNE LONBERG, THE PENAL SYSTEM OF DENMARK (1975); JEAN E. FLOUD AND WARREN YOUNG,
DANGEROUSNESS AND CRIMINAL JUSTICE (1981); LINDA SLEFFEL, THE LAW AND THE DANGEROUS
CRIMINAL (1977); Parole Board, [2003] U.K.H.L. 42, [2004] 1 A.C. 1.
55
W.H. HAMMOND AND EDNA CHAYEN, PERSISTENT CRIMINALS (1963); DAVID A. THOMAS, PRINCIPLES
OF SENTENCING 309 (1980); Lawrence Davidoff and John Barkway, Extended Terms of Imprison-
ment for Persistent Offenders, 21 HOME OFFICE RESEARCH BULLETIN 43 (1986); Andrew von Hirsch,
Prediction of Criminal Conduct and Preventive Confinement of Convicted Persons, 21 BUFF.
L. REV. 717 (1972).
56
See, e.g., article 104 of the Sexual Offences Act, 2003, c.42; article 227 of the Criminal Justice
Act, 2003, c.44; articles 98–101 of the Criminal Justice and Immigration Act, 2008, c.4; Richards,
[2006] E.W.C.A. Crim. 2519, [2007] Crim. L.R. 173.
57
Jonathan Simon, The Ideological Effect of Actuarial Practices, 22 LAW & SOCIETY REV.
771 (1988); Jonathan Simon, Megan’s Law: Crime and Democracy in Late Modern America,
25 LAW & SOCIAL INQUIRY 1111 (2000).
conjunction with other measures such as close monitoring of offenders who are still
considered to be dangerous to the public, despite having completed serving their
penalty. Common monitoring measures are police tracking or electronic bracelets
that enable the police to locate the offender at any time.58
The general justification for restricting the released offender beyond the period
of penalty specified for the given offense as part of incapacitation has to do with the
desire to protect society from the social danger caused by the offender. Substan-
tively, this is not different from the forcible hospitalization of mentally ill persons,
the quarantine imposed on individuals suffering from an infectious disease, revok-
ing the weapons license of persons convicted of violent offenses, or revoking the
driver’s license of epileptic individuals.59
Incapacitation as a general purpose of punishment is designed to physically
prevent the occurrence of further offenses, regardless of the harm actually caused to
society by the former offense. For example, an offender who attempts to commit an
offense but does not complete it because he is caught in the act is still considered
dangerous to society, although he has not caused any actual harm.60 From the point
of view of incapacitation, the harm already caused to society is immaterial, as
incapacitation is a prospective general purpose of punishment, similar to deterrence
and rehabilitation, as noted above.
In general, incapacitation is a prospective general purpose of punishment and it
is not intended to address the offense that has already been committed, only to
prevent the commission of future offenses. The offense already committed serves
incapacitation only as the trigger that initiates the criminal process, including
sentencing and punishment. This trigger may assist in specifying the measures
required to incapacitate the offender’s delinquent capabilities. Consequently, inca-
pacitation is not intended to provide a solution to the social harm that has already
been caused by the commission of the offense.
Incapacitation, however, is designed to deal with the physical capability of the
offender to reoffend. The assumption is that punishment can prevent recidivism.
The primary measures taken by incapacitation, as noted above, are breaking the
linkage between the offender and the opportunity to offend and eliminating the
offender’s physical capability to reoffend. In this way, incapacitation is oriented
toward the future rather than the past, and focuses on the prevention of recidivism.
The primary role of incapacitation is the creation of a better future, free from
reoffending. It is the role of retribution to focus on the past, whereas incapacitation
accepts the fact that the past is beyond change.
58
Joseph B. Vaughn, A Survey of Juvenile Electronic Monitoring and Home Confinement
Programs, 40 JUVENILE & FAM. C. J. 1 (1989).
59
NIGEL WALKER, PUNISHMENT, DANGER AND STIGMA: THE MORALITY OF CRIMINAL JUSTICE
ch. 5 (1980); Marvin E. Wolfgang, Current Trends in Penal Philosophy, 14 ISR. L. REV.
427 (1979).
60
GABRIEL HALLEVY, THE MATRIX OF DERIVATIVE CRIMINAL LIABILITY 75–83 (2012).
61
Norval Morris, Incapacitation within Limits, PRINCIPLED SENTENCING: READINGS ON THEORY AND
POLICY 90 (Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009).
62
MIRKO BAGARIC, PUNISHMENT AND SENTENCING: A RATIONAL APPROACH (2001).
63
Herbert L. Packer, The Practical Limits of Deterrence, CONTEMPORARY PUNISHMENT 102, 105
(Rudolph J. Gerber, Patrick D. McAnany and Norval Morris eds., 1972).
64
Martin P. Kafka, Sex Offending and Sexual Appetite: The Clinical and Theoretical Relevance of
Hypersexual Desire, 47 INT’L J. OF OFFENDER THERAPY AND COMPARATIVE CRIMINOLOGY 439 (2003);
Matthew Jones, Overcoming the Myth of Free Will in Criminal Law: The True Impact of the
Genetic Revolution, 52 DUKE L. J. 1031 (2003); Sanford H. Kadish, Excusing Crime, 75 CAL.
L. REV. 257 (1987).
this way, deterrence, rehabilitation, and incapacitation are all prospective general
purposes of punishment, but each is designed to solve different problems in
preventing further delinquency.65
From the above four purposes of punishment, what are the relevant purposes to
artificial intelligence technology? Retribution is meant to satisfy society more than
it is purposed to the offender. Causing suffer to the offender, itself, has no
prospective value. The suffer may deter the offender, but that is part of the general
purpose of deterrence, not retribution. Retribution may supply some catharsis to the
society and victims through causing suffer to the offender. Punishing machines
through retribution, in this context, would be meaningless and impractical.
Some people, when they are in a hurry and their car suddenly fails to start, get angry. In their anger they may hit the car, kick it, or even shout at it. Punishing machines, any machines, from cars to highly sophisticated artificial intelligence robots, through retribution would be no different from kicking a car. It may ease the anger of some personalities, but no more than that. A machine does not suffer, and as long as retribution is based on suffering, retribution is not very relevant to punishing robots. This is true for both the classic and the modern ("just desert") approaches to retribution.
Moreover, functionally, if retribution serves as a lenient factor in sentencing in order to prevent revenge, this only strengthens retribution's irrelevance to artificial intelligence sentencing. Revenge is assumed to cause the offender more suffering than the official punishment; however, since machines do not experience suffering, the choice between revenge and retribution is meaningless for them.
Deterrence is meant to prevent the commission of the next offense through intimidation. For machines, at the moment, intimidation is a feeling they cannot experience. Intimidation itself is based on the future suffering to be imposed in case the offense is committed. Since machines do not currently experience suffering either, as aforesaid, the reason for intimidation, besides the intimidation itself, is also annihilated when considering the appropriate punishment for robots. However, both retribution and deterrence may be relevant purposes of punishment regarding the human participants in the commission of the offense (e.g., users and programmers).
As to rehabilitation, artificial intelligence systems may go through decision-making processes and take decisions that might seem unreasonable. Sometimes the artificial intelligence system may need external direction in order to refine its decision-making process. This may be part of the machine learning process.
65 LIVINGSTON HALL AND SHELDON GLUECK, CRIMINAL LAW AND ITS ENFORCEMENT 17 (2nd ed., 1958).
6.2 Relevance of Sentencing to Artificial Intelligence Systems 211
Rehabilitation functions in exactly the same way for humans; therefore it may be applicable to artificial intelligence systems as well. The rehabilitation of humans causes them to make better decisions in their daily life from society's point of view. The criminal process may do the same for artificial intelligence systems. The punishment, under this approach, would be directed at refining the machine learning process.
After being rehabilitated, the artificial intelligence system would be able to form better and more accurate decisions, as more limitations are added to its discretion and the process is refined through machine learning. Thus, the punishment, if adjusted correctly to the particular artificial intelligence system, would become part of the machine learning process. Through this process, directed by the rehabilitative punishment, the artificial intelligence system would have better tools to analyze factual data and deal with it. In fact, this is the same effect that rehabilitative punishments have on humans: due to the rehabilitative punishment, they have better tools to face factual reality.
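The court-directed refinement described above maps naturally onto familiar machine-learning practice: adding explicit limitations on the system's discretion and adjusting what it has learned. The following Python sketch is a toy illustration of that idea only; the class, its methods, and the numeric "utilities" are invented for this example and are not taken from the book.

```python
# Hypothetical sketch: "rehabilitation" of an AI system as a court-directed
# refinement of its decision-making process. All names are illustrative.

class DecisionSystem:
    """A toy agent that scores candidate actions and picks the best one."""

    def __init__(self, scores):
        self.scores = dict(scores)   # action -> learned utility
        self.constraints = set()     # actions barred by "rehabilitation"

    def decide(self):
        allowed = {a: s for a, s in self.scores.items()
                   if a not in self.constraints}
        return max(allowed, key=allowed.get)

    def rehabilitate(self, offending_action, penalty=10.0):
        # Two complementary refinements, mirroring the text:
        # (1) add an explicit limitation on the system's discretion, and
        # (2) adjust the learned utility so the lesson generalizes.
        self.constraints.add(offending_action)
        if offending_action in self.scores:
            self.scores[offending_action] -= penalty

agent = DecisionSystem({"dump_waste": 5.0, "recycle": 3.0})
assert agent.decide() == "dump_waste"   # pre-rehabilitation choice (the "offense")
agent.rehabilitate("dump_waste")
assert agent.decide() == "recycle"      # post-rehabilitation choice
```

On this toy view, the "punishment" is simply a forced update to the model, which is why the text treats rehabilitation as continuous with ordinary machine learning.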
Consequently, rehabilitation may be a relevant purpose of punishment for artificial intelligence systems, as it is not based on intimidation or suffering but is directed at creating better performance of the artificial intelligence system. For humans, this consideration may be secondary in most cases; for artificial intelligence systems, however, it may be a primary purpose of punishment. Nevertheless, rehabilitation is not the only consideration that may be relevant for artificial intelligence systems; incapacitation is relevant as well.
As to incapacitation, if an artificial intelligence system commits offenses during its activation and has no capability of changing its ways through inner changes (e.g., through machine learning), only incapacitation may supply an adequate answer. Whether or not the artificial intelligence system understands the meaning of its activity, and whether or not it is equipped with proper tools to perform inner changes, delinquency must still be prevented. In such a situation society must take from the artificial intelligence system its physical capabilities to commit further offenses. The particular artificial intelligence system must be taken out of the circle of delinquency, regardless of its skills. Substantively, this is what society does in equivalent cases of human offenders.66
It may be concluded that for artificial intelligence systems the two relevant considerations of punishment are rehabilitation and incapacitation. Both reflect the extreme edges of sentencing, and both serve the purposes of criminal law towards non-human offenders. When the artificial intelligence system possesses capabilities of performing inner changes that affect its activity, it seems that rehabilitation would be the relevant consideration rather than incapacitation. However, when the system lacks such capabilities, incapacitation would be the appropriate consideration.
66 Martin P. Kafka, Sex Offending and Sexual Appetite: The Clinical and Theoretical Relevance of Hypersexual Desire, 47 INT’L J. OF OFFENDER THERAPY AND COMPARATIVE CRIMINOLOGY 439 (2003); Matthew Jones, Overcoming the Myth of Free Will in Criminal Law: The True Impact of the Genetic Revolution, 52 DUKE L. J. 1031 (2003); Sanford H. Kadish, Excusing Crime, 75 CAL. L. REV. 257 (1987).
Given that sentencing considerations are relevant for artificial intelligence systems, the question is how it would be possible to impose human punishments upon them. For instance, how can imprisonment, a fine, or capital penalty be imposed upon artificial intelligence systems? For this purpose the legal system needs a legal technique of conversion from human penalties to artificial intelligence penalties. The required legal technique may be inspired by the legal technique of converting human penalties for corporations.
Corporations are legal entities in criminal law, and criminal liability may be
imposed upon them as if they were human offenders. When a corporation is found
criminally liable, the question of punishment arises. In general, because there is no
legal difference between corporate and human offenders in the imposition of
criminal liability, there is no reason for substantive differences between them in
punishment, at least not from the point of view of the general purposes of punish-
ment. There may be some technical differences, however, in the way certain
punishments are executed.
Retribution relates to the subjective pricing of suffering, which is affected by the
social harm caused by the offense. The social harm is measured objectively,
regardless of the identity of the offender. Although a corporation may cause greater
harm with a lesser effort, retribution considers the actual harm and not the
offender’s capabilities. For the subjective pricing of suffering, the court must
consider the personal characteristics of the corporation (together with the imper-
sonal characteristics of the offense), in the same way it does in relation to human
offenders. Concerning imposition of certain punishments, some adjustments must
be made, as discussed below.
Deterrence relates to the balance between the expected values of benefit and
punishment resulting from the commission of the offense. The effect of deterrence
through punishment on this balance is not different for corporate and human
offenders. Increasing the expected value of the punishment affects the balance in
the same way for both corporate and human offenders. The corporate rationality
required for deterrence is present in the corporate decision-making processes,
which can be fully affected by the deterrent effect of punishment.
Rehabilitation relates to the offender’s personal rehabilitation potential and seeks an appropriate solution to the sources of the offender’s delinquency. As a general purpose of punishment, rehabilitation may be relevant whether the offender is human or a corporation. A corporation may have rehabilitation potential, as a corporation, and its delinquency may have reasons that can be treated appropriately. Occasionally the offense reveals a delinquent organizational subculture within a corporation that encourages offending and provides incentives for it, directly or indirectly (by disregarding offenses or by unwillingness to prevent their commission).67
Imposing criminal liability on an officer of the corporation for the commission of a given offense while disregarding the roots of the delinquency within the corporation cannot provide an effective solution to corporate delinquency. Often there is only a minimal difference between a corporation that is incapable of changing its delinquent subculture and the associated decision-making process, and a corporation that accepts that subculture.68 At times, the reasons for delinquency are objective (e.g., internal power struggles that paralyze the operation of the corporation). Rehabilitation may appropriately address the roots of the delinquent subculture.
Incapacitation seeks to physically prevent reoffending and stop the social
endangerment posed by the offender. The social endangerment posed by the
offender is evaluated in the same way, whether the offender is human or a
corporation. The opportunities to commit offenses are examined objectively,
based on the behavior of the offender, whether human or a corporation. For
example, the opportunity to release a false report to the tax authorities is based on
the behavior of the offender, not on the offender’s identity. In this context, the
measures of incapacitation are determined based on the social endangerment posed
by the offender, regardless of the offender’s legal identity.
In conclusion, there is no legal difference between human and corporate
offenders as far as the general purposes of punishment are concerned, but there
may be some differences in the way in which certain punishments are carried out.
When a fine is imposed, there is not much difference between human and corporate
offenders, and paying the fine is not physically different from paying taxes. But the
question arises how imprisonment is carried out when the offender is a corporation.
The same question may arise in the case of probation, capital penalty, public
service, etc., all of which are interpreted as physical punishments.
Because no physical punishments have been planned ex ante for corporations, it
has been argued that they are inapplicable to corporations and that therefore, in
these cases, corporations are unpunishable.69 This argument is incorrect for two
main reasons. First, in the case of most offenses the punishment can be converted
into other punishments, including fines. Second, in general, all punishments are
67 PETER A. FRENCH, COLLECTIVE AND CORPORATE RESPONSIBILITY 47 (1984).
68 Stuart Field and Nico Jorg, Corporate Liability and Manslaughter: Should We Be Going Dutch?, [1991] Crim. L.R. 156 (1991).
69 HARRY G. HENN AND JOHN R. ALEXANDER, CORPORATIONS AND OTHER BUSINESS ENTERPRISES 184 (3rd ed., 1983); People v. Strong, 363 Ill. 602, 2 N.E.2d 942 (1936); State v. Truax, 130 Wash. 69, 226 P. 259 (1924); United States v. Union Supply Co., 215 U.S. 50, 30 S.Ct. 15, 54 L.Ed. 87 (1909); State v. Ice & Fuel Co., 166 N.C. 366, 81 S.E. 737 (1914); Commonwealth v. McIlwain School Bus Lines Inc., 283 Pa.Super. 1, 423 A.2d 413 (1980).
applicable and relevant to both humans and corporations,70 although in the case of
some punishments it is necessary to make some adjustments. These adjustments,
however, do not negate the applicability of the punishments.71
Not only has criminal liability been imposed upon corporations for centuries, but
corporations have also been sentenced, and not only to fines. Corporations are
punished in various ways, including imprisonment. Note that corporations are
punished separately from their human officers (directors, managers, employees,
etc.), exactly in the way that criminal liability is imposed upon them separately
from the criminal liability, if any, of their human officers. There is no debate over
the question whether corporations should be punished using a variety of
punishments, including imprisonment, the question concerns only on the way in
which to do it.72
To answer the question of “how,” a general legal technique of conversion is
needed. This operation is carried out in three principal stages. First, the general
punishment itself (e.g., imprisonment, fine, probation, death, etc.) is analyzed
regarding its roots of meaning. Second, these roots are sought in the corporation.
Third, the punishment is adjusted according to the roots found in the corporation.
For example, in the case of imposition of incarceration on corporations, first
incarceration is traced back to its roots in the act of depriving individuals of their
freedom, then a meaning is sought for the concept of freedom for corporations.
After this meaning has been understood, in the third and final stage the court
imposes a punishment that is the equivalent of depriving a corporation of its
freedom. This is how the general legal technique of conversion works in the case
of sentencing of corporations. At times, this requires the court to be creative in the
adjustments required to make punishments applicable to corporations, but the
general framework is clear, workable, and it has been implemented with all types
of punishments imposed on all types of corporations.73
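The three-stage conversion technique described above can be restated as a simple lookup procedure: reduce the punishment to its root of meaning, find that root's meaning for the non-human offender, and emit the adjusted punishment. The Python sketch below is purely hypothetical; the tables, the offender kinds, and the worded "adjusted punishments" are illustrative assumptions, not statements of legal doctrine.

```python
# Hypothetical sketch of the three-stage conversion technique.
# All table entries are illustrative paraphrases, not authoritative rules.

# Stage 1: each punishment's root of meaning.
ROOTS = {
    "imprisonment": "deprivation of liberty",
    "capital penalty": "deprivation of life",
    "fine": "deprivation of property",
}

# Stages 2-3: what each root means, adjusted to a given kind of offender.
MEANINGS = {
    ("deprivation of liberty", "corporation"):
        "restriction of corporate activity under custody of the court",
    ("deprivation of life", "corporation"):
        "forced dissolution of the corporation",
    ("deprivation of property", "corporation"):
        "payment from corporate assets",
    ("deprivation of liberty", "ai system"):
        "restriction of the system's activity under supervision for a determinate term",
    ("deprivation of life", "ai system"):
        "permanent shutdown with no option of reactivation",
    ("deprivation of property", "ai system"):
        "payment out of property or insurance held for the system",
}

def convert(punishment, offender_kind):
    root = ROOTS[punishment]                # stage 1: find the root of meaning
    return MEANINGS[(root, offender_kind)]  # stages 2-3: locate and adjust

assert convert("capital penalty", "ai system") == \
    "permanent shutdown with no option of reactivation"
```

The point of the sketch is only that the technique is mechanical once the roots are identified; the creative, contested work lies in filling the second table.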
70 John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry Into the Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981); STEVEN BOX, POWER, CRIME AND MYSTIFICATION 16–79 (1983); Brent Fisse and John Braithwaite, The Allocation of Responsibility for Corporate Crime: Individualism, Collectivism and Accountability, 11 SYDNEY L. REV. 468 (1988).
71 Gerard E. Lynch, The Role of Criminal Law in Policing Corporate Misconduct, 60 LAW & CONTEMP. PROBS. 23 (1997); Richard Gruner, To Let the Punishment Fit the Organization: Sanctioning Corporate Offenders Through Corporate Probation, 16 AM. J. CRIM. L. 1 (1988); Steven Walt and William S. Laufer, Why Personhood Doesn’t Matter: Corporate Criminal Liability and Sanctions, 18 AM. J. CRIM. L. 263 (1991).
72 Stuart Field and Nico Jorg, Corporate Liability and Manslaughter: Should We Be Going Dutch?, [1991] Crim. L.R. 156 (1991).
73 Gerard E. Lynch, The Role of Criminal Law in Policing Corporate Misconduct, 60 LAW & CONTEMP. PROBS. 23 (1997); Richard Gruner, To Let the Punishment Fit the Organization: Sanctioning Corporate Offenders Through Corporate Probation, 16 AM. J. CRIM. L. 1 (1988); Steven Walt and William S. Laufer, Why Personhood Doesn’t Matter: Corporate Criminal Liability and Sanctions, 18 AM. J. CRIM. L. 263 (1991).
74 United States v. Allegheny Bottling Company, 695 F.Supp. 856 (1988).
75 Ibid, at p. 858.
76 Ibid.
77 John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry Into the Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981); STEVEN BOX, POWER, CRIME AND MYSTIFICATION 16–79 (1983); Brent Fisse and John Braithwaite, The Allocation of Responsibility for Corporate Crime: Individualism, Collectivism and Accountability, 11 SYDNEY L. REV. 468 (1988).
78 Allegheny Bottling Company case, supra note 75, at p. 861.
79 Ibid, at p. 861: “Such restraint of individuals is accomplished by, for example, placing them in the custody of the United States Marshal. Likewise, corporate imprisonment can be accomplished by simply placing the corporation in the custody of the United States Marshal. The United States Marshal would restrain the corporation by seizing the corporation’s physical assets or part of the assets or restricting its actions or liberty in a particular manner. When this sentence was contemplated, the United States Marshal for the Eastern District of Virginia, Roger Ray, was contacted. When asked if he could imprison Allegheny Pepsi, he stated that he could. He stated that he restrained corporations regularly for bankruptcy court. He stated that he could close the physical plant itself and guard it. He further stated that he could allow employees to come and go and limit certain actions or sales if that is what the Court imposes. Richard Lovelace said some three hundred years ago, ‘stone walls do not a prison make, nor iron bars a cage.’ It is certainly true that we erect our own walls or barriers that restrain ourselves. Any person may be imprisoned if capable of being restrained in some fashion or in some way, regardless of who imposes it. Who am I to say that imprisonment is impossible when the keeper indicates that it can physically be done? Obviously, one can restrain a corporation. If so, why should it be more privileged than an individual citizen? There is no reason, and accordingly, a corporation should not be more privileged. Cases in the past have assumed that corporations cannot be imprisoned, without any cited authority for that proposition. . . . This Court, however, has been unable to find any case which actually held that corporate imprisonment is illegal, unconstitutional or impossible. Considerable confusion regarding the ability of courts to order a corporation imprisoned has been caused by courts mistakenly thinking that imprisonment necessarily involves incarceration in jail. . . . But since imprisonment of a corporation does not necessarily involve incarceration, there is no reason to continue the assumption, which has lingered in the legal system unexamined and without support, that a corporation cannot be imprisoned. Since the Marshal can restrain the corporation’s liberty and has done so in bankruptcy cases, there is no reason that he cannot do so in this case as he himself has so stated prior to the imposition of this sentence”.
Thus, imprisonment may be applied not only to human but also to corporate offenders. Following the same approach, imprisonment is not the only penalty applicable to corporations; other penalties can be converted as well, even if they were originally designed for human offenders. And if this is true for imprisonment, which is an essentially human penalty, a fine can easily be collected from corporations in the same way as taxes are. Thus, in determining the type of punishment and its scope based on the general purposes of punishment, it is immaterial whether the offense was committed by humans or by corporations. After the court imposes the appropriate punishment, it may be necessary to make some adjustments to some of the punishments.
This insight raises, of course, the equivalent question for artificial intelligence systems. Using the general legal technique of conversion presented above, as taken from the corporate delinquency world, human punishments may be applicable and actually imposed upon artificial intelligence systems in the very same way that they are applicable and actually imposed upon corporations. We shall now examine the applicability to artificial intelligence systems of each of the common punishments in modern criminal law: capital penalty, imprisonment, probation, public service, and fine.
80 RUSS VERSTEEG, EARLY MESOPOTAMIAN LAW 126 (2000); G. R. DRIVER AND JOHN C. MILES, THE BABYLONIAN LAWS, VOL. I: LEGAL COMMENTARY 206, 495–496 (1952): “The capital penalty is most often expressed by saying that the offender ‘shall be killed’. . .; this occurs seventeen times in the first thirty-four sections. A second form of expression, which occurs five times, is that ‘they shall kill’. . . the offender”.
81 Frank E. Hartung, Trends in the Use of Capital Punishment, 284(1) ANNALS OF THE AMERICAN ACADEMY OF POLITICAL AND SOCIAL SCIENCE 8 (1952).
82 Gregg v. Georgia, 428 U.S. 153, 96 S.Ct. 2909, 49 L.Ed.2d 859 (1976).
83 Provenzano v. Moore, 744 So.2d 413 (Fla. 1999); Dutton v. State, 123 Md. 373, 91 A. 417 (1914); Campbell v. Wood, 18 F.3d 662 (9th Cir. 1994); Wilkerson v. Utah, 99 U.S. (9 Otto) 130, 25 L.Ed. 345 (1878); People v. Daugherty, 40 Cal.2d 876, 256 P.2d 911 (1953); Gray v. Lucas, 710 F.2d 1048 (5th Cir. 1983); Hunt v. Nuth, 57 F.3d 1327 (4th Cir. 1995).
84 ROBERT M. BOHM, DEATHQUEST: AN INTRODUCTION TO THE THEORY AND PRACTICE OF CAPITAL PUNISHMENT IN THE UNITED STATES 74 (1999).
85 Peter Fitzpatrick, “Always More to Do”: Capital Punishment and the (De)Composition of Law, THE KILLING STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE 117 (Austin Sarat ed., 1999); Franklin E. Zimring, The Executioner’s Dissonant Song: On Capital Punishment and American Legal Values, THE KILLING STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE 137 (Austin Sarat ed., 1999).
Since retribution and deterrence are irrelevant for artificial intelligence sentencing, the only general consideration that may support capital penalty in artificial intelligence sentencing is incapacitation.
There is no doubt that a dead person is incapacitated in relation to the commission of further offenses; therefore the most dominant punishment consideration behind the death penalty is incapacitation.86 Death neutralizes the delinquent capabilities of the offender, and no further offense may be committed. Accordingly, the question concerns the applicability of capital penalty to artificial intelligence systems: how can the death penalty be imposed upon artificial intelligence systems?
First, the capital penalty should be analyzed as to its roots of meaning. Second, these roots should be searched for in artificial intelligence systems. Third, the punishment should be adjusted to these roots in artificial intelligence systems.
Functionally, capital penalty is deprivation of life. Although this deprivation of life may affect not only the executed offender but other persons as well (e.g., relatives, employees, etc.), the essence of the death penalty is the death of the offender, which consists in depriving the offender of life. When the offender is human, life means the person’s very existence as a functioning creature. When the offender is a corporation or an artificial intelligence system, its life may be defined through its activity.
A living artificial intelligence system is a functioning artificial intelligence system; therefore the “life” of an artificial intelligence system is its capability of functioning as such. Stopping the artificial intelligence system’s activity does not necessarily mean the “death” of the system. Death means the permanent incapacitation of the system’s “life”. Therefore, capital penalty for artificial intelligence systems means permanent shutdown. This act incapacitates the system’s capabilities, and no further offenses or any other activity are to be expected. When the artificial intelligence system is shut down by the court’s order, it means that society prohibits the operation of that particular entity because it is too dangerous for society.
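The notion of a "permanent shutdown with no option of reactivation" can be sketched in code. The Python toy below is a hypothetical illustration only; the class, the flag names, and the use of an exception to model the court's bar on reactivation are all invented for this example.

```python
# Hypothetical sketch: capital penalty for an AI system as a permanent,
# irreversible shutdown ordered by a court. All names are illustrative.

class AISystem:
    def __init__(self):
        self.active = True
        self.decommissioned = False   # set once, never cleared

    def execute_capital_penalty(self):
        # The "death" of the system: permanent incapacitation of its "life",
        # i.e., of its capability of functioning as such.
        self.active = False
        self.decommissioned = True

    def reactivate(self):
        # The court's order bars reactivation of a decommissioned system.
        if self.decommissioned:
            raise PermissionError("court order: the system may not be reactivated")
        self.active = True

robot = AISystem()
robot.execute_capital_penalty()
try:
    robot.reactivate()
except PermissionError:
    pass  # reactivation is barred permanently
assert not robot.active
```

The design point mirrors the text: merely stopping activity (`active = False`) is not "death"; the irreversible `decommissioned` flag is what distinguishes capital penalty from a temporary restriction.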
Such applicability of capital penalty to artificial intelligence systems serves the purposes both of capital penalty and of incapacitation (as a general purpose of sentencing) in relation to artificial intelligence systems. When the offender is too dangerous for society and society has decided to impose the death penalty, prospectively, if this punishment is acceptable in the particular legal system, it is aimed at the total and final incapacitation of the offender. This is true for human offenders, for corporations, and for artificial intelligence systems. For artificial intelligence systems, permanent incapacitation is expressed by an absolute shutdown under the court’s order, with no option of ever reactivating the system.
Such a system would not be involved in delinquent events anymore. It may be argued that such a shutdown may affect other innocent persons (e.g., the manufacturer of the system, its programmers, users, etc.). However, this is true not only for artificial intelligence systems, but for human offenders and corporations as well. When the offender is executed, this affects his innocent family (in the case of a human offender) or its employees, directors, managers, shareholders, etc. (in the case of a corporation). When the offender is an artificial intelligence system, its execution also affects other innocent persons, and this is not unique to artificial intelligence systems. Thus, capital penalty may be applicable to artificial intelligence systems.
86 Anne Norton, After the Terror: Mortality, Equality, Fraternity, THE KILLING STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE 27 (Austin Sarat ed., 1999); Hugo Adam Bedau, Abolishing the Death Penalty Even for the Worst Murderers, THE KILLING STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE 40 (Austin Sarat ed., 1999).
87 Sean McConville, The Victorian Prison: England 1865–1965, THE OXFORD HISTORY OF THE PRISON 131 (Norval Morris and David J. Rothman eds., 1995); THORSTEN J. SELLIN, SLAVERY AND THE PENAL SYSTEM (1976); HORSFALL J. TURNER, THE ANNALS OF THE WAKEFIELD HOUSE OF CORRECTIONS FOR THREE HUNDRED YEARS 154–172 (1904).
88 JOHN HOWARD, THE STATE OF PRISONS IN ENGLAND AND WALES (1777, 1996).
89 David J. Rothman, For the Good of All: The Progressive Tradition in Prison Reform, HISTORY AND CRIME 271 (James A. Inciardi and Charles E. Faupel eds., 1980).
90 Roy D. King, The Rise and Rise of Supermax: An American Solution in Search of a Problem?, 1 PUNISHMENT AND SOCIETY 163 (1999); CHASE RIVELAND, SUPERMAX PRISONS: OVERVIEW AND GENERAL CONSIDERATIONS (1999); JAMIE FELLNER AND JOANNE MARINER, COLD STORAGE: SUPER-MAXIMUM SECURITY CONFINEMENT IN INDIANA (1997).
91 DORIS LAYTON MACKENZIE AND EUGENE E. HEBERT, CORRECTIONAL BOOT CAMPS: A TOUGH INTERMEDIATE SANCTION (1996); Sue Frank, Oklahoma Camp Stresses Structure and Discipline, 53 CORRECTIONS TODAY 102 (1991); ROBERTA C. CRONIN, BOOT CAMPS FOR ADULT AND JUVENILE OFFENDERS: OVERVIEW AND UPDATE (1994).
Retribution supports imprisonment insofar as it makes the prisoner suffer. Deterrence supports imprisonment, as the suffering in prison may deter the offender from recidivism and potential offenders from offending. However, both retribution and deterrence are irrelevant for artificial intelligence systems, as artificial intelligence systems experience neither suffering nor fear. Since rehabilitation and incapacitation are relevant for artificial intelligence systems, imprisonment should be evaluated accordingly.
When the offender’s liberty is deprived, society may use this term for the offender’s rehabilitation through the initiation of inner change. The inner change may be the consequence of activity in prison, as aforesaid. If the offender accepts that inner change and does not return to delinquency, the imprisonment is considered successful, as the offender has been rehabilitated. Moreover, when the offender is under strict supervision within the prison, his capability to commit further offenses is dramatically reduced, and this may be considered incapacitation, if that supervision actually prevents recidivism.
Accordingly, the question concerns the applicability of imprisonment to artificial intelligence systems: how can imprisonment be imposed upon artificial intelligence systems? First, imprisonment should be analyzed as to its roots of meaning. Second, these roots should be searched for in artificial intelligence systems. Third, the punishment should be adjusted to these roots in artificial intelligence systems.
Functionally, imprisonment is deprivation of liberty. Although this deprivation of liberty may affect not only the imprisoned offender but other persons as well (e.g., relatives, employees, etc.), the essence of imprisonment is the deprivation of the offender’s liberty, which consists in restricting the offender’s activity. When the offender is human, liberty means the person’s freedom to act in any way. When the offender is a corporation or an artificial intelligence system, its liberty may also be defined through its activity. The artificial intelligence system’s liberty is the exercise of its capabilities with no restrictions. This concerns both the very exercise of the capabilities and the content of these capabilities.
Consequently, the imposition of imprisonment on an artificial intelligence system is expressed by depriving it of its liberty to act, through restricting its activity for a determinate term and under tight supervision. During this time the artificial intelligence system may be repaired in order to prevent the commission of further offenses. The repair of the artificial intelligence system may be more efficient while the system is incapacitated, and when it is carried out under the court’s order. This situation may serve both purposes of rehabilitation and incapacitation, which are the relevant sentencing purposes for artificial intelligence systems. When the artificial intelligence system is under custody, restriction, and supervision, its actual capabilities to offend are incapacitated.
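The combination described above, a determinate term of restricted activity (incapacitation) during which the system is repaired (rehabilitation), can be sketched as follows. The Python toy is a hypothetical illustration; the class, the day-by-day term counter, and the `repair_work` flag are invented for this example.

```python
# Hypothetical sketch: "imprisonment" of an AI system as a determinate term
# of restricted activity under supervision, during which the system is
# repaired. All names are illustrative.

class SentencedSystem:
    def __init__(self, term_days):
        self.term_remaining = term_days
        self.repaired = False

    @property
    def restricted(self):
        # Incapacitation: while the term runs, the system may not act.
        return self.term_remaining > 0

    def act(self, action):
        if self.restricted:
            raise PermissionError("activity restricted during the term")
        return action

    def serve_day(self, repair_work=False):
        # Rehabilitation: repair may be performed while the system is held.
        if repair_work:
            self.repaired = True
        self.term_remaining = max(0, self.term_remaining - 1)

sys_ = SentencedSystem(term_days=2)
sys_.serve_day(repair_work=True)   # repaired while incapacitated
sys_.serve_day()
assert not sys_.restricted and sys_.repaired
sys_.act("resume service")         # term over: full activity restored
```

The determinate term is what distinguishes this sketch from the capital-penalty case: here the restriction expires and the (repaired) system returns to full activity.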
When the artificial intelligence system is repaired through inner changes, initiated by external factors (e.g., programmers acting under the court’s order) and undergone during the term of restriction, this is substantive rehabilitation, for the system is worked on so as to reduce the chances of its involvement in further delinquency. The actual social value of imposing imprisonment on artificial intelligence systems is real. The dangerous system is taken away from society so that it may be repaired, and meanwhile it is not capable of causing further harm to society. When this process is complete, the system may be returned to full activity. If the system has no chance of rehabilitation, incapacitation may take the major role and dictate a long period of imprisonment or even capital penalty.
Suspended imprisonment is a conditional penalty.92 The offender is warned that if a further offense is committed, the full penalty of imprisonment will be imposed for the newer offense, and in addition the offender will have to serve another term of imprisonment for the first offense. This penalty is aimed at keeping the offender away from offending, at least while the condition is still valid. The actual way of imposing this penalty is by adding the relevant line to the offender’s criminal record. Therefore, the relevant question here does not concern the actual execution of the penalty, but its social meaning when imposed on artificial intelligence systems.
For artificial intelligence systems, suspended imprisonment is an alert calling for reconsideration of their course of conduct. This process may be led by programmers, users, and manufacturers, in the same way that human offenders may be assisted by their relatives or by professionals (e.g., psychologists, social workers, etc.) and corporate offenders may be assisted by their officers or professionals. This is a more lenient measure that calls for reconsidering the course of conduct. In its essence, suspended imprisonment is not substantially different in this context from imprisonment, although for humans it may be extremely different, since it spares the human offender the suffering and keeps him out of delinquency through intimidation (deterrence).
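The mechanics of suspended imprisonment, a conditional entry in the record that matures into an actual term only upon reoffending, can be sketched as follows. The Python toy is hypothetical; the class, the record structure, and the term arithmetic are invented for this example.

```python
# Hypothetical sketch: suspended imprisonment as a conditional entry in the
# offender's record that is activated only by a further offense.

class Record:
    def __init__(self):
        self.suspended_terms = []   # (offense, term) conditions on file
        self.terms_to_serve = 0     # months actually to be served

    def impose_suspended(self, offense, term):
        # Imposition is merely "adding the relevant line" to the record.
        self.suspended_terms.append((offense, term))

    def report_new_offense(self, term_for_new_offense):
        # A new offense triggers its own term plus every suspended term.
        self.terms_to_serve += term_for_new_offense
        for _, term in self.suspended_terms:
            self.terms_to_serve += term
        self.suspended_terms.clear()

rec = Record()
rec.impose_suspended("unsafe operation", term=6)
assert rec.terms_to_serve == 0     # nothing is served while the condition holds
rec.report_new_offense(12)
assert rec.terms_to_serve == 18    # new term plus the activated suspension
```

As the text notes, the penalty's execution is trivial (a record entry); what differs between human and AI offenders is only the social meaning of the warning.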
92 MARC ANCEL, SUSPENDED SENTENCE 14–17 (1971); Marc Ancel, The System of Conditional Sentence or Sursis, 80 L. Q. REV. 334, 336 (1964).
93 United Nations, Probation and Related Measures, UN DEPARTMENT OF SOCIAL AFFAIRS 29–30 (1951).
94 DAVID J. ROTHMAN, CONSCIENCE AND CONVENIENCE: THE ASYLUM AND ITS ALTERNATIVES IN PROGRESSIVE AMERICA (1980); FRANK SCHMALLEGER, CRIMINAL JUSTICE TODAY: AN INTRODUCTORY TEXT FOR THE 21st CENTURY 454 (2003).
222 6 Punishibility of Artificial Intelligence Technology
and inspect him for not committing further offenses. Probation is a dominantly
rehabilitative penalty, and it matches offenders who have a high potential for
rehabilitation. Consequently, the court needs an accurate diagnosis of that potential,
prepared by the probation service, in order to sentence the offender to
probation.95
Retribution is irrelevant to probation, as probation is not intended to make the
offender suffer. Deterrence is likewise irrelevant to probation, since probation is
perceived as a lenient penalty whose deterrent value is negligible. Moreover,
both retribution and deterrence are irrelevant to artificial intelligence sentencing,
as discussed above. Incapacitation is relevant to artificial intelligence sentencing, but it is
not reflected in probation, unless the particular probation framework is extremely
tight and the offender's delinquent capabilities are actually incapacitated. However,
the dominant purpose of probation is, of course, rehabilitation, as probation is
intended to rehabilitate the offender by giving him relevant social measures
for reintegration into society.
Accordingly, the question concerns the applicability of probation to artificial
intelligence systems: how can probation be imposed upon them? First, probation
should be analyzed as to its roots of meaning. Second, these roots should be
searched for in artificial intelligence systems. Third, the punishment should be
adjusted to these roots in artificial intelligence systems.
Functionally, probation consists of supervising the offender and granting him
measures for reintegration into society. These measures should match the particular
type of delinquency that was the immediate cause of the offender's sentencing.96
The process of probation thus functions as a functional correction of the offender.
When an offense is committed by an artificial intelligence system, the system
should be diagnosed to determine whether it can be repaired. At this stage human
offenders are diagnosed as to their potential for rehabilitation. Both kinds of
diagnosis are performed by professionals. Human offenders may be diagnosed by
probation-service staff, social workers, psychologists, psychiatrists, physicians, etc.
Artificial intelligence systems may be diagnosed by technology experts. If the
diagnosis shows no potential for rehabilitation, the ultimate purpose of sentencing
becomes incapacitation, as society wishes to prevent further harm. However, if the
diagnosis is positive, and the offender has a high potential to be rehabilitated,
probation may be considered for implementing the rehabilitative purpose of
sentencing. This is true for human offenders, corporations, and artificial intelligence
systems alike. The core question within artificial
95 Paul W. Keve, The Professional Character of the Presentence Report, 26 FEDERAL PROBATION 51 (1962).
96 HARRY E. ALLEN, ERIC W. CARLSON AND EVALYN C. PARKS, CRITICAL ISSUES IN ADULT PROBATION (1979); Crystal A. Garcia, Using Palmer's Global Approach to Evaluate Intensive Supervision Programs: Implications for Practice, 4 CORRECTION MANAGEMENT QUARTERLY 60 (2000); ANDREW WRIGHT, GWYNETH BOSWELL AND MARTIN DAVIES, CONTEMPORARY PROBATION PRACTICE (1993); MICHAEL CAVADINO AND JAMES DIGNAN, THE PENAL SYSTEM: AN INTRODUCTION 137–140 (2002).
6.2 Relevance of Sentencing to Artificial Intelligence Systems 223
97 John Harding, The Development of the Community Service, ALTERNATIVE STRATEGIES FOR COPING WITH CRIME 164 (Norman Tutt ed., 1978); HOME OFFICE, REVIEW OF CRIMINAL JUSTICE POLICY (1977); Ashlee Willis, Community Service as an Alternative to Imprisonment: A Cautionary View, 24 PROBATION JOURNAL 120 (1977).
Public service has another dimension, as it relates to the community. The public
service is carried out within the offender's community to signify that the offender is
part of that community and that harm caused to the community reflects on the
offender himself.98 In many cases public service is added to probation in order to
raise the chances of the offender's full rehabilitation within the community.99
Public service has more than mere compensational value. It is intended to make the
offender understand the needs of the community and be sensitive to those needs.
Public service is part of the learning and reintegration processes that the offender
experiences.
Retribution is irrelevant to public service, as public service is not intended to
make the offender suffer. Deterrence is likewise irrelevant to public service, since
public service is perceived as a lenient penalty whose deterrent value is negligible.
Moreover, both retribution and deterrence are irrelevant to artificial intelligence
sentencing, as discussed above. Incapacitation is relevant to artificial intelligence
sentencing, but it is not reflected in public service, unless the particular public
service framework is extremely tight and the offender's delinquent capabilities
are actually incapacitated.
However, the dominant purpose of public service is rehabilitation, as it is
intended to rehabilitate the offender through learning and reintegration into
society. Accordingly, the question concerns the applicability of public service
to artificial intelligence systems: how can public service be imposed upon them?
First, public service should be analyzed as to its roots of meaning. Second, these
roots should be searched for in artificial intelligence systems. Third, the punishment
should be adjusted to these roots in artificial intelligence systems.
Functionally, public service is supervised compensation to society through the
experience of social integration. The offender widens his experience with society,
and that enables him to be integrated more easily. Widening the offender's social
experience is beneficial for society, as it includes a compensational dimension.
Social experience is not exclusive to human offenders. Both corporations and
artificial intelligence systems have strong interactions with the community. Public
service may empower and strengthen these interactions and make them the basis
for the required inner change.
For instance, consider a medical expert artificial intelligence system, equipped
with machine learning capabilities, that is used in a private clinic to perform more
accurate medical diagnoses for patients. The system has been found negligent,
and the court has imposed public service. Consequently, in order to implement
this penalty, the system may be used by the public medical services or public
98 Julie Leibrich, Burt Galaway and Yvonne Underhill, Community Sentencing in New Zealand: A Survey of Users, 50 FEDERAL PROBATION 55 (1986).
99 James Austin and Barry Krisberg, The Unmet Promise of Alternatives, 28 JOURNAL OF RESEARCH IN CRIME AND DELINQUENCY 374 (1982); Mark S. Umbreit, Community Service Sentencing: Jail Alternatives or Added Sanction?, 45 FEDERAL PROBATION 3 (1981).
hospitals. This serves two main goals. First, the system is exposed to more cases,
and through machine learning it may optimize its functioning. Second, this may be
considered compensation to society for the harm caused by the offense.
At the end of the public service term the artificial intelligence system is more
experienced, and if the machine learning process was effective, the system's
performance is optimized. Since public service is supervised, whether or not it is
accompanied by probation, the machine learning process or other inner
processes are directed toward the prevention of further offenses during the public
service. By the end of the public service, the artificial intelligence system has
contributed time and resources for the benefit of society, and that may be
considered compensation for the social harm caused by the commission of the
offense. Thus, the public service of artificial intelligence systems resembles, in its
substance, human public service.
It may be argued that this compensation is actually contributed by the
manufacturers or users of the system, since they suffer the absence of the system's
activity. This is true not only for artificial intelligence systems, but also for human
offenders and corporations. When a human offender carries out public service, his
absence is felt by his family and relatives. When a corporation carries out public
service, its resources are unavailable to its workers, directors, clients, etc. This
absence is part of carrying out the public service, regardless of the identity of the
offender; therefore, artificial intelligence systems are not unique in this context.
100 FIORI RINALDI, IMPRISONMENT FOR NON-PAYMENT OF FINES (1976); GERHARDT GREBING, THE FINE IN COMPARATIVE LAW: A SURVEY OF 21 COUNTRIES (1982).
101 LEON RADZINOWICZ AND ROGER HOOD, A HISTORY OF ENGLISH CRIMINAL LAW AND ITS ADMINISTRATION FROM 1750 VOL. 5: THE EMERGENCE OF PENAL POLICY (1986); PETER YOUNG, PUNISHMENT, MONEY AND THE LEGAL ORDER: AN ANALYSIS OF THE EMERGENCE OF MONETARY SANCTIONS WITH SPECIAL REFERENCE TO SCOTLAND (1987).
age increasing use of fines.102 For the purpose of efficient collection of fines, in
most legal systems the court may impose imprisonment, public service, or
confiscation of property if the fine is not paid.103 The imposed fine is not necessarily
proportional to the actual harm caused by the offense, but rather to the severity of
the offense.
Retribution may be relevant to fines, if the fine is proportional to the social harm
and reflects it. Deterrence may also be relevant to fines, if the fine causes a deterrent
loss of property. However, as discussed above, both retribution and deterrence are
irrelevant to artificial intelligence sentencing. Fines have no dominant rehabilitative
value, although paying the fine may require additional work, leaving less free time
to commit offenses. Accordingly, the question concerns the applicability of the fine
to artificial intelligence systems: how can a fine be imposed upon artificial
intelligence systems? The main difficulty is that artificial intelligence systems
possess no money or other property of their own.
Whereas corporations possess property of their own, so that paying fines is
the easiest way of imposing a penalty upon corporations, artificial intelligence
systems do not possess property. First, the fine should be analyzed as to its roots of
meaning. Second, these roots should be searched for in artificial intelligence
systems. Third, the punishment should be adjusted to these roots in artificial
intelligence systems. In addition, this solution should address cases of ineffective
fines, i.e., when there are difficulties in collecting them.
Functionally, a fine is a forced contribution of valuable property to society. In
most cases it takes the form of money, but in certain legal systems it takes the form
of other valuable property. In certain legal systems the sum of money is determined
by evaluating the cost of a working day, week, or month of the defendant, so that
the fine matches this cost.104 Moreover, even if the fine is determined as an absolute
sum, its absence is translated by the offender, in most cases, into additional
working hours to make up for that absence. Thus, the fine actually reflects working
hours, days, weeks, or months, depending on the relevant sum.
As discussed above in the context of public service, the productivity of an
artificial intelligence system may also be evaluated in working hours for the
community. It is true that artificial intelligence systems do not possess property of
their own, but they possess the capability of working, which is valuable and may
be measured in monetary terms. For instance, a working hour of a medical expert artificial
102 Gail S. Funke, The Economics of Prison Crowding, 478 ANNALS OF THE AMERICAN ACADEMY OF POLITICAL AND SOCIAL SCIENCES 86 (1985); Thomas Mathiesen, The Viewer Society: Michel Foucault's 'Panopticon' Revisited, 1 THEORETICAL CRIMINOLOGY 215 (1997).
103 SALLY T. HILLSMAN AND SILVIA S. G. CASALE, ENFORCEMENT OF FINES AS CRIMINAL SANCTIONS: THE ENGLISH EXPERIENCE AND ITS RELEVANCE TO AMERICAN PRACTICE (1986); Judith A. Greene, Structuring Criminal Fines: Making an 'Intermediate Penalty' More Useful and Equitable, 13 JUSTICE SYSTEM JOURNAL 37 (1988); NIGEL WALKER AND NICOLA PADFIELD, SENTENCING: THEORY, LAW AND PRACTICE (1996).
104 MICHAEL H. TONRY AND KATHLEEN HATLESTAD, SENTENCING REFORM IN OVERCROWDED TIMES: A COMPARATIVE PERSPECTIVE (1997).
intelligence system may be valued at $500. Let us assume that a fine of $1,000 has
been imposed on this particular artificial intelligence system. In this case, the fine
equals two working hours of the system. Consequently, the system may pay the
fine through the only means of payment it possesses: working hours.
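The arithmetic of this conversion can be sketched in a few lines of code (an illustrative sketch, not part of the original argument; the function name is hypothetical, and the figures are those of the example above):

```python
# Illustrative sketch: converting a monetary fine into working hours,
# the only "payment measure" an AI system possesses (hypothetical helper).

def fine_to_working_hours(fine, hourly_value):
    """Return the number of working hours equivalent to the fine."""
    if hourly_value <= 0:
        raise ValueError("hourly value must be positive")
    return fine / hourly_value

# A working hour of the medical expert system is valued at $500,
# and a fine of $1,000 has been imposed:
hours = fine_to_working_hours(1000, 500)
print(hours)  # 2.0 working hours contributed to the community
```

The same conversion underlies the enforcement options discussed below, where working hours substitute for money that the system does not possess.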
The working hours are contributed to society, in the same way that public service
is contributed. When a human offender does not have the required sum of money to
pay the fine, other penalties are imposed, as noted above. One of them is public
service, which may be measured in a certain number of working hours. Working
hours as a means of payment may serve not only the purpose of paying the
fine (through an exact number of working hours), but also, optionally, the purpose
of enforcing the fine together with public service, imprisonment, or any other
relevant penalty.
It may be argued that this contribution to society is actually made by the
manufacturers or users of the artificial intelligence system, since they suffer the
absence of the system's activity while the fine is being paid. This is true not only for
artificial intelligence systems, but also for human offenders and corporations. When
a human offender pays a fine, the absence of the money (or of himself, if
additional working hours are required to make up for the missing money) is felt by
his family and relatives. When a corporation pays a fine, its resources are unavailable
to its workers, directors, clients, etc. This absence is part of paying the fine,
regardless of the identity of the offender; therefore, artificial intelligence systems
are not unique in this context.
Conclusion
HARRY E. ALLEN, ERIC W. CARLSON AND EVALYN C. PARKS, CRITICAL ISSUES IN ADULT PROBATION
(1979)
Susan M. Allan, No Code Orders v. Resuscitation: The Decision to Withhold Life-Prolonging
Treatment from the Terminally Ill, 26 WAYNE L. REV. 139 (1980)
Peter Alldridge, The Doctrine of Innocent Agency, 2 CRIM. L. F. 45 (1990)
FRANCIS ALLEN, THE DECLINE OF THE REHABILITATIVE IDEAL (1981)
Marc Ancel, The System of Conditional Sentence or Sursis, 80 L. Q. REV. 334 (1964)
MARC ANCEL, SUSPENDED SENTENCE (1971)
Johannes Andenaes, The General Preventive Effects of Punishment, 114 U. PA. L. REV. 949 (1966)
Johannes Andenaes, The Morality of Deterrence, 37 U. CHI. L. REV. 649 (1970)
Susan Leigh Anderson, Asimov’s “Three Laws of Robotics” and Machine Metaethics, 22 AI SOC.
477 (2008)
Edward B. Arnolds and Norman F. Garland, The Defense of Necessity in Criminal Law: The Right
to Choose the Lesser Evil, 65 J. CRIM. L. & CRIMINOLOGY 289 (1974)
Andrew Ashworth, The Scope of Criminal Liability for Omissions, 84 L. Q. REV. 424 (1989)
Andrew Ashworth, Testing Fidelity to Legal Values: Official Involvement and Criminal Justice,
63 MOD. L. REV. 663 (2000)
ANDREW ASHWORTH, PRINCIPLES OF CRIMINAL LAW (5th ed., 2006)
Andrew Ashworth, Rehabilitation, PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY
1 (Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009)
ISAAC ASIMOV, I, ROBOT (1950)
ISAAC ASIMOV, THE REST OF THE ROBOTS (1964)
Tom Athanasiou, High-Tech Politics: The Case of Artificial Intelligence, 92 SOCIALIST REVIEW
7 (1987)
James Austin and Barry Krisberg, The Unmet Promise of Alternatives, 28 JOURNAL OF RESEARCH IN
CRIME AND DELINQUENCY 374 (1982)
JOHN AUSTIN, THE PROVINCE OF JURISPRUDENCE DETERMINED (1832, 2000)
BERNARD BAARS, IN THE THEATRE OF CONSCIOUSNESS (1997)
John S. Baker Jr., State Police Powers and the Federalization of Local Crime, 72 TEMP. L. REV.
673 (1999)
STEPHEN BAKER, FINAL JEOPARDY: MAN VS. MACHINE AND THE QUEST TO KNOW EVERYTHING (2011)
CESARE BECCARIA, TRAITÉ DES DÉLITS ET DES PEINES (1764)
Hugo Adam Bedau, Abolishing the Death Penalty Even for the Worst Murderers, THE KILLING
STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE 40 (Austin Sarat ed., 1999)
RICHARD E. BELLMAN, AN INTRODUCTION TO ARTIFICIAL INTELLIGENCE: CAN COMPUTERS THINK? (1978)
JEREMY BENTHAM, AN INTRODUCTION TO THE PRINCIPLES OF MORALS AND LEGISLATION (1789, 1996)
Jeremy Bentham, Punishment and Deterrence, PRINCIPLED SENTENCING: READINGS ON THEORY AND
POLICY (Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009)
JOHN BIGGS, THE GUILTY MIND (1955)
Ned Block, What Intuitions About Homunculi Don’t Show, 3 BEHAVIORAL & BRAIN SCI. 425 (1980)
ROBERT M. BOHM, DEATHQUEST: AN INTRODUCTION TO THE THEORY AND PRACTICE OF CAPITAL PUNISH-
MENT IN THE UNITED STATES (1999)
Addison M. Bowman, Narcotic Addiction and Criminal Responsibility under Durham, 53 GEO.
L. J. 1017 (1965)
STEVEN BOX, POWER, CRIME AND MYSTIFICATION (1983)
RICHARD B. BRANDT, ETHICAL THEORY (1959)
Kathleen F. Brickey, Corporate Criminal Accountability: A Brief History and an Observation,
60 WASH. U. L. Q. 393 (1983)
Bruce Bridgeman, Brains + Programs = Minds, 3 BEHAVIORAL & BRAIN SCI. 427 (1980)
WALTER BROMBERG, FROM SHAMAN TO PSYCHOTHERAPIST: A HISTORY OF THE TREATMENT OF MENTAL
ILLNESS (1975)
Andrew G. Brooks and Ronald C. Arkin, Behavioral Overlays for Non-Verbal Communication
Expression on a Humanoid Robot, 22 AUTON. ROBOTS 55 (2007)
Timothy L. Butler, Can a Computer Be an Author – Copyright Aspects of Artificial Intelligence,
4 COMM. ENT. L.S. 707 (1982)
Kenneth L. Campbell, Psychological Blow Automatism: A Narrow Defence, 23 CRIM. L. Q.
342 (1981)
W. G. Carson, Some Sociological Aspects of Strict Liability and the Enforcement of Factory
Legislation, 33 MOD. L. REV. 396 (1970)
W. G. Carson, The Conventionalisation of Early Factory Crime, 7 INT’L J. OF SOCIOLOGY OF LAW
37 (1979)
Derrick Augustus Carter, Bifurcations of Consciousness: The Elimination of the Self-Induced
Intoxication Excuse, 64 MO. L. REV. 383 (1999)
MICHAEL CAVADINO AND JAMES DIGNAN, THE PENAL SYSTEM: AN INTRODUCTION (2002)
EUGENE CHARNIAK AND DREW MCDERMOTT, INTRODUCTION TO ARTIFICIAL INTELLIGENCE (1985)
Russell L. Christopher, Deterring Retributivism: The Injustice of “Just” Punishment, 96 NW. U. L.
REV. 843 (2002)
ARTHUR C. CLARKE, 2001: A SPACE ODYSSEY (1968)
John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry into the
Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981)
SIR EDWARD COKE, INSTITUTES OF THE LAWS OF ENGLAND – THIRD PART (6th ed., 1681, 1817, 2001)
Dana K. Cole, Expanding Felony-Murder in Ohio: Felony-Murder or Murder-Felony, 63 OHIO
ST. L. J. 15 (2002)
ROBERTA C. CRONIN, BOOT CAMPS FOR ADULT AND JUVENILE OFFENDERS: OVERVIEW AND UPDATE
(1994)
George R. Cross and Cary G. Debessonet, An Artificial Intelligence Application in the Law:
CCLIPS, A Computer Program that Processes Legal Information, 1 HIGH TECH. L.J. 329 (1986)
Homer D. Crotty, The History of Insanity as a Defence to Crime in English Common Law, 12 CAL.
L. REV. 105 (1924)
MICHAEL DALTON, THE COUNTREY JUSTICE (1618, 2003)
Donald Davidson, Turing’s Test, MODELLING THE MIND (1990)
Michael J. Davidson, Feminine Hormonal Defenses: Premenstrual Syndrome and Postpartum
Psychosis, 2000 ARMY LAWYER 5 (2000)
Richard Delgado, Ascription of Criminal States of Mind: Toward a Defense Theory for the
Coercively Persuaded (“Brainwashed”) Defendant, 63 MINN. L. REV. 1 (1978)
DANIEL C. DENNETT, BRAINSTORMS (1978)
DANIEL C. DENNETT, THE INTENTIONAL STANCE (1987)
Daniel C. Dennett, Evolution, Error, and Intentionality, THE FOUNDATIONS OF ARTIFICIAL INTELLI-
GENCE 190 (Derek Partridge and Yorick Wilks eds., 1990, 2006)
René Descartes, Discours de la Méthode pour Bien Conduire sa Raison et Chercher La Vérité
dans Les Sciences (1637)
Anthony M. Dillof, Unraveling Unknowing Justification, 77 NOTRE DAME L. REV. 1547 (2002)
Bibliography 245
Dolores A. Donovan and Stephanie M. Wildman, Is the Reasonable Man Obsolete? A Critical
Perspective on Self-Defense and Provocation, 14 LOY. L. A. L. REV. 435, 441 (1981)
AAGE GERHARDT DRACHMANN, THE MECHANICAL TECHNOLOGY OF GREEK AND ROMAN ANTIQUITY:
A STUDY OF THE LITERARY SOURCES (1963)
Joshua Dressler, Professor Delgado’s “Brainwashing” Defense: Courting a Determinist Legal
System, 63 MINN. L. REV. 335 (1978)
Joshua Dressler, Rethinking Heat of Passion: A Defense in Search of a Rationale, 73 J. CRIM. L. &
CRIMINOLOGY 421 (1982)
Joshua Dressler, Battered Women Who Kill Their Sleeping Tormenters: Reflections on
Maintaining Respect for Human Life while Killing Moral Monsters, CRIMINAL LAW THEORY –
DOCTRINES OF THE GENERAL PART 259 (Stephen Shute and A. P. Simester eds., 2005)
G. R. DRIVER AND JOHN C. MILES, THE BABYLONIAN LAWS, VOL. I: LEGAL COMMENTARY (1952)
ANTONY ROBIN DUFF, CRIMINAL ATTEMPTS (1996)
Fernand N. Dutile and Harold F. Moore, Mistake and Impossibility: Arranging Marriage Between
Two Difficult Partners, 74 NW. U. L. REV. 166 (1980)
Justice Ellis, Criminal Law as an Instrument of Social Control, 17 VICTORIA U. WELLINGTON
L. REV. 319 (1987)
GERTRUDE EZORSKY, PHILOSOPHICAL PERSPECTIVES ON PUNISHMENT (1972)
Judith Fabricant, Homicide in Response to a Threat of Rape: A Theoretical Examination of the
Rule of Justification, 11 GOLDEN GATE U. L. REV. 945 (1981)
DAVID P. FARRINGTON AND BRANDON C. WELSH, PREVENTING CRIME: WHAT WORKS FOR CHILDREN,
OFFENDERS, VICTIMS AND PLACES (2006)
EDWARD A. FEIGENBAUM AND PAMELA MCCORDUCK, THE FIFTH GENERATION: ARTIFICIAL INTELLIGENCE
AND JAPAN’S COMPUTER CHALLENGE TO THE WORLD (1983)
S. Z. Feller, Les Délits de Mise en Danger, 40 REV. INT. DE DROIT PÉNAL 179 (1969)
JAMIE FELLNER AND JOANNE MARINER, COLD STORAGE: SUPER-MAXIMUM SECURITY CONFINEMENT IN
INDIANA (1997)
Robert P. Fine and Gary M. Cohen, Is Criminal Negligence a Defensible Basis for Criminal
Liability?, 16 BUFF. L. REV. 749 (1966)
PAUL JOHANN ANSELM FEUERBACH, LEHRBUCH DES GEMEINEN IN DEUTSCHLAND GÜLTIGEN PEINLICHEN
RECHTS (1812, 2007)
Stuart Field and Nico Jorg, Corporate Liability and Manslaughter: Should We Be Going Dutch?,
[1991] Crim. L.R. 156 (1991)
Herbert Fingarette, Addiction and Criminal Responsibility, 84 YALE L. J. 413 (1975)
ARTHUR E. FINK, CAUSES OF CRIME: BIOLOGICAL THEORIES IN THE UNITED STATES, 1800–1915 (1938)
JOHN FINNIS, NATURAL LAW AND NATURAL RIGHTS (1980)
Brent Fisse and John Braithwaite, The Allocation of Responsibility for Corporate Crime: Individ-
ualism, Collectivism and Accountability, 11 SYDNEY L. REV. 468 (1988)
Peter Fitzpatrick, “Always More to Do”: Capital Punishment and the (De)Composition of Law,
THE KILLING STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE 117 (Austin Sarat ed.,
1999)
OWEN J. FLANAGAN, JR., THE SCIENCE OF THE MIND (2nd ed., 1991)
GEORGE P. FLETCHER, RETHINKING CRIMINAL LAW (1978, 2000)
George P. Fletcher, The Nature of Justification, ACTION AND VALUE IN CRIMINAL LAW 175 (Stephen
Shute, John Gardner and Jeremy Horder eds., 2003)
Jerry A. Fodor, Modules, Frames, Fridgeons, Sleeping Dogs and the Music of the Spheres, THE
ROBOT'S DILEMMA: THE FRAME PROBLEM IN ARTIFICIAL INTELLIGENCE (Zenon W. Pylyshyn ed.,
1987)
Keith Foren, Casenote: In Re Tyvonne M. Revisited: The Criminal Infancy Defense in Connecticut,
18 Q. L. REV. 733 (1999)
MICHEL FOUCAULT, MADNESS AND CIVILIZATION (1965)
MICHEL FOUCAULT, DISCIPLINE AND PUNISH: THE BIRTH OF THE PRISON (1977)
Sue Frank, Oklahoma Camp Stresses Structure and Discipline, 53 CORRECTIONS TODAY 102 (1991)
Lionel H. Frankel, Criminal Omissions: A Legal Microcosm, 11 WAYNE L. REV. 367 (1965)
Lionel H. Frankel, Narcotic Addiction, Criminal Responsibility and Civil Commitment, 1966 UTAH
L. REV. 581 (1966)
Robert M. French, Subcognition and the Limits of the Turing Test, 99 MIND 53 (1990)
K. W. M. Fulford, Value, Action, Mental Illness, and the Law, ACTION AND VALUE IN CRIMINAL LAW
279 (Stephen Shute, John Gardner and Jeremy Horder eds., 2003)
Gail S. Funke, The Economics of Prison Crowding, 478 ANNALS OF THE AMERICAN ACADEMY OF
POLITICAL AND SOCIAL SCIENCES 86 (1985)
JONATHAN M.E. GABBAI, COMPLEXITY AND THE AEROSPACE INDUSTRY: UNDERSTANDING EMERGENCE BY
RELATING STRUCTURE TO PERFORMANCE USING MULTI-AGENT SYSTEMS (Ph.D. Thesis, University of
Manchester, 2005)
Crystal A. Garcia, Using Palmer’s Global Approach to Evaluate Intensive Supervision Programs:
Implications for Practice, 4 CORRECTIONS MANAGEMENT QUARTERLY 60 (2000)
HOWARD GARDNER, THE MIND’S NEW SCIENCE: A HISTORY OF THE COGNITIVE REVOLUTION (1985)
DAVID GARLAND, THE CULTURE OF CONTROL: CRIME AND SOCIAL ORDER IN CONTEMPORARY SOCIETY
(2002)
Chas E. George, Limitation of Police Powers, 12 LAW. & BANKER & S. BENCH & B. REV.
740 (1919)
Jack P. Gibbs, A Very Short Step toward a General Theory of Social Control, 1985 AM. B. FOUND.
RES. J. 607 (1985)
SANDER L. GILMAN, SEEING THE INSANE (1982)
P. R. Glazebrook, Criminal Omissions: The Duty Requirement in Offences Against the Person,
76 L. Q. REV. 386 (1960)
ROBERT M. GLORIOSO AND FERNANDO C. COLON OSORIO, ENGINEERING INTELLIGENT SYSTEMS: CONCEPTS
AND APPLICATIONS (1980)
Sheldon Glueck, Principles of a Rational Penal Code, 41 HARV. L. REV. 453 (1928)
SIR GERALD GORDON, THE CRIMINAL LAW OF SCOTLAND (1st ed., 1967)
GERHARDT GREBING, THE FINE IN COMPARATIVE LAW: A SURVEY OF 21 COUNTRIES (1982)
Kent Greenawalt, The Perplexing Borders of Justification and Excuse, 84 COLUM. L. REV. 1897
(1984)
Kent Greenawalt, Distinguishing Justifications from Excuses, 49 LAW & CONTEMP. PROBS.
89 (1986)
David F. Greenberg, The Corrective Effects of Corrections: A Survey of Evaluation, CORRECTIONS
AND PUNISHMENT 111 (David F. Greenberg ed., 1977)
Judith A. Greene, Structuring Criminal Fines: Making an ‘Intermediate Penalty’ More Useful and
Equitable, 13 JUSTICE SYSTEM JOURNAL 37 (1988)
Richard Gruner, To Let the Punishment Fit the Organization: Sanctioning Corporate Offenders
Through Corporate Probation, 16 AM. J. CRIM. L. 1 (1988)
JEROME HALL, GENERAL PRINCIPLES OF CRIMINAL LAW (2nd ed., 1960, 2005)
Jerome Hall, Intoxication and Criminal Responsibility, 57 HARV. L. REV. 1045 (1944)
Jerome Hall, Negligent Behaviour Should Be Excluded from Penal Liability, 63 COLUM. L. REV.
632 (1963)
Seymour L. Halleck, The Historical and Ethical Antecedents of Psychiatric Criminology, PSYCHIATRIC
ASPECTS OF CRIMINOLOGY 8 (Halleck and Bromberg eds., 1968)
Gabriel Hallevy, The Recidivist Wants to Be Punished – Punishment as an Incentive to Re-offend,
5 INT’L J. PUNISHMENT & SENTENCING 124 (2009)
GABRIEL HALLEVY, A MODERN TREATISE ON THE PRINCIPLE OF LEGALITY IN CRIMINAL LAW (2010)
Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to
Legal Social Control, 4 AKRON INTELL. PROP. J. 171 (2010)
Gabriel Hallevy, Unmanned Vehicles – Subordination to Criminal Law under the Modern Concept
of Criminal Liability, 21 J. L. INF. & SCI. 311 (2011)
Gabriel Hallevy, Therapeutic Victim-Offender Mediation within the Criminal Justice Process –
Sharpening the Evaluation of Personal Potential for Rehabilitation while Righting Wrongs
DAVID MANNERS AND TSUGIO MAKIMOTO, LIVING WITH THE CHIP (1995)
Dan Markel, Are Shaming Punishments Beautifully Retributive? Retributivism and the
Implications for the Alternative Sanctions Debate, 54 VAND. L. REV. 2157 (2001)
Robert Martinson, What Works? Questions and Answers about Prison Reform, 35 PUBLIC INTEREST
22 (1974)
Thomas Mathiesen, The Viewer Society: Michel Foucault’s ‘Panopticon’ Revisited, 1 THEORETICAL
CRIMINOLOGY 215 (1997)
Peter McCandless, Liberty and Lunacy: The Victorians and Wrongful Confinement, MADHOUSES,
MAD-DOCTORS, AND MADMEN: THE SOCIAL HISTORY OF PSYCHIATRY IN THE VICTORIAN ERA (Andrew
Scull ed., 1981)
Aileen McColgan, In Defence of Battered Women who Kill, 13 OXFORD J. LEGAL STUD. 508 (1993)
Sean McConville, The Victorian Prison: England 1865–1965, THE OXFORD HISTORY OF THE PRISON
131 (Norval Morris and David J. Rothman eds., 1995)
J. R. MCDONALD, G. M. BURT, J. S. ZIELINSKI AND S. D. J. MCARTHUR, INTELLIGENT KNOWLEDGE
BASED SYSTEM IN ELECTRICAL POWER ENGINEERING (1997)
COLIN MCGINN, THE PROBLEM OF CONSCIOUSNESS: ESSAYS TOWARDS A RESOLUTION (1991)
KARL MENNINGER, MARTIN MAYMAN AND PAUL PRUYSER, THE VITAL BALANCE (1963)
Alan C. Michaels, Imposing Constitutional Limits on Strict Liability: Lessons from the American
Experience, APPRAISING STRICT LIABILITY 218 (A. P. Simester ed., 2005)
DONALD MICHIE AND RORY JOHNSTON, THE CREATIVE COMPUTER (1984)
Justine Miller, Criminal Law – An Agency for Social Control, 43 YALE L. J. 691 (1934)
MARVIN MINSKY, THE SOCIETY OF MIND (1986)
JESSICA MITFORD, KIND AND USUAL PUNISHMENT: THE PRISON BUSINESS (1974)
Phillip Montague, Self-Defense and Choosing Between Lives, 40 PHIL. STUD. 207 (1981)
MICHAEL MOORE, LAW AND PSYCHIATRY: RETHINKING THE RELATIONSHIP (1984)
George Mora, Historical and Theoretical Trends in Psychiatry, 1 COMPREHENSIVE TEXTBOOK OF
PSYCHIATRY 1 (Alfred M. Freedman, Harold Kaplan and Benjamin J. Sadock eds., 2nd ed.,
1975)
HANS MORAVEC, ROBOT: MERE MACHINE TO TRANSCENDENT MIND (1999)
Norval Morris, Somnambulistic Homicide: Ghosts, Spiders, and North Koreans, 5 RES JUDICATAE
29 (1951)
TIM MORRIS, COMPUTER VISION AND IMAGE PROCESSING (2004)
Gerhard O. W. Mueller, Mens Rea and the Corporation – A Study of the Model Penal Code
Position on Corporate Criminal Liability, 19 U. PITT. L. REV. 21 (1957)
Michael A. Musmanno, Are Subordinate Officials Penally Responsible for Obeying Superior
Orders which Direct Commission of Crime?, 67 DICK. L. REV. 221 (1963)
MONTY NEWBORN, DEEP BLUE (2002)
ALLEN NEWELL AND HERBERT A. SIMON, HUMAN PROBLEM SOLVING (1972)
EDWARD NORBECK, RELIGION IN PRIMITIVE SOCIETY (1961)
Anne Norton, After the Terror: Mortality, Equality, Fraternity, THE KILLING STATE – CAPITAL
PUNISHMENT IN LAW, POLITICS, AND CULTURE 27 (Austin Sarat ed., 1999)
Scott T. Noth, A Penny for Your Thoughts: Post-Mitchell Hate Crime Laws Confirm a Mutating
Effect upon Our First Amendment and the Government’s Role in Our Lives, 10 REGENT U. L.
REV. 167 (1998)
DAVID ORMEROD, SMITH & HOGAN CRIMINAL LAW (11th ed., 2005)
N. P. PADHY, ARTIFICIAL INTELLIGENCE AND INTELLIGENT SYSTEMS (2005, 2009)
WILLIAM PALEY, A TREATISE ON THE LAW OF PRINCIPAL AND AGENT (2nd ed., 1847)
DAN W. PATTERSON, INTRODUCTION TO ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS (1990)
Monrad G. Paulsen, Intoxication as a Defense to Crime, 1961 U. ILL. L. F. 1 (1961)
Rollin M. Perkins, Negative Acts in Criminal Law, 22 IOWA L. REV. 659 (1937)
Rollin M. Perkins, Ignorance and Mistake in Criminal Law, 88 U. PA. L. REV. 35 (1940)
Rollin M. Perkins, “Knowledge” as a Mens Rea Requirement, 29 HASTINGS L. J. 953 (1978)
Rollin M. Perkins, Impelled Perpetration Restated, 33 HASTINGS L. J. 403 (1981)
ANTHONY M. PLATT, THE CHILD SAVERS: THE INVENTION OF DELINQUENCY (2nd ed., 1969, 1977)
Anthony Platt and Bernard L. Diamond, The Origins of the “Right and Wrong” Test of Criminal
Responsibility and Its Subsequent Development in the United States: An Historical Survey,
54 CAL. L. REV. 1227 (1966)
FREDERICK POLLOCK AND FREDERICK WILLIAM MAITLAND, THE HISTORY OF ENGLISH LAW BEFORE THE
TIME OF EDWARD I (rev. 2nd ed., 1898)
Stanislaw Pomorski, On Multiculturalism, Concepts of Crime, and the “De Minimis” Defense,
1997 B.Y.U. L. REV. 51 (1997)
JAMES COWLES PRICHARD, A TREATISE ON INSANITY AND OTHER DISORDERS AFFECTING THE MIND (1835)
GUSTAV RADBRUCH, DER HANDLUNGSBEGRIFF IN SEINER BEDEUTUNG FÜR DAS STRAFRECHTSSYSTEM
(1904)
LEON RADZINOWICZ, A HISTORY OF ENGLISH CRIMINAL LAW AND ITS ADMINISTRATION FROM 1750 VOL. 1:
THE MOVEMENT FOR REFORM (1948)
LEON RADZINOWICZ AND ROGER HOOD, A HISTORY OF ENGLISH CRIMINAL LAW AND ITS ADMINISTRATION
FROM 1750 VOL. 5: THE EMERGENCE OF PENAL POLICY (1986)
Craig W. Reynolds, Flocks, Herds and Schools: A Distributed Behavioral Model, 21 COMPUT. GRAPH.
25 (1987)
ELAINE RICH AND KEVIN KNIGHT, ARTIFICIAL INTELLIGENCE (2nd ed., 1991)
FIORI RINALDI, IMPRISONMENT FOR NON-PAYMENT OF FINES (1976)
Edwina L. Rissland, Artificial Intelligence and Law: Stepping Stones to a Model of Legal
Reasoning, 99 YALE L. J. 1957 (1990)
CHASE RIVELAND, SUPERMAX PRISONS: OVERVIEW AND GENERAL CONSIDERATIONS (1999)
OLIVIA F. ROBINSON, THE CRIMINAL LAW OF ANCIENT ROME (1995)
Paul H. Robinson, A Theory of Justification: Societal Harm as a Prerequisite for Criminal
Liability, 23 U.C.L.A. L. REV. 266 (1975)
Paul H. Robinson and John M. Darley, The Utility of Desert, 91 NW. U. L. REV. 453 (1997)
Paul H. Robinson, Testing Competing Theories of Justification, 76 N.C. L. REV. 1095 (1998)
P. ROGERS, LAW ON THE BATTLEFIELD (1996)
Vashon R. Rogers Jr., De Minimis Non Curat Lex, 21 ALBANY L. J. 186 (1880)
GEORGE ROSEN, MADNESS IN SOCIETY: CHAPTERS IN THE HISTORICAL SOCIOLOGY OF MENTAL ILLNESS
(1969)
H. Laurence Ross, Deterrence Regained: The Cheshire Constabulary’s “Breathalyser Blitz”,
6 J. LEGAL STUD. 241 (1977)
DAVID J. ROTHMAN, CONSCIENCE AND CONVENIENCE: THE ASYLUM AND ITS ALTERNATIVES IN PROGRES-
SIVE AMERICA (1980)
David J. Rothman, For the Good of All: The Progressive Tradition in Prison Reform, HISTORY AND
CRIME 271 (James A. Inciardi and Charles E. Faupel eds., 1980)
CLAUS ROXIN, STRAFRECHT – ALLGEMEINER TEIL I (4. Aufl., 2006)
STUART J. RUSSELL AND PETER NORVIG, ARTIFICIAL INTELLIGENCE: A MODERN APPROACH (2002)
WILLIAM OLDNALL RUSSELL, A TREATISE ON CRIMES AND MISDEMEANORS (1843, 1964)
Cheyney C. Ryan, Self-Defense, Pacifism, and the Possibility of Killing, 93 ETHICS 508 (1983)
GILBERT RYLE, THE CONCEPT OF MIND (1954)
Francis Bowes Sayre, Criminal Responsibility for the Acts of Another, 43 HARV. L. REV.
689 (1930)
Francis Bowes Sayre, Mens Rea, 45 HARV. L. REV. 974 (1932)
Francis Bowes Sayre, Public Welfare Offenses, 33 COLUM. L. REV. 55 (1933)
ROBERT J. SCHALKOFF, ARTIFICIAL INTELLIGENCE: AN ENGINEERING APPROACH (1990)
Roger C. Schank, What is AI, Anyway?, THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE 3 (Derek
Partridge and Yorick Wilks eds., 1990, 2006)
Samuel Scheffler, Justice and Desert in Liberal Theory, 88 CAL. L. REV. 965 (2000)
G. Schoenfeld, In Defence of Retribution in the Law, 35 PSYCHOANALYTIC Q. 108 (1966)
FRANK SCHMALLEGER, CRIMINAL JUSTICE TODAY: AN INTRODUCTORY TEXT FOR THE 21ST CENTURY
(2003)
WILLIAM ROBERT SCOTT, THE CONSTITUTION AND FINANCE OF ENGLISH, SCOTTISH AND IRISH JOINT-STOCK
COMPANIES TO 1720 (1912)
John R. Searle, Minds, Brains & Programs, 3 BEHAVIORAL & BRAIN SCI. 417 (1980)
JOHN R. SEARLE, MINDS, BRAINS AND SCIENCE (1984)
JOHN R. SEARLE, THE REDISCOVERY OF MIND (1992)
LEE SECHREST, SUSAN O. WHITE AND ELIZABETH D. BROWN, THE REHABILITATION OF CRIMINAL
OFFENDERS: PROBLEMS AND PROSPECTS (1979)
Richard P. Seiter and Karen R. Kadela, Prisoner Reentry: What Works, What Does Not, and What
Is Promising, 49 CRIME AND DELINQUENCY 360 (2003)
THORSTEN J. SELLIN, SLAVERY AND THE PENAL SYSTEM (1976)
Robert N. Shapiro, Of Robots, Persons, and the Protection of Religious Beliefs, 56 S. CAL. L. REV.
1277 (1983)
ROSEMARY SHEEHAN, GILL MCIVOR AND CHRIS TROTTER, WHAT WORKS WITH WOMEN OFFENDERS
(2007)
LAWRENCE W. SHERMAN, DAVID P. FARRINGTON, DORIS LAYTON MACKENZIE AND BRANDON C. WELSH,
EVIDENCE-BASED CRIME PREVENTION (2006)
Nancy Sherman, The Place of the Emotions in Kantian Morality, IDENTITY, CHARACTER, AND
MORALITY 145 (Owen Flanagan and Amélie O. Rorty eds., 1990)
Stephen Shute, Knowledge and Belief in the Criminal Law, CRIMINAL LAW THEORY – DOCTRINES OF
THE GENERAL PART 182 (Stephen Shute and A.P. Simester eds., 2005)
R. U. Singh, History of the Defence of Drunkenness in English Criminal Law, 49 LAW Q. REV.
528 (1933)
VIEDA SKULTANS, ENGLISH MADNESS: IDEAS ON INSANITY, 1580–1890 (1979)
Aaron Sloman, Motives, Mechanisms, and Emotions, THE PHILOSOPHY OF ARTIFICIAL INTELLIGENCE
231 (Margaret A. Boden ed., 1990)
JOHN J.C. SMART AND BERNARD WILLIAMS, UTILITARIANISM – FOR AND AGAINST (1973)
RUDOLPH SOHM, THE INSTITUTES OF ROMAN LAW (3rd ed., 1907)
Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231 (1992)
MILAN SONKA, VACLAV HLAVAC AND ROGER BOYLE, IMAGE PROCESSING, ANALYSIS, AND MACHINE
VISION (2008)
WALTER W. SOROKA, ANALOG METHODS IN COMPUTATION AND SIMULATION (1954)
John R. Spencer and Antje Pedain, Approaches to Strict and Constructive Liability in Continental
Criminal Law, APPRAISING STRICT LIABILITY 237 (A. P. Simester ed., 2005)
Jane Stapleton, Law, Causation and Common Sense, 8 OXFORD J. LEGAL STUD. 111 (1988)
G.R. Sullivan, Knowledge, Belief, and Culpability, CRIMINAL LAW THEORY – DOCTRINES OF THE
GENERAL PART 207 (Stephen Shute and A.P. Simester eds., 2005)
G. R. Sullivan, Strict Liability for Criminal Offences in England and Wales Following
Incorporation into English Law of the European Convention on Human Rights, APPRAISING
STRICT LIABILITY 195 (A. P. Simester ed., 2005)
ROGER J. SULLIVAN, IMMANUEL KANT’S MORAL THEORY (1989)
Victor Tadros, Recklessness and the Duty to Take Care, CRIMINAL LAW THEORY – DOCTRINES OF THE
GENERAL PART 227 (Stephen Shute and A.P. Simester eds., 2005)
STEVEN L. TANIMOTO, ELEMENTS OF ARTIFICIAL INTELLIGENCE: AN INTRODUCTION USING LISP (1987)
Lawrence Taylor and Katharina Dalton, Premenstrual Syndrome: A New Criminal Defense?,
19 CAL. W. L. REV. 269 (1983)
JUDITH JARVIS THOMSON, RIGHTS, RESTITUTION AND RISK: ESSAYS IN MORAL THEORY (1986)
BENJAMIN THORPE, ANCIENT LAWS AND INSTITUTES OF ENGLAND (1840, 2004)
Lawrence P. Tiffany and Carl A. Anderson, Legislating the Necessity Defense in Criminal Law,
52 DENV. L. J. 839 (1975)
Janet A. Tighe, Francis Wharton and the Nineteenth Century Insanity Defense: The Origins of a
Reform Tradition, 27 AM. J. LEGAL HIST. 223 (1983)
Jackson Toby, Is Punishment Necessary?, 55 J. CRIM. L. CRIMINOLOGY & POLICE SCI. 332 (1964)
Kam C. Wong, Police Powers and Control in the People’s Republic of China: The History of
Shoushen, 10 COLUM. J. ASIAN L. 367 (1996)
Ledger Wood, Responsibility and Punishment, 28 AM. INST. CRIM. L. & CRIMINOLOGY 630 (1938)
ANDREW WRIGHT, GWYNETH BOSWELL AND MARTIN DAVIES, CONTEMPORARY PROBATION PRACTICE
(1993)
Andrew J. Wu, From Video Games to Artificial Intelligence: Assigning Copyright Ownership to
Works Generated by Increasingly Sophisticated Computer Programs, 25 AIPLA Q.J. 131
(1997)
REUVEN YARON, THE LAWS OF ESHNUNNA (2nd ed., 1988)
MASOUD YAZDANI AND AJIT NARAYANAN, ARTIFICIAL INTELLIGENCE: HUMAN EFFECTS (1985)
PETER YOUNG, PUNISHMENT, MONEY AND THE LEGAL ORDER: AN ANALYSIS OF THE EMERGENCE OF
MONETARY SANCTIONS WITH SPECIAL REFERENCE TO SCOTLAND (1987)
Rachel S. Zahniser, Morally and Legally: A Parent’s Duty to Prevent the Abuse of a Child as
Defined by Lane v. Commonwealth, 86 KY. L. J. 1209 (1998)
REINHARD ZIMMERMANN, THE LAW OF OBLIGATIONS – ROMAN FOUNDATIONS OF THE CIVILIAN TRADITION
(1996)
Franklin E. Zimring, The Executioner’s Dissonant Song: On Capital Punishment and American
Legal Values, THE KILLING STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE
137 (Austin Sarat ed., 1999)
Index

S
Self-defense, 33, 102, 147–149, 168–177, 179, 180
Societas delinquere non potest, 41
Specific intent, 70–74, 76, 79–82, 85, 94–96
Stimulations, 87, 162
Strict liability, 37, 38, 42, 69, 82, 112, 123, 135–146, 164, 166
Substantive immunity, 149, 167–168

T
Tangible robot, 17, 19, 21
Thinking machine, 2, 6, 8, 14, 16, 20–24, 92, 99
Turing test, 7–9

V
Victim, 26, 27, 50, 51, 61, 64, 65, 74, 83, 85, 95, 96, 98, 123, 174, 187, 188, 199, 201, 207, 210, 225
Volition, 37, 38, 68, 69, 71, 72, 82–86, 93–101
Voluntas reputabitur pro facto, 71, 74, 77, 79

W
White collar crimes, 29
Willful blindness, 74, 76, 79, 89, 92, 93