
Gabriel Hallevy

Liability for Crimes Involving Artificial Intelligence Systems

Faculty of Law
Ono Academic College

ISBN 978-3-319-10123-1 ISBN 978-3-319-10124-8 (eBook)


DOI 10.1007/978-3-319-10124-8
Springer Cham Heidelberg New York Dordrecht London
Library of Congress Control Number: 2014955453

© Springer International Publishing Switzerland 2015


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts
in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being
entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication
of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the
Publisher’s location, in its current version, and permission for use must always be obtained from
Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center.
Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Preface

The idea of liability for crimes involving artificial intelligence systems has not yet been widely researched. Advanced technology confronts society with new challenges, not only technological but legal as well. The idea of criminal liability in the specific context of artificial intelligence systems is one of these challenges, and it should be thoroughly explored. The main question is who should be criminally liable for offenses involving artificial intelligence systems. The answer may include the programmers, the manufacturers, the users and, perhaps, the artificial intelligence system itself.
In 2010 a few articles of mine were published in the USA and Australia on certain aspects of this issue. These articles explored the specific aspects that seemed important for opening an academic discussion of the issue. The main idea of these articles was that criminal law is not supposed to change technology, but should adapt itself to modern technological insights. They also called for thinking and rethinking the idea of imposing criminal liability upon machines and software. Perhaps no criminal liability should be imposed on machines, but if the basic definitions of criminal law are not changed, this odd consequence is inevitable.
Dozens of comments arrived for each article, and the time came for a narrow generalization of this idea. The first generalization was restricted to tangible robots, which are equipped with artificial intelligence software and commit homicide offenses as specific offenses, not through derivative criminal liability. Thus, my book When Robots Kill was published in 2013 in the USA by UPNE and Northeastern University Press. Although the book is academic, it attempted to address a wider population beyond legal academics.
The book was found innovative, and reviews were published in various places such as the Washington Post, the Boston Globe and the Chronicle Review. Dozens of comments arrived as well. Some of these comments called for the final and full academic generalization of this issue, not restricted to tangible robots, not restricted to homicide offenses, and open to derivative criminal liability. The need was for an academic professional textbook on this issue, even though it may not address the wider population. This book is that final and full academic generalization. The general idea expressed in this book relates to all types of advanced artificial intelligence systems, including both fully operational and planned systems, to all modes of criminal liability, including direct and derivative liability, and to all types of offenses.

The reader will find in this book a mature and thorough theory of criminal liability for offenses involving artificial intelligence systems, based on the current criminal law in most modern legal systems. The involvement of artificial intelligence systems in these offenses may be as perpetrators, as accomplices or as mere instruments for the commission of the offense. One of the points of this book is that, perhaps, no criminal liability should be imposed on technological systems, at least not yet, but if the basic definitions of criminal law are not changed, this odd consequence is inevitable.

Gabriel Hallevy
Contents

1 Artificial Intelligence Technology and Modern Technological
Delinquency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 The Rise of Artificial Intelligence Technology . . . . . . . . . 1
1.1.2 Outlines of Artificial Intelligence Technology . . . . . . . . . . 6
1.1.3 Daily Usage of Artificial Intelligence Technology . . . . . . . 14
1.2 The Development of the Modern Technological Delinquency . . . . 16
1.2.1 The Aversion from Wide Usage of Advanced Technology . . . 16
1.2.2 Delinquency by Technology . . . . . . . . . . . . . . . . . . . . . . . 21
1.2.3 Modern Analogies of Liability . . . . . . . . . . . . . . . . . . . . . 26
2 Basic Requirements of Modern Criminal Liability . . . . . . . . . . . . . . 29
2.1 Modern Criminal Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.1.1 The Offense’s Requirements (In Rem) . . . . . . . . . . . . . . . 31
2.1.2 The Offender’s Requirements (In Personam) . . . . . . . . . . 34
2.2 Legal Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.2.1 Criminal Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.2.2 Punishments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3 External Element Involving Artificial Intelligence Systems . . . . . . . . 47
3.1 The General Structure of the External Element . . . . . . . . . . . . . . 47
3.1.1 Independent Offenses . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.1.2 Derivative Criminal Liability . . . . . . . . . . . . . . . . . . . . . . 49
3.2 Commission of External Element Components by Artificial
Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.1 Conduct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.2 Circumstances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2.3 Results and Causation . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4 Positive Fault Element Involving Artificial Intelligence Systems . . . . 67
4.1 Structure of Positive Fault Element . . . . . . . . . . . . . . . . . . . . . . . 67
4.1.1 Independent Offenses . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1.2 Derivative Criminal Liability . . . . . . . . . . . . . . . . . . . . . . 70


4.2 General Intent and Artificial Intelligence Systems . . . . . . . . . . . . 82


4.2.1 Structure of General Intent . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.2 Cognition and Artificial Intelligence Technology . . . . . . . 86
4.2.3 Volition and Artificial Intelligence Technology . . . . . . . . . 93
4.2.4 Direct Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.2.5 Indirect Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.2.6 Combined Liabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.3 Negligence and Artificial Intelligence Systems . . . . . . . . . . . . . . . 120
4.3.1 Structure of Negligence . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.3.2 Negligence and Artificial Intelligence Technology . . . . . . 124
4.3.3 Direct Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.3.4 Indirect Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.3.5 Combined Liabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.4 Strict Liability and Artificial Intelligence Systems . . . . . . . . . . . . 135
4.4.1 Structure of Strict Liability . . . . . . . . . . . . . . . . . . . . . . . 135
4.4.2 Strict Liability and Artificial Intelligence Technology . . . . 139
4.4.3 Direct Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.4.4 Indirect Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.4.5 Combined Liabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5 Negative Fault Elements and Artificial Intelligence Systems . . . . . . . 147
5.1 Relevance and Structure of Negative Fault Elements . . . . . . . . . . 147
5.2 Negative Fault Elements by Artificial Intelligence Technology . . . 150
5.2.1 In Personam Negative Fault Elements . . . . . . . . . . . . . . . 150
5.2.2 In Rem Negative Fault Elements . . . . . . . . . . . . . . . . . . . 168
6 Punishability of Artificial Intelligence Technology . . . . . . . . . . . . . . 185
6.1 General Purposes of Punishments and Sentencing . . . . . . . . . . . . 185
6.1.1 Retribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.1.2 Deterrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
6.1.3 Rehabilitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
6.1.4 Incapacitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
6.2 Relevance of Sentencing to Artificial Intelligence Systems . . . . . . 210
6.2.1 Relevant Purposes to Artificial Intelligence Technology . . . 210
6.2.2 Outlines for Imposition of Specific Punishments
on Artificial Intelligence Technology . . . . . . . . . . . . . . . . 212
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
1 Artificial Intelligence Technology and Modern Technological Delinquency

Contents
1.1 Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 The Rise of Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Outlines of Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.3 Daily Usage of Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2 The Development of the Modern Technological Delinquency . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2.1 The Aversion from Wide Usage of Advanced Technology . . . . . . . . . . . . . . . . . . . . . 16
1.2.2 Delinquency by Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.2.3 Modern Analogies of Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

1.1 Artificial Intelligence Technology

1.1.1 The Rise of Artificial Intelligence Technology

Artificial intelligence technology is the basis for a growing number of science fiction works, such as books and movies. Some of them reflect fears of this technology, and some reflect enthusiasm towards it. The major epistemological question has always remained whether machines can think. Some agree that they can "think", but the question is whether they can think (without the quotation marks).
The modern answer to this question may be proposed by artificial intelligence technology.1 This technology is considered modern, but its roots are not necessarily so. In fact, since the very dawn of humanity, mankind has sought tools to ease daily life. In the Stone Age, these tools were made of stone. When mankind discovered the advantages of metal, they were made of metal. As human knowledge widened, more and more tools were invented to take on growing roles in daily life.

1 For the technical review of this issue and the historical developments see GABRIEL HALLEVY, WHEN ROBOTS KILL – ARTIFICIAL INTELLIGENCE UNDER CRIMINAL LAW 1–37 (2013).


Tools were challenged by complicated tasks. If they failed, newer ones were invented to meet the challenge. If they succeeded, new challenges were posed, and so on up to this day. Mechanical devices have been used to ease daily life since antiquity. Heron of Alexandria already used fire engines and a wind-powered organ during the first century AD.2 The first European industrial revolution introduced machines to industry and opened the era of mass production. The idea of thinking machines evolved together with the insight that humans are able to create systematic methods of rationality. Descartes initiated the human quest for such methods in 1637,3 although he himself did not believe that reason could be achieved through mechanical devices.4
However, Descartes laid the groundwork for the symbol-processing machines of the modern age. In 1651, Hobbes described reason as symbolic calculation.5 During the seventeenth century Leibniz fostered the hope of discovering a general mathematics, the Characteristica Universalis, by means of which thinking could be replaced by calculation, and Pascal designed machines for addition and multiplication, probably the first mechanical computers.6 These machines were fully operated by humans and could not "think". Nor were they expected to actually think.
The modern idea of the thinking machine is traditionally related to a question posed by Lady Byron, patroness of Charles Babbage. Babbage proposed and designed an analytical engine, which was never built. The analytical engine was designed as the first programmable computing machine. In 1843, when exposed to Babbage's work, Lady Byron asked whether this machine could actually "think". The idea of mechanical thinking was extremely odd in those days, but this question opened the human mind to thinking about the feasibility of unnatural intelligence, or "artificial intelligence" (AI).

2 See e.g., AAGE GERHARDT DRACHMANN, THE MECHANICAL TECHNOLOGY OF GREEK AND ROMAN ANTIQUITY: A STUDY OF THE LITERARY SOURCES (1963); J. G. LANDELS, ENGINEERING IN THE ANCIENT WORLD (rev. ed., 2000).
3 René Descartes, Discours de la Méthode pour Bien Conduire sa Raison et Chercher La Vérité dans Les Sciences (1637) (Eng: Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences).
4 Terry Winograd, Thinking Machines: Can There Be? Are We?, THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE 167, 168 (Derek Partridge and Yorick Wilks eds., 1990, 2006).
5 THOMAS HOBBES, LEVIATHAN OR THE MATTER, FORME AND POWER OF A COMMON WEALTH ECCLESIASTICALL AND CIVIL III.xxxii.2 (1651):

   When a man reasoneth, he does nothing else but conceive a sum total, from addition of parcels; or conceive a remainder. . . These operations are not incident to numbers only, but to all manner of things that can be added together, and taken one out of another. . . the logicians teach the same in consequences of words; adding together two names to make an affirmation, and two affirmations to make a syllogism; and many syllogisms to make a demonstration.

6 GOTTFRIED WILHELM LEIBNIZ, CHARACTERISTICA UNIVERSALIS (1676).

However, only when electricity was harnessed for daily use and electronic computers were invented could the idea of "artificial intelligence" be examined de facto. In the 1950s major developments were made in machine-to-machine translation, so that machines could communicate with each other, and in human-to-machine translation, so that humans and machines could communicate through orders given by human operators to computers. This communication was very limited, but it was adequate for optimal use of these computers. Computer scientists incorporated the modern knowledge of natural language in their work, and consequently knowledge representation eventually developed.7
The capability of electronic computers to store large amounts of information and process it at high speed challenged scientists to build systems that could exhibit human capabilities. Since the 1950s more and more human abilities have been carried out by electronic machines. The personal computer was invented, and over time its size and cost were reduced, so it became available to an increasing part of the population. In addition, the memory capacity, speed, reliability and robustness of personal computers increased dramatically. Thousands of useful software tools were developed and are in daily use. This progress made artificial intelligence available to the population.
Artificial intelligence developed as a separate sphere of research during the vast developments of the 1950s. This sphere of research combined technological study, studies in logic and eventually cybernetics, i.e., the study of communication in humans and machines. The studies in logic of the 1920s and 1930s made it possible to produce formalized methods for reasoning. These methods formed a new form of logic known as the propositional and predicate calculus, and were based on the works of Church, Gödel, Post, Russell, Tarski, Whitehead, Kleene and many others.8 Developments in psychology, neurology, statistics and mathematics during the 1950s were incorporated as well into this growing research sphere of artificial intelligence.9
By the end of the 1950s several developments had occurred which signified for the public the emergence of artificial intelligence. The major one was the development of chess-playing programs together with the General Problem Solver (GPS), which was designed to solve a wide range of problems, from symbolic integration to word puzzles. Suddenly the public was exposed to artificial intelligence abilities in daily life. This caused enthusiasm, but also expectations that were unrealistic for those times.10 The first science-fiction novels about robot rebellions against humans and robots taking control over humans became very popular, based on these unrealistic expectations.

7 N. P. PADHY, ARTIFICIAL INTELLIGENCE AND INTELLIGENT SYSTEMS 4 (2005, 2009).
8 DAN W. PATTERSON, INTRODUCTION TO ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS (1990).
9 GEORGE F. LUGER, ARTIFICIAL INTELLIGENCE: STRUCTURES AND STRATEGIES FOR COMPLEX PROBLEM SOLVING (2001).
10 J. R. MCDONALD, G. M. BURT, J. S. ZIELINSKI AND S. D. J. MCARTHUR, INTELLIGENT KNOWLEDGE BASED SYSTEM IN ELECTRICAL POWER ENGINEERING (1997).

Research in artificial intelligence proceeded mainly in two directions. The first was building physical devices on digital computers, and the second was developing symbolic representations. The first direction produced robotics, and the second produced perception systems, which could be trained to classify certain types of patterns as either similar or distinct. However, artificial intelligence research was then directed by the general assumption that the commonsense knowledge problem is solvable: if humans can solve it, so can machines. This problem relates to the ability to understand facts through common sense when not all of the information is given. This assumption blocked much of the progress in theoretical artificial intelligence for many years. In fact, it deviated artificial intelligence research towards connectionism.11
The research in these fields also introduced artificial intelligence technology into daily use by private consumers. During the 1970s the importance of artificial intelligence became apparent to a major part of the world. Governments in many countries were seeking approval for long-term commitments of the resources needed to fund intensive research programs in artificial intelligence.12 Cooperation between governments and private corporations was very common for the benefit of developing robotics, software, hardware, computer products, etc. Many governments around the world realized that producing systems which can understand speech and visual scenes, learn and refine their knowledge, make decisions and exhibit many human abilities is achievable.
Artificial intelligence technology has been embraced for industrial use since the 1970s. Although some procedures of natural language translation have not been fully understood and solved, since the 1970s artificial intelligence technology has been practically applicable and used in a growing number of industrial fields.13 These fields included biomedical microscopy, material analysis and robotics. The usage of artificial intelligence technology in these fields was very successful, and shortly after this technology was first used in these fields, they became completely dependent on it. It came to be understood that the accuracy, speed and efficient use of information of artificial intelligence cannot be matched by ordinary human abilities.
During the 1980s artificial intelligence research developed towards designing expert systems. In those years expert systems were concentrated in the fields of medicine, finance and anthropology. The main challenge of these expert systems was to develop a suitable representation for the knowledge in each field. For the knowledge to be accessible, it had to be put into a form from which useful inferences can be made automatically, and suitable displays and means of access had to be designed for the users. Most expert systems were successful and frequently used, and therefore they had to be maintained, including adding new knowledge

11 STUART J. RUSSELL AND PETER NORVIG, ARTIFICIAL INTELLIGENCE: A MODERN APPROACH (2002).
12 Patterson, supra note 8.
13 STEVEN L. TANIMOTO, ELEMENTS OF ARTIFICIAL INTELLIGENCE: AN INTRODUCTION USING LISP (1987).

and refreshing older heuristics.14 Later on, the next challenge was to enable newer technology to be incorporated into these expert systems very shortly after such technologies become available.
The development of expert systems caused the basic mechanisms of machine learning and problem solving to be studied thoroughly. Consequently, the use of artificial intelligence-based expert systems was expanded to many more fields. More traditional human abilities were replaced by artificial intelligence technology. This expansion made industry interested in the development of artificial intelligence technology beyond academic research. The 1980s academic debate about the advantages of artificial intelligence technology and whether it proposes any useful theory15 was abandoned. The increasing use of artificial intelligence technology created actual needs for development.
Industry’s involvement in artificial intelligence research increased over time for various reasons. First, the achievements of artificial intelligence were beyond doubt, especially in knowledge engineering. Second, hardware evolved, becoming faster, cheaper, more convenient, feasible and accessible for users. Third, industry had growing needs to solve problems faster and more thoroughly in the attempt to increase productivity for the benefit of all. Since artificial intelligence technology could provide suitable answers to these needs, industry supported its development. Consequently, artificial intelligence technology has been embraced in most industrial areas, especially in factory automation, the programming industry, office automation and personal computing.16
The combination of the growing abilities of artificial intelligence technology, human curiosity and industrial needs directs the global trend towards expanded usage of artificial intelligence technologies. This trend grows over time. More and more traditional human social functions are replaced by artificial intelligence technologies.17 This global trend has increased with the entrance into the third millennium. South Korea is an example: its government uses artificial intelligence robots as soldier guards on the border with North Korea, as teachers in schools and as prison guards.18
The US Air Force wrote in a report that outlines the future usage of drone aircraft, titled “Unmanned Aircraft Systems Flight Plan 2009–2047”, that

14 See, e.g., Edwina L. Rissland, Artificial Intelligence and Law: Stepping Stones to a Model of Legal Reasoning, 99 YALE L. J. 1957, 1961–1964 (1990); ALAN TYREE, EXPERT SYSTEMS IN LAW 7–11 (1989).
15 ROBERT M. GLORIOSO AND FERNANDO C. COLON OSORIO, ENGINEERING INTELLIGENT SYSTEMS: CONCEPTS AND APPLICATIONS (1980).
16 Padhy, supra note 7, at p. 13.
17 See e.g., Adam Waytz and Michael Norton, How to Make Robots Seem Less Creepy, The Wall Street Journal, June 2, 2014.
18 Nick Carbone, South Korea Rolls Out Robotic Prison Guards, http://newsfeed.time.com/2011/11/27/south-korea-debuts-robotic-prison-guards/; Alex Knapp, South Korean Prison To Feature Robot Guards, http://www.forbes.com/sites/alexknapp/2011/11/27/south-korean-prison-to-feature-robot-guards/.

autonomous drone aircraft are key “to increasing effects while potentially reducing cost, forward footing and risk”. Much like a chess master can outperform proficient chess players, future drones will be able to react faster than human pilots ever could, the report argues. However, the report is aware of the potential legal problem: “Increasingly humans will no longer be ‘in the loop’ but rather ‘on the loop’ – monitoring the execution of certain decisions. . . . Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions”.19

1.1.2 Outlines of Artificial Intelligence Technology

Artificial intelligence researchers have been trying to develop computers that actually think since the beginning of artificial intelligence research.20 This is the highest peak of artificial intelligence research. However, in order to develop a thinking machine it is necessary to define what exactly thinking is. Defining thinking, in relation to both humans and machines, has turned out to be a complicated task for artificial intelligence researchers. The development of machines which have the independent ability of actual thinking is an important event for mankind, which claims a monopoly over high-level thinking on earth; a thinking machine of that kind is analogous to nothing less than the emergence of a new species. Some researchers have even called it Machina Sapiens.
Does human science really want to create the new species? The research towards the creation of a new species matches this trend. The creation of this species may be for the benefit of humans, but this is not necessarily the reason for artificial intelligence research. The reason may be much deeper, touching the deepest and most latent human quest, the very quest whose achievement was denied to humans right after the first sin.
One of the first moves towards the aim of achieving a thinking machine is to define artificial intelligence. Various definitions have been proposed. Bellman defined it as “the automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning. . .”;21 Haugeland defined it as “the exciting new effort to make computers think. . . machines with minds, in the full and literal sense”;22 and Charniak and McDermott defined it as “the study of mental faculties through the use of computational models”.23

19 W.J. Hennigan, New Drone Has No Pilot Anywhere, So Who’s Accountable?, Los Angeles Times, January 26, 2012. See also http://www.latimes.com/business/la-fi-auto-drone-20120126,0,740306.story.
20 EUGENE CHARNIAK AND DREW MCDERMOTT, INTRODUCTION TO ARTIFICIAL INTELLIGENCE (1985).
21 RICHARD E. BELLMAN, AN INTRODUCTION TO ARTIFICIAL INTELLIGENCE: CAN COMPUTERS THINK? (1978).
22 JOHN HAUGELAND, ARTIFICIAL INTELLIGENCE: THE VERY IDEA (1985).
23 Charniak and McDermott, supra note 20.

Schalkoff defined it as “a field of study that seeks to explain and emulate intelligent behavior in terms of computational processes”;24 Kurzweil defined it as “the art of creating machines that perform functions that require intelligence when performed by people”;25 Winston defined it as “the study of the computations that make it possible to perceive, reason, and act”;26 Luger and Stubblefield defined it as “the branch of computer science that is concerned with the automation of intelligent behavior”;27 and Rich and Knight defined it as “the study of how to make computers do things at which, at the moment, people are better”.28
At first sight, these definitions indicate more confusion than certainty. However, according to these definitions and many others, artificial intelligence systems may be categorized into four main categories: systems that

(a) act like humans;
(b) think like humans;
(c) think rationally; and
(d) act rationally.29

Artificial intelligence systems which act like humans are characterized by the Turing test of 1950.30 This test was designed to provide a satisfactory operational definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to mislead a human interrogator. Generally, the Turing test proposes that a human listens to a conversation between a machine and a human. The conversation may be conducted in writing. The machine passes the test if the listening human cannot clearly identify who is the human and who is the machine.31 The Turing test assumes equal cognitive abilities for all humans, but conversations between a machine and a child, a mentally retarded person, a tired person or the machine’s designer are likely to be very different.32
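The structure of the test can be illustrated with a minimal toy simulation. The sketch below is not drawn from the original text: the respondent functions and the guessing interrogator are hypothetical stand-ins, used only to show the shape of the procedure in which written answers are compared without knowing their source.

```python
import random

def human_respondent(question: str) -> str:
    # Stand-in for a human participant answering in writing.
    return "I had to stop and think about that for a moment."

def machine_respondent(question: str) -> str:
    # Stand-in for the program under test.
    return "I had to stop and think about that for a moment."

def imitation_game(question: str) -> bool:
    """One round of the test: the interrogator reads two written answers
    without knowing which source produced which, and tries to point out
    the machine. Returns True if the machine escapes identification."""
    answers = {"A": human_respondent(question), "B": machine_respondent(question)}
    hidden_machine = "B"
    # A trivial interrogator: with indistinguishable answers it can only guess.
    guess = random.choice(list(answers))
    return guess != hidden_machine

if __name__ == "__main__":
    rounds = 1000
    escapes = sum(imitation_game("What did you do yesterday?") for _ in range(rounds))
    # At roughly chance level (~50%) the machine is said to pass the test.
    print(f"machine escaped identification in {escapes} of {rounds} rounds")
```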
Artificial intelligence systems which think like humans are difficult to identify unless human thinking is defined first. However, artificial intelligence technologies which were designed as general problem solvers were observed to make decisions which are very similar to human decisions given the same
24 ROBERT J. SCHALKOFF, ARTIFICIAL INTELLIGENCE: AN ENGINEERING APPROACH (1990).
25 RAYMOND KURZWEIL, THE AGE OF INTELLIGENT MACHINES (1990).
26 PATRICK HENRY WINSTON, ARTIFICIAL INTELLIGENCE (3rd ed., 1992).
27 GEORGE F. LUGER AND WILLIAM A. STUBBLEFIELD, ARTIFICIAL INTELLIGENCE: STRUCTURES AND STRATEGIES FOR COMPLEX PROBLEM SOLVING (6th ed., 2008).
28 ELAINE RICH AND KEVIN KNIGHT, ARTIFICIAL INTELLIGENCE (2nd ed., 1991).
29 Padhy, supra note 7, at p. 7.
30 Alan Turing, Computing Machinery and Intelligence, 59 MIND 433, 433–460 (1950).
31 Donald Davidson, Turing’s Test, MODELLING THE MIND 1 (1990).
32 Robert M. French, Subcognition and the Limits of the Turing Test, 99 MIND 53, 53–54 (1990).

information.33 Modern developments in cognitive science enabled experimental approaches to mechanical thinking, and tests of whether a machine is thinking were developed. Turing proposed another test for that purpose based on the previous one. In this test the interrogator’s objective is to identify which of the conversation participants is a man or a woman, while one of the participants is a machine.34 This test, like the previous one, depends on the communicative abilities of the human participants no less than on the machine’s abilities.
The Turing test has been questioned, especially regarding strong artificial intelligence. One of the leading criticisms was Searle’s Chinese Room.35 A human person is in a locked room, and batches of Chinese writings come into the room. The human does not know Chinese. However, he is given a rule book, written in his mother tongue, in which he can look up the bits of Chinese by their shape. The book gives him a procedure for producing strings of Chinese characters that he sends out of the room. Those outside the room are conducting the Turing test. They are convinced that the person inside the room understands Chinese, although he does not know a word of Chinese.
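The thrust of the thought experiment can be shown with a toy program. The sketch below is illustrative only and not part of the original text; the rule book is reduced to a hypothetical lookup table, which is all the person in the room is given.

```python
# A toy "Chinese Room": the rule book is a lookup table mapping incoming
# strings of Chinese characters to outgoing strings. Matching on shapes
# replaces any understanding of meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",   # "How is the weather?" -> "The weather is fine."
}

def room(incoming: str) -> str:
    # The person inside only matches shapes against the book; if no rule
    # applies, the book supplies a stock evasive reply.
    return RULE_BOOK.get(incoming, "对不起，请再说一遍。")  # "Sorry, please say it again."

if __name__ == "__main__":
    for batch in ["你好吗？", "今天天气怎么样？"]:
        print(batch, "->", room(batch))
    # The answers may convince outside observers, yet neither the function
    # nor the table "understands" a word of Chinese.
```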
The person inside the room just follows the instruction book, but neither he nor the book understands Chinese, even though together they can simulate such understanding. The instruction book is, of course, the program which the computer runs. However, on that basis some may ask what it means, for humans, to understand a foreign language.
Artificial intelligence systems that think rationally are difficult to identify unless rationality is defined first. If rationality is “right” thinking, it may be represented by formal logic. Given the correct information, the machine’s object is to draw the right conclusions. For example, given that all monkeys are hairy, and that M is a monkey, the machine should conclude that M is hairy. Most modern artificial intelligence systems support formal logic and act accordingly.
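This kind of inference can be written down directly. The following minimal sketch is not taken from the original text; it assumes a toy rule format and applies a single universal rule to a set of facts by forward chaining.

```python
# Facts and a rule of the form "all X that are monkeys are hairy".
facts = {("monkey", "M")}
rules = [("monkey", "hairy")]  # if monkey(X) then hairy(X)

def forward_chain(facts, rules):
    """Repeatedly apply the rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, individual in list(derived):
                if predicate == premise and (conclusion, individual) not in derived:
                    derived.add((conclusion, individual))
                    changed = True
    return derived

if __name__ == "__main__":
    print(forward_chain(facts, rules))
    # {('monkey', 'M'), ('hairy', 'M')} -- the machine "indicates" that M is hairy.
```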
Artificial intelligence systems that act rationally are an advanced variation of artificial intelligence systems that think rationally. Whereas the latter are able to draw the right conclusions from correct information as outsiders, artificial intelligence systems that act rationally are able to participate in the factual event, committing the right actions given the correct information. For example, an artificial intelligence tennis player in a game, observing the fast incoming ball, is able not only to calculate the action required in order to hit the ball, but also to act accordingly and hit it.
However, the quest for a thinking machine is much deeper than this classification or these definitions. Scholars asked themselves what makes the human an intelligent entity. If that is found, perhaps it would be possible to design intelligent machines accordingly. The accepted approach since the late 1980s was that there
33 MASOUD YAZDANI AND AJIT NARAYANAN, ARTIFICIAL INTELLIGENCE: HUMAN EFFECTS (1985).
34 STEVEN L. TANIMOTO, ELEMENTS OF ARTIFICIAL INTELLIGENCE: AN INTRODUCTION USING LISP (1987).
35 JOHN R. SEARLE, MINDS, BRAINS AND SCIENCE 28–41 (1984); John R. Searle, Minds, Brains & Programs, 3 BEHAVIORAL & BRAIN SCI. 417 (1980).

should be particular attributes by which to identify intelligent thinking. Accordingly, five attributes were found that one would expect an intelligent entity to have:

(a) communication;
(b) internal knowledge;
(c) external knowledge;
(d) goal-driven conduct; and
(e) creativity.36

These attributes are discussed below.


Communication is considered the most important attribute defining an intelligent entity. An intelligent creature is able to be communicated with. Humans can communicate not only with other humans, but with some animals as well. Communicating with animals is narrower than communicating with humans, and not all ideas may be expressed through that kind of communication. One can let a monkey know how angry one is, but one cannot let the monkey know about quantum mechanics. This situation is not very different from communicating with a two-year-old human child. The more difficult communication with another entity is, the more unintelligent this entity is considered.
Communication assumes relevant understanding of the information included within that communication. The ability to understand complicated ideas is tested through communication. However, communication may not always indicate the quality of understanding. Some very intelligent persons, even those considered geniuses, are very difficult to communicate with. Some autistic geniuses are almost impossible to communicate with. On the other hand, most “normal” people have advanced communication skills, but not many of them are easy to communicate with about complicated ideas such as quantum mechanics.
The communication attribute is open to all types of communication and is not necessarily limited to speech. People can be communicated with using writing as well as speech. Consequently, machines may be considered intelligent even if they have no ability of speech, exactly as mute people may be very intelligent. Obviously, there are very many exceptions to the communication attribute of intelligence, but it is still considered very significant. The Turing tests discussed above are based on this attribute. The question here is, if testing human communication is so inaccurate, how can society trust this test to identify artificial intelligence?
Internal knowledge refers to the knowledge of an entity about itself. Internal knowledge is parallel to self-awareness. An intelligent entity is supposed to know of its very existence, that it functions in some way, that it is integrated in factual reality, etc. Formal logic reasoning showed the way to artificial internal knowledge

36 Roger C. Schank, What is artificial intelligence, Anyway?, THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE 3, 4–6 (Derek Partridge and Yorick Wilks eds., 1990, 2006).

through self-reference.37 Thus, computers are capable of being programmed to seem as if they know about themselves and know that they know it. However, to many researchers this seems too artificial. They insist that it would be very difficult to conclude whether such computers really know about themselves. However, no alternative test of internal knowledge has been suggested. The question here is, how can one identify for sure another person’s internal knowledge?
External knowledge refers to factual data about the outside world and factual reality. This attribute is considered very important in the new age, when knowledge may function as a commodity, especially in relation to expert systems.38 An intelligent entity is expected to find and utilize data about the outside world, and it is expected to know the facts comprising the factual reality it is exposed to. This attribute assumes memory and the ability to classify information into what seem to be relevant categories. This is the way humans gather their life experience, and this is the way humans learn. It is very difficult to act as an intelligent entity if every factual event is treated as brand new each time, over and over again. Although factual events are new each time, they may have some common characteristics that the intelligent entity should identify.
For example, a medical expert system, which is designed to diagnose diseases according to their symptoms, should identify the common characteristics of the relevant disease in very many cases, although these cases may vary extremely from each other. An entity which has no such ability acts in a similar way to people who suffer from deep amnesia or forgetfulness. They act adequately, but forget their acts and do not gather them into their cumulative experience. This is the way simple machines work: they can carry out a certain task, but do not know they have done it, and have no ability to draw on that or other experiences to guide them in future tasks. The question here is, if inexperienced people may still be considered intelligent, why not machines?
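A rule-based diagnostic core of the kind just described can be sketched in a few lines. The rules and symptom names below are invented purely for illustration and are not taken from the original text; a real medical expert system would hold a far larger, expert-curated knowledge base.

```python
# A toy diagnostic expert system: each rule lists the characteristic
# symptoms of a condition; new cases are matched against the stored rules.
RULES = {
    "influenza":   {"fever", "cough", "muscle aches"},
    "common cold": {"cough", "runny nose", "sneezing"},
    "allergy":     {"runny nose", "sneezing", "itchy eyes"},
}

def diagnose(observed):
    """Return the condition whose characteristic symptoms best overlap the
    observed ones -- the 'common characteristics' reused across many cases."""
    scores = {name: len(observed & symptoms) / len(symptoms)
              for name, symptoms in RULES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "no match"

if __name__ == "__main__":
    print(diagnose({"fever", "cough", "headache"}))            # influenza
    print(diagnose({"sneezing", "itchy eyes", "runny nose"}))  # allergy
```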
Goal-driven conduct refers to the difference between random or arbitrary conduct and intended conduct. Goal-driven conduct requires an operative plan to achieve the relevant goals. For most humans, goal-driven conduct is interpreted as intention. If one is thirsty on a very hot day and sees a glass of cold water, drinking the water is goal-driven conduct, in which the goal is to deal with the thirst. One may say that

37 DOUGLAS R. HOFSTADTER, GÖDEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID 539–604 (1979, 1999).
38 See, e.g., DONALD A. WATERMAN, A GUIDE TO EXPERT SYSTEMS (1986):

   It wasn’t until the late 1970s that artificial intelligence scientists began to realize something quite important: The problem-solving power of a program comes from the knowledge it possesses, not just from the formalisms and inference schemes it employs. The conceptual breakthrough was made and can be quite simply stated. To make a program intelligent, provide it with lots of high-quality, specific knowledge about some problem area.

DONALD MICHIE AND RORY JOHNSTON, THE CREATIVE COMPUTER (1984); EDWARD A. FEIGENBAUM AND PAMELA MCCORDUCK, THE FIFTH GENERATION: ARTIFICIAL INTELLIGENCE AND JAPAN’S COMPUTER CHALLENGE TO THE WORLD (1983).

the person intended to drink the water, with the intention of not being thirsty anymore. Goal-driven conduct is not unique to humans. When a cat sees some milk behind an obstacle, it plans to bypass the obstacle and get the milk. When executing the plan, the cat engages in goal-driven conduct.
Nevertheless, different creatures may have goals of different levels of complexity. The more intelligent the entity is, the more complex its goals are. Some animals may have the goal of calling for help for their master who is in a distress situation, but humans may have goals of reaching outer space, curing deadly diseases, performing genetic engineering and more. Computers have the ability to plan many of these goals, and certain computers are already executing such plans. The reductionist approach to goal-driven conduct dismantles the complicated goal into many simple goals; achieving either is considered goal-driven conduct. Computers may be programmed with recorded goals and plans to achieve them. However, not all humans have complicated goals at all times under all circumstances. The question here is, what level of complexity is required in order to be considered intelligent?
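The reductionist decomposition of a complex goal into simple subgoals can be illustrated schematically. The sketch below is only an illustration under assumed names and is not taken from the original text; it treats a recorded plan as an ordered list of primitive actions.

```python
# A complex goal is decomposed into simple subgoals, each mapped to a
# primitive action the agent already knows how to perform.
PLANS = {
    "get the milk":   ["locate obstacle", "walk around obstacle", "reach bowl", "drink"],
    "quench thirst":  ["find glass of water", "lift glass", "drink"],
}

def execute(goal: str) -> None:
    """Carry out a goal by executing its recorded subgoals in order."""
    for step in PLANS.get(goal, []):
        print(f"performing subgoal: {step}")
    print(f"goal achieved: {goal}")

if __name__ == "__main__":
    execute("get the milk")
```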
Creativity relates to finding new ways of understanding or acting. An intelligent entity is assumed to have some degree of creativity. When a bug tries to get out of a room through a closed window, it will try over and over again, crashing into that window time after time. Repeating exactly the same conduct over and over again is not a symptom of creativity. However, at some point the bug may get tired and seek another way. This would be considered more creative, but in most cases it will rest a while and try to get out the same way over and over again. For a bug, it might take 20 attempts; for a dog it might take fewer, and for a human far fewer. Consequently, dogs are considered more intelligent than most bugs.
A computer may be programmed not to repeat the same conduct more than once and to seek other ways to solve the problem. This kind of program is essential in general problem-solving software. Nevertheless, there are problems whose solution requires repeating the same behavior a few times. Creativity in these cases would prevent the solution seeker from solving the problem. For example, calling someone on the telephone and hearing a busy signal requires repeating the act of calling the addressee over and over again until one is able to speak to the addressee.
In general, creativity has degrees and levels and is not homogeneous. Not all humans are considered to think outside the box, and many humans do their daily tasks exactly the same way, day by day, for years. Most people drive to their workplace by the same route day after day with no change. What makes their creativity different from the above bug’s creativity? Many factory workers perform the same acts for hours, day by day, and they are considered intelligent entities. The question here is, what is the exact level of creativity required to identify intelligence, especially in the context of artificial intelligence?
Not all humans share all of these five attributes, yet they are still considered intelligent. The irritating question was why society should use different standards for humans and machines to measure intelligence. Is intelligence not universally and objectively measured? However, it seemed that any time new software established a specific attribute, criticism rejected the achievement by regarding it as not real

communication, internal knowledge, external knowledge, goal-driven conduct or


creativity.39 Consequently, new tests were proposed in order to make sure that the
relevant artificial intelligence technology is really intelligent.40
Generally, some of these tests relate to representation of knowledge (what the machine knows), decoding (translation of knowledge from factual reality to its representation), inference (extracting the content of knowledge), control of combinatorial explosion (preventing endless calculation for the same problem), indexing (arranging and classifying knowledge), prediction (assessing probabilities of possible factual events), dynamic modification (self-change of programs due to experience), generalization (inductive interpretation of factual events) and curiosity (wondering why, or seeking reasons for factual events).
All the above-mentioned attributes, in their biological sense, are a consequence of the human brain. No one doubts that. These attributes are achieved through the activity of neurons in the human brain, and this activity is capable of being computed. If these attributes are an outcome of the activity of neurons, why can they not be an outcome of transistors, if these transistors are activated in the very same way functionally?41 The simple, and unwise, answer to this question is that artificial intelligence systems are simply not human. This routine of developing and posing new tests whenever a specific artificial intelligence technology succeeds in matching earlier tests has made the quest for the ultimate thinking machine endless. The reason is cultural and psychological rather than purely technological.
When thinking about artificial intelligence, most people imagine humans who merely have a robotic metal appearance. Most people are not willing to compromise on less than that.42 However, people sometimes forget that artificial intelligence happens to be artificial and not human, sometimes abstract and not tangible. When artificial intelligence technology succeeds in a certain test, this is taken to prove that the problem was not in the technology, but in the test itself. The complexity of the human mind is too great to be tested by simple tests, therefore people replace the test. This routine has taught us many things about the human mind rather than about technology, especially about the “bureaucracy” of mind43 and intentionality.44
The template of arguments against the identification and feasibility of a thinking machine as a machine which possesses real intelligent personhood goes like this:

39 See, e.g., the criticism of Winograd, supra note 4, at pp. 178–181.
40 Schank, supra note 36, pp. 9–12.
41 See PHILLIP N. JOHNSON-LAIRD, MENTAL MODELS 448–477 (1983); But see also COLIN MCGINN, THE PROBLEM OF CONSCIOUSNESS: ESSAYS TOWARDS A RESOLUTION 202, 209–213 (1991).
42 HOWARD GARDNER, THE MIND’S NEW SCIENCE: A HISTORY OF THE COGNITIVE REVOLUTION (1985); MARVIN MINSKY, THE SOCIETY OF MIND (1986); ALLEN NEWELL AND HERBERT A. SIMON, HUMAN PROBLEM SOLVING (1972); Winograd, supra note 4, at pp. 169–171.
43 MAX WEBER, ECONOMY AND SOCIETY: AN OUTLINE OF INTERPRETIVE SOCIOLOGY (1968); Winograd, supra note 4, at pp. 182–183.
44 Daniel C. Dennett, Evolution, Error, and Intentionality, THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE 190, 190–211 (Derek Partridge and Yorick Wilks eds., 1990, 2006).

i. For possessing personhood, the entity must have attribute A;
ii. Artificial intelligence technology cannot possess attribute A;
iii. Artificial intelligence technology behavior that may be identified with attribute A demonstrates only that it can simulate or imitate that attribute;
iv. Simulation of attribute A is not A itself; therefore,

artificial intelligence technology is not really intelligent. Some scholars have called this template of arguments the “hollow shell strategy”.45 It can be seen that argument (ii), together with the conclusion, forms some kind of catch-22: artificial intelligence systems are not intelligent because artificial intelligence technology cannot possess a specific attribute of intelligence.
Attribute A may stand for any content of the classic and advanced tests for intelligence. In fact, the advanced tests for intelligence, in this context, embody the concept that intelligence is only human, and that intelligence is whatever a human may do exclusively and a machine cannot. As a result, a paradox has occurred. Although artificial intelligence technology has developed and advanced in giant steps, the frustration with artificial intelligence abilities has grown. Any progress in artificial intelligence research made people understand how far society is from imitating the human mind. In addition, modern society still has much to learn and explore of the mysteries and complexities of the human mind.
Back in the early days of developing artificial intelligence technology, it was very hard to believe that a computer could beat a human in games such as chess. It was thought then that if this happened, computers would be considered “intelligent”. It happened. Moreover, in 1997 a computer won a chess game against the world champion, but still it was not considered “intelligent”.46 In 2011 a computer competed on a TV quiz show against two top champions and won, but still it was not considered “intelligent”.47 The computer did not understand the jokes made about it, but it surely won the quiz. Its skills may be utilized for very advanced expert systems. It is good enough to diagnose diseases, but not to be considered “intelligent” in human eyes.
The development of artificial intelligence technologies brought the world “machine learning”. This kind of learning is inductive: the computer analyzes specific cases and, through acts of generalization, creates a general image of the facts and uses it in the future.48 If it were human, people would call it an experienced expert. However, people would rather not consider it “intelligent”. It seems that the original quest for developing a new species on earth, the species

45 Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231, 1262 (1992); OWEN J. FLANAGAN, JR., THE SCIENCE OF THE MIND 254 (2nd ed., 1991); John Haugeland, Semantic Engines: An Introduction to Mind Design, MIND DESIGN 1, 32 (John Haugeland ed., 1981).
46 MONTY NEWBORN, DEEP BLUE (2002).
47 STEPHEN BAKER, FINAL JEOPARDY: MAN VS. MACHINE AND THE QUEST TO KNOW EVERYTHING (2011).
48 VOJISLAV KECMAN, LEARNING AND SOFT COMPUTING, SUPPORT VECTOR MACHINES, NEURAL NETWORKS AND FUZZY LOGIC MODELS (2001).

of machines or thinking machines, grows more distant with each advance in artificial intelligence technology.
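The inductive generalization described here, in which specific cases are condensed into a general image of the facts and reused later, can be shown with a minimal learner. The sketch below is illustrative only and not taken from the original text; it averages labeled example cases into class prototypes (a nearest-centroid rule) and applies the resulting generalization to a new case.

```python
# A minimal inductive learner: specific labeled cases are generalized into
# one prototype ("general image") per label, which is then used on future cases.
def train(cases):
    """cases: list of (features, label). Returns a mapping label -> prototype."""
    sums, counts = {}, {}
    for features, label in cases:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(prototypes, features):
    """Assign the label of the closest prototype (squared distance)."""
    def distance(label):
        return sum((a - b) ** 2 for a, b in zip(prototypes[label], features))
    return min(prototypes, key=distance)

if __name__ == "__main__":
    past_cases = [([1.0, 0.2], "benign"), ([0.9, 0.1], "benign"),
                  ([0.1, 0.9], "suspect"), ([0.2, 1.0], "suspect")]
    prototypes = train(past_cases)
    print(classify(prototypes, [0.15, 0.95]))  # generalizes to "suspect"
```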
Constructively and positively, there are two major ways to deal with the quest for a real thinking machine. One way is through technological research. Accordingly, artificial intelligence research continues to seek ways to reduce the gap between machines and humans. In fact, since the 1950s most artificial intelligence researchers have chosen this way. Any technological development or improvement in the artificial intelligence field may be related to this way. This is the original way of the quest for a thinking machine. It serves the faith that one day technology will be able to imitate the human mind. This way has produced significant achievements.49 Artificial intelligence research keeps developing, and artificial intelligence technology becomes more advanced than ever.
The second way is the industrial way. As explained below, industry has an interest in the machine not perfectly imitating the human mind. For industry, this is an opportunity to use entities that do not suffer from human problems. Thus, the disadvantages of machines became advantages for industry. Turning the machine’s disadvantages into advantages increased the use of artificial intelligence technology in industry and made it an integral part of it. The industrial use of artificial intelligence technology was the catalyst for the emergence of the delinquent thinking machine.

1.1.3 Daily Usage of Artificial Intelligence Technology

Artificial intelligence technology has been in both private and industrial use for years. As noted above,50 artificial intelligence technology has been embraced in advanced industry since the 1970s. However, whereas in the beginning artificial intelligence technology was used by industry because of its similarity to the human mind, later it was used rather because of its differences from the human mind. It

49 For example, in November 2009, during the Supercomputing Conference in Portland, Oregon (SC 09), IBM scientists and others announced that they succeeded in creating a new algorithm named “Blue Matter,” which possesses the thinking capabilities of a cat. Chris Capps, “Thinking” Supercomputer Now Conscious as a Cat, http://www.unexplainable.net/artman/publish/article_14423.shtml; International Conference for High Performance Computing, Networking, Storage and Analysis, SC09, http://sc09.supercomputing.org/. This algorithm collects information from very many units with parallel and distributed connections. The information is integrated and creates a full image of sensory information, perception, dynamic action and reaction, and cognition. B.G. FITCH ET AL., IBM RESEARCH REPORT, BLUE MATTER: AN APPLICATION FRAMEWORK FOR MOLECULAR SIMULATION ON BLUE GENE (2003). This platform simulates brain capabilities, and eventually, it is supposed to simulate real thought processes. The final application of this algorithm contains not only analog and digital circuits, metal or plastics, but also protein-based biologic surfaces.
50 Above at Sect. 1.1.1.

has been understood by industry, that complete and perfect imitation of human
mind would not be as useful as incomplete ones.
Industry will encourage the development of artificial intelligence technology as long as it does not imitate the human mind completely. Since the way to complete imitation of the human mind is still long, industry and artificial intelligence research still cooperate. Further cooperation, should complete imitation of the human mind become relevant, is not guaranteed.
Industry has actually turned the disadvantages into advantages. For instance, when people take a simple calculator and type “2+2=” repeatedly, they will continue to get the answer “4” each time. If people do that thousands of times, the answer will be exactly the same every single time. The process activated by the calculator will be much the same each time. However, if a human is asked the same question of “2+2=”, he may answer the first time, if he does not think he is being mocked, and perhaps a few more times, but not thousands of times. At some point the human will stop answering out of boredom, irritation, nervousness or loss of any desire to keep on answering.
From the artificial intelligence researchers' point of view, this phenomenon is considered a huge
disadvantage of the machine. It emphasizes the point that the human
mind may act arbitrarily, for irrational reasons, and so on. However, how would people
react if the calculator refused to answer "2+2=", even though it is the thousandth time
they typed it? For this kind of task people prefer someone or something that is not
bored by their requests or caprices, that is not irritated by their questions and that will
serve them well even if it is the thousandth time they ask for the very same thing.
It appears that most humans have no such ability, precisely because they possess a
human mind. Machines, which have not succeeded in completely imitating the human
mind, are able to provide this service for us. The above example may seem
theoretical, since no one really types "2+2=" a thousand times on a calculator;
indeed, typing it thousands of times would itself require non-human skills. However,
these machine skills are required in a major part of industry.
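The contrast can be put in a minimal sketch (Python, chosen here only for illustration; the names and the "patience" threshold are this sketch's assumptions, not anything from the sources cited). A deterministic routine returns the identical answer on the thousandth query just as on the first, whereas the human-like refusal has to be modelled as an extra state that degrades with repetition.

```python
# Illustrative sketch only: contrasts a tireless deterministic "calculator"
# with a hypothetical human-like responder that eventually stops answering.

def calculator(expression: str) -> str:
    """Evaluates the same input to the same output, no matter how often asked."""
    if expression == "2+2=":
        return "4"
    raise ValueError("unsupported expression")

class BoredResponder:
    """Hypothetical human-like responder: refuses after a patience threshold."""
    def __init__(self, patience: int = 5):
        self.patience = patience
        self.asked = 0

    def answer(self, expression: str) -> str:
        self.asked += 1
        if self.asked > self.patience:
            return "stop asking me that"
        return calculator(expression)

if __name__ == "__main__":
    human_like = BoredResponder(patience=3)
    for _ in range(1000):
        machine_reply = calculator("2+2=")       # "4", every single time
        human_reply = human_like.answer("2+2=")  # degrades after a few repetitions
    print(machine_reply, "|", human_reply)
```

The point of the sketch is only that the machine's lack of the "boredom" state, a disadvantage for imitating the human mind, is exactly what industry values.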
Consider the customer service of a large company that serves hundreds of
thousands of customers. The customer-service representatives are required to
be polite and helpful with each customer, regardless of the content of the customer's
request. How would such a representative act after one call? After one hundred calls?
After one thousand calls? How would this affect the quality of service? The
machine's technological "disadvantage" of not getting bored, irritated or tired is a
pure advantage for industry. An automatic customer-service system serves the
thousandth customer exactly the same way it served the first: politely,
patiently, efficiently and accurately.
Expert systems for medical diagnosis are preferred because they do not get bored by
repeatedly encountering identical problems in different patients. Police robots are preferred
because they are not frightened of dismantling highly dangerous explosives. Factory robots are
preferred because they do not get bored by repeating an identical activity thousands of times
every day. The non-human abilities of artificial intelligence technology have been
harnessed to industrial needs. The traditional disadvantages of artificial intelligence
technology, which have been considered as such by artificial intelligence
research, have been turned into advantages, and they play a major role in the decision to
use artificial intelligence technology in modern industry.51
In fact, these converted disadvantages are not considered advantages exclusively
for industrial needs. Artificial intelligence research, together
with industry, has brought this technology into private consumption. Personal robot
assistants based on artificial intelligence technology are attainable. Moreover,
artificial intelligence robots are expected to enter family and private life, even
in the most intimate situations. "Love and sex with
robots" has already been suggested.52 Sex robots may offer a much better alternative to prostitution, one that is
much healthier for society. No shame, abuse, mental harm or physical harm
would occur through the artificial intelligence alternative, and the robot will
never be disgusted by its clients' sexual requests. This may cause real social
change in this context.
In the same way, household robots are not insulted if asked to repeat the same actions over and over
again. Robots do not require vacations or salary raises and do not ask for
favors. Teacher robots are not likely to teach matters other than those they were programmed
to teach. Prison-guard robots are not likely to be bribed into disregarding a prisoner's
escape. These non-human skills have made artificial intelligence technology
very popular for both industrial and private needs.
Characterizing the artificial intelligence technology required for these
needs places it below the complete and perfect imitation of the human mind: it has
some human skills, but only a partial, imperfect and incomplete imitation of the
human mind. Such artificial intelligence technology is not yet a thinking machine,
but it does possess some of the human skills of problem-solving and it imitates
some of the abilities of the human mind. These existing skills of artificial intelligence
technology, which are already used for industrial and private needs, were the
skills relevant to the emergence of the delinquent thinking machine.

1.2 The Development of the Modern Technological Delinquency

1.2.1 The Aversion to Wide Usage of Advanced Technology

Reports by advanced-technology researchers indicate and predict that artificial intelligence
technology is heading towards wide usage. That means that human society is preparing
itself to treat artificial intelligence technology as an integral, routine part of its daily life.
Some researchers even point to a coexistence with artificial intelligence
technology as these machines become thinking machines rather than "thinking"

51
TERRY WINOGRAD AND FERNANDO C. FLORES, UNDERSTANDING COMPUTERS AND COGNITION: A NEW
FOUNDATION FOR DESIGN (1986, 1987); Tom Athanasiou, High-Tech Politics: The Case of Artificial
Intelligence, 92 SOCIALIST REVIEW 7, 7–35 (1987); Winograd, supra note 4, at p. 181.
52
DAVID LEVY, LOVE AND SEX WITH ROBOTS: THE EVOLUTION OF HUMAN-ROBOT RELATIONSHIPS (2007).

machines. These researchers predict that this coexistence will begin and become established during the
third or fourth decade of this century.53
In fact, under the Fukuoka World Robot Declaration issued in 2004, these
technologies are anticipated to co-exist with humans, assist humans both physically
and psychologically, and contribute to the realization of a safe and peaceful
society.54 It is accepted that there are two major types of these technologies.55
The first is new-generation industrial technologies, which are capable of
manufacturing a wide range of products, performing multiple tasks and working
alongside human employees. The second is new-generation service technologies,
which are capable of performing such tasks as house cleaning, security, nursing,
life support and entertainment, all in co-existence with humans in homes and
business environments.
In most forecasts published to the public, the authors added their evaluation
of the level of danger to humans and society arising from the use of these
technologies, whether in tangible form (e.g., robots) or in intangible form
(e.g., software running on certain computers or on the net).56 These evaluations
provoked a debate over safety in using advanced technologies, regardless of the actual
level of danger estimated. Most mature people think about the safety of an object only
when it is considered dangerous. This is true for advanced technologies no
less than for any other object. The accelerated technological development of
artificial intelligence technology has caused many fears about it.
For instance, one of the first natural reactions to seeing an advanced robot as a
medical caregiver providing nursing care is fear that it will hurt the assisted
human. Would all humans be ready to place their babies and children under the nursing
care of such advanced non-human technologies? Most humans are not experts
in technological issues, and most humans fear what they do not know. The
consequence is fear of this technology.57 Consequently, when people are

53
Yueh-Hsuan Weng, Chien-Hsun Chen and Chuen-Tsai Sun, Toward the Human-Robot Co-Ex-
istence Society: On Safety Intelligence for Next Generation Robots, 1 INT. J. SOC. ROBOT. 267, 267–
268 (2009).
54
International Robot Fair 2004 Organizing Office, World Robot Declaration (2004) available via
http://www.prnewswire.co.uk/cgi/news/release?id=117957.
55
The word “technologies” refers to all types of applications using artificial intelligence
technologies, including physical-tangible robots and abstract software.
56
Stefan Lovgren, A Robot in Every Home by 2020, South Korea Says, National Geographic News
(2006) available via http://news.nationalgeographic.com/news/2006/09/060906-robots.html.
57
Moreover, the vacuum created by the lack of knowledge and of certainty is sometimes fed by
science fiction. In the past, science fiction was rare and consumed by a small group of people. Today,
most people consume science fiction through Hollywood. Most blockbusters of the 1980s, 1990s
and the twenty-first century are classified as science fiction movies. Analyzing most of these films
reveals mostly fear. If we go over these films, we might be able to understand what the public is
being fed by. In 2001: A Space Odyssey (1968), based on Clarke’s novel, Arthur C. Clarke, 2001:
A Space Odyssey (1968), the central computer of the spaceship is out of human control, autonomous,
and attempts to assassinate the crew. Safety is restored only when the computer is shut
down, 2001: A Space Odyssey (Metro-Goldwyn-Mayer 1968). In the series of The Terminator the

thinking of advanced technology, besides thinking of its utility and its unquestionable
advantages, they think of how to be protected from it. People may accept
the idea of wide usage of advanced technology only if they feel safe from that
technology.58
The derivative question is what mechanisms of protection may be used by
humanity to secure a safe co-existence with artificial intelligence technology.59
The first to point out the dangerousness of this advanced technology was science fiction
literature, and consequently it was also the first to suggest protection against it. The first
circle of protection suggested was an ethics focused on safety. The ethical
issues were addressed to the designers and programmers of these entities, so that they would
construct built-in software which would prevent any unsafe activity of this technology.60
One of the pioneering attempts to create an ethics of advanced technology was
Isaac Asimov's.
Asimov stated his three famous "laws" of robotics in his science fiction
collection I, Robot of 195061:

(1) A robot may not injure a human being or, through inaction, allow a human
being to come to harm;
(2) A robot must obey the orders given it by human beings, except where such
orders would conflict with the First Law;

machines are taking over humanity, which is almost extinct. A few survivors establish resistance
forces to oppose the machines. In order to survive, all machines must be shut down; even the
savior, which happens to be a machine, shuts itself down. Terminator 2: Judgment Day (TriStar
Pictures, 1991). In the trilogy of The Matrix the machines dominate the earth and enslave humans
to produce energy for the benefit of the machines. The machines control humans through mind
control by creating the illusion of a fictional reality, the "matrix". Only a few manage to escape from the matrix;
they suffer from inferiority in relation to the machines, and they fight for their freedom from the
machines' domination. The Matrix (Warner Bros. Pictures 1999); The Matrix Reloaded
(Warner Bros. Pictures 2003); The Matrix Revolutions (Warner Bros. Pictures 2003). In I, Robot
(2004), based on Isaac Asimov's 1950 book, an advanced model of robots hurts people, one robot
is suspected of murder, and the hero is a detective who does not trust robots. The overall plot is an
attempt by robots to take over humans, I, Robot (20th Century Fox, 2004). The influence of
Hollywood is vast. If this is what the public is fed, we should expect fear to dominate the public
mind towards artificial intelligence and robotics. The more advanced the robot is, the more dangerous it
is. One of the popular themes in science fiction literature and films is robots rebelling and
taking over.
58
See more in Dylan Matthews, How to Punish Robots when they Inevitably turn against Us?, The
Washington Post (March 5, 2013); Leon Neyfakh, Should We Put Robots on Trial?, The Boston
Globe (March 1, 2013); David Wescott, Robots Behind Bars, The Chronicle Review (March
29, 2013).
59
Yueh-Hsuan Weng and Chien-Hsun Chen and Chuen-Tsai Sun, The Legal Crisis of Next
Generation Robots: On Safety Intelligence, PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE
ON ARTIFICIAL INTELLIGENCE AND LAW 205–209 (2007).
60
See, e.g., Roboethics Roadmap Release 1.1, European Robotics Research Network (2006)
available via http://www.roboethics.org/atelier2006/docs/ROBOETHICS%20ROADMAP%
20Rel2.1.1pdf.
61
ISAAC ASIMOV, I, ROBOT 40 (1950).

(3) A robot must protect its own existence, as long as such protection does not
conflict with the First or Second Laws.

When published, these so-called "laws" were considered innovative and could
have given a disturbed and terrified public some calm. After all, harming humans
is not allowed.
Although Asimov referred specifically to robots, these "laws" may easily be
generalized and applied to artificial intelligence technologies, regardless of their
tangibility. Therefore, these "laws" may be applicable to both tangible robots and
abstract software. However, modern analysis of these "laws" paints a less
promising picture. The first two "laws" represent a human-centered approach to
safety in relation to artificial intelligence technology. They represent the general
approach that, as artificial intelligence technology gradually takes on greater numbers
of intensive and repetitious jobs outside industrial factories, it becomes increasingly
important for safety rules to support the concept of human superiority over
advanced technologies and machines.62
The third "law" straddles the borderline between human-centered and machine-centered
approaches to safety. The functional purpose of advanced technology is to
satisfy human needs; therefore, in order to perform these functions, such technologies should
protect themselves, functioning as human property. However, these ethical rules were
insufficient, ambiguous and not broad enough, as Asimov himself admitted.63
For instance, suppose an artificial intelligence robot is in military service. The robot's
task is to protect hostages taken by a terrorist. At some point the
terrorist intends to shoot one of the hostages. The robot understands the situation,
and under the relevant circumstances the only way to prevent the murder of the innocent person
is to shoot the terrorist. Focusing on the first "law", on the one hand
the robot is prohibited from killing or injuring the terrorist, and on the other hand
the robot is prohibited from letting the terrorist kill the hostage. There are no other
options. Under this first "law", what exactly does society expect the robot to do? What
would society expect a human to do instead? Any solution breaches the
first "law".
Examining the other two "laws" at this point does not change the outcome.
If the human military commander orders the robot to shoot the terrorist,
that order contradicts the first "law". Even if the military commander himself is
in immediate danger, it is impossible for the robot to act. Even if the
terrorist intends to blow up ten hostages, the robot is not allowed to protect their
lives by injuring the terrorist. Of course, if the military commander is itself a robot

62
JERRY A. FODOR, MODULES, FRAMES, FRIDGEONS, SLEEPING DOGS AND THE MUSIC OF THE SPHERES,
THE ROBOT’S DILEMMA: THE FRAME PROBLEM IN ARTIFICIAL INTELLIGENCE (Zenon W. Pylyshyn
ed., 1987).
63
Isaac Asimov himself wrote in his introduction to The Rest of Robots that “[t]here was just
enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new
stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the
sixty-one words of the Three Laws." ISAAC ASIMOV, THE REST OF ROBOTS 43 (1964).

or an advanced piece of software that makes the required decisions, matters become more
complicated, but the outcome does not change.
Under the third "law", even if the robot itself is in danger, the outcome
does not change. This dilemma of the military robot is not rare in an advanced
technological society. Any activity of artificial intelligence technology in
such a society is beset by such dilemmas. Consider an artificial intelligence device in medical
service that is required to perform an emergency surgical procedure. The procedure is
intrusive, intended to save the patient's life, and if it is not performed within the
next few minutes, the patient dies. The patient objects to the procedure. Any
action or inaction of the artificial intelligence device contradicts the first
"law". An order from a superior is irrelevant to solving the dilemma, since an order to act
causes injury to the patient, and an order to refrain from acting causes the patient's certain
death.
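The deadlock can be made concrete in a minimal sketch (Python, purely for illustration; the scenario encoding and field names are assumptions of this sketch, not Asimov's text). A naive rule-check of the First Law against the hostage scenario flags every available option, including inaction, as a violation, so the rule yields no permissible action at all.

```python
# Illustrative sketch: a naive encoding of Asimov's First Law applied to the
# hostage scenario discussed above. Every option violates the rule, so the
# rule gives the robot no usable instruction.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    injures_human: bool            # the action itself injures a human being
    allows_harm_by_inaction: bool  # choosing it lets a human come to harm

def violates_first_law(action: Action) -> bool:
    """First Law: a robot may not injure a human being or, through inaction,
    allow a human being to come to harm."""
    return action.injures_human or action.allows_harm_by_inaction

# Hypothetical options available to the military robot in the example.
options = [
    Action("shoot the terrorist", injures_human=True, allows_harm_by_inaction=False),
    Action("do nothing", injures_human=False, allows_harm_by_inaction=True),
]

permissible = [a.name for a in options if not violates_first_law(a)]
print(permissible)  # [] -- no option satisfies the First Law in this scenario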
Dilemmas remain, although easier ones, when one option involves no
injury to the human body. The usage of artificial intelligence devices
as prison guards was mentioned above;64 in that case, for example, how exactly should such a device act when a
prisoner attempts to escape and the only way to stop the escape involves injuring the
prisoner? What should a sex robot do when ordered to engage in sadistic sexual
contact? If the answer is not to act, the question is why people
use these advanced devices in the first place. If the answer is to act, it contradicts the first "law".
Industry defines the purposes of artificial intelligence devices (robots or any
other artificial entities) as serving human society (as a society or as private individuals) in various
situations, and these purposes may involve difficult decisions that these entities must make.
The terms "injury" and "harm" may be broader than specific bodily
harm, and these entities may harm people in ways other than bodily harm. Moreover,
in various situations causing one sort of harm should be preferred in order to
prevent a greater harm. In most cases, such a decision involves complicated judgment
which exceeds simple, dogmatic rules of ethics.65
The debate over Asimov's "laws" raises the debate over the machine's
capability for moral accountability.66 The moral accountability of artificial intelligence
technology is part of the modern formation of the characteristics of the
thinking machine.
However, artificial intelligence technologies do exist, do participate in human
daily life in both industrial and private environments, and they do cause
harm from time to time, regardless of their capability or incapability for moral
accountability. Thus, the sphere of ethics is unsuitable for settling the issue. Ethics
requires moral accountability and complicated inner judgment, and relies on rules that are unsuitable for the task.

64
Above at Sect. 1.1.1.
65
Susan Leigh Anderson, Asimov’s “Three Laws of Robotics” and Machine Metaethics, 22 ARTIFI-
CIAL INTELLIGENCE SOC. 477–493 (2008).
66
Stefan Lovgren, Robot Codes of Ethics to Prevent Android Abuse, Protect Humans, National
Geographic News (2007) available via http://news.nationalgeographic.com/news/2007/03/
070316-robot-ethics.html.

1.2.2 Delinquency by Technology

Artificial intelligence technology can be used in various ways in both industry
and private life. It may be assumed that this technology will become more
advanced in the future as artificial intelligence research develops over
time. The industrial and private uses of this technology widen the range of tasks
that artificial intelligence technology can undertake. The more advanced and complicated
the tasks are, the higher the chances of failure in accomplishing them.
Failure is a wide term, which in this context includes various situations. What
these situations have in common is that the task undertaken has not been accomplished
successfully.
However, some failure situations may involve harm and danger to individuals
and society. For instance, for prison-guard artificial intelligence software, the task
has been defined as preventing escape from prison using the minimal force that may
harm the escaping prisoners. For that task, the software may apply force through
tangible robots, electric systems, etc. At some point a prisoner attempts to escape.
The attempt is discovered by the prison-guard software, which sends a tangible
robot to handle the situation. It is immaterial whether the tangible robot is part of
the artificial intelligence software and acts according to its orders, or whether it is
an independent entity equipped with artificial intelligence software of
its own.
The tangible robot sent to handle the situation prevents the prisoner's attempt
from succeeding by holding the prisoner firmly. The prisoner
is injured and claims excessive use of force. Analysis of the tangible robot's actions
(or of the software's actions, if the robot acted on specific orders from it) reveals that
although it could have chosen a more lenient action, it chose the specific harmful
one. The reason is that the robot assessed the risk as
unreasonably graver than it actually was. Consequently, the legal question
in this situation is who is responsible for the injury.
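A minimal sketch (Python, illustration only; the thresholds, field names and risk model are hypothetical assumptions of this sketch, not a description of any real system) shows what the over-estimation described above looks like as a decision routine: an inflated risk score pushes the selection past the lenient responses even though lesser force would have sufficed.

```python
# Illustrative sketch only: a force-selection routine of the kind described
# above. The weights and thresholds are hypothetical; the point is that an
# inflated risk estimate leads to a harsher response than was necessary.

RESPONSES = ["verbal warning", "block the exit", "hold the prisoner firmly"]

def estimate_risk(distance_to_wall_m: float, carries_weapon: bool) -> float:
    """Toy risk score in [0, 1]. The over-weighted proximity term is what
    'evaluating the risk as graver than it actually was' looks like in code."""
    risk = 0.9 if carries_weapon else 0.3
    risk += max(0.0, (20.0 - distance_to_wall_m) / 20.0) * 0.6  # over-weighted proximity
    return min(risk, 1.0)

def select_response(risk: float) -> str:
    """Maps the risk estimate to the least forceful response deemed adequate."""
    if risk < 0.4:
        return RESPONSES[0]
    if risk < 0.7:
        return RESPONSES[1]
    return RESPONSES[2]

# Unarmed prisoner nearing the wall: lesser force would have stopped him,
# but the inflated risk score selects the harmful response.
risk = estimate_risk(distance_to_wall_m=5.0, carries_weapon=False)
print(round(risk, 2), "->", select_response(risk))  # 0.75 -> hold the prisoner firmly
```

Whether the miscalibration lies in the weighting, the thresholds, the training data or elsewhere, the legal question stated above remains the same: who is responsible for the resulting injury.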
Such questions provoke deep reflection on the responsibility of the artificial
intelligence entity, and many further arguments. If analyzed through ethics and
morality, most scientists would argue that the failure is the programmer's or the
designer's, not the software's. The software itself is incapable of consolidating
the moral accountability required to be responsible for any harm caused by its
actions. According to this point of view, only humans can consolidate such moral
accountability. The software is nothing but a tool in the hands of its programmer,
regardless of its quality or cognitive abilities and regardless of its
physical appearance, tangible or not.
This argument is related to the debate over the existence of the thinking machine.
Moral accountability is indeed too complicated, not only for machines but for
humans as well. Morality, in general, has no common definition acceptable
to all societies and individuals. Deontological morality and teleological morality
are the most widely accepted types of morality, and in very many situations they direct
opposite actions.67 As morality is highly difficult to assess, moral accountability
is not necessarily the most appropriate and efficient way to evaluate responsibility
in cases of the type exemplified above.
In the given context of the social responsibility of artificial intelligence, the discussion
always returns to the debate on the conceptual ability of machines to become
human-like. This, of course, applies to the quest for the thinking machine, and
consequently this quest becomes a quest for artificial intelligence
accountability. The relevant question here exceeds the technological one. In
fact, the relevant question here is mostly a social one: how does the modern
community prefer to evaluate responsibility
in cases of harm and danger to individuals and society?
In the daily life of human society, the main social tool for dealing with such situations is
the criminal law. In fact, it has been designed for these social purposes. The
criminal law defines the criminal liability of individuals who harm society or
endanger it. The criminal law also has educative social value, for it teaches
individuals how to behave within their society. For example, the criminal law
prohibits rape through the creation of the specific offense of rape, i.e., the criminal law
defines what is considered rape and prohibits it. This has the value of punishing
individuals for rape ex post and of prospectively educating individuals not to rape ex
ante, as part of the rules of living together in the relevant society. Thus, the criminal
law plays a major role in social control, as it creates the actual legal social control.68
In every legal system around the world, the criminal law is considered the most
efficient social measure for educating individuals against anti-social behavior
and for directing individual behavior. This measure is far from perfect,
but under modern circumstances and resources it is the most efficient one. If it is
efficient towards human individuals, it calls for examination whether it is efficient towards
non-human entities, namely artificial intelligence technology. Of course, the first
step in evaluating the efficiency of criminal law towards machines is to
examine the applicability of the criminal law to them.
That raises the acute question in this context: whether machines may be subject
to criminal law under the modern concepts of criminal liability. Since criminal law
does not depend on moral accountability, the debate on the moral accountability
of machines is irrelevant to this question. Although criminal liability,
some types of moral accountability and some types of ethics may sometimes match each
other, this is not necessary in order to impose criminal liability. The question of the
applicability of criminal liability to a non-human entity is composed of two

67
See, e.g., GILBERT RYLE, THE CONCEPT OF MIND 54–60 (1954); RICHARD B. BRANDT, ETHICAL
THEORY 389 (1959); JOHN J.C. SMART AND BERNARD WILLIAMS, UTILITARIANISM – FOR AND AGAINST
18–20 (1973).
68
See, e.g., Justine Miller, Criminal Law – An Agency for Social Control, 43 YALE L. J. 691
(1934); Jack P. Gibbs, A Very Short Step toward a General Theory of Social Control, 1985 AM.
B. FOUND RES. J. 607 (1985); K. W. Lidstone, Social Control and the Criminal Law, 27 BRIT.
J. CRIMINOLOGY 31 (1987); Justice Ellis, Criminal Law as an Instrument of Social Control,
17 VICTORIA U. WELLINGTON L. REV. 319 (1987).

cumulative secondary questions. The first is whether criminal liability is applicable
to non-human entities, and the second is whether criminal punishments are
applicable to non-human entities.
The first question is considered one of the deepest issues of criminal law.
Imposition of criminal liability on any offender depends on the fulfillment of
the basic requirements of criminal liability. If, and only if, these requirements
are met does the question of punishability arise, i.e., how human society can punish
non-human entities which are recognized as offenders. The answers proposed in
this book are positive for both questions. If the answers to these questions are
indeed affirmative, then a new social creature has been recognized by the law. This
social creature may be described, for legal purposes, as a delinquent thinking
machine, regardless of its physical appearance. Abstract software may be described
as a "machine", or as part of a "machine", for this purpose.
The delinquent thinking machine may also be considered the inevitable
byproduct of humanity's sincere efforts to create the thinking machine. The technological
response to the quest for the thinking machine, discussed above,69 produced
advanced developments in artificial intelligence technology, which made it possible to
imitate human mind skills much better than before. Artificial intelligence technology
of the second decade of the twenty-first century is capable of very many actions
that were previously considered science fiction.
Each step along this road of technological development is another step in the
evolution of the thinking machine. The quest for it has made the upper and lower
thresholds of the thinking machine very high, so high that in fact one is
required to be human in order to be considered a thinking machine. The research is
still going on, and the technological race continues as well. However, criminal
liability does not necessarily require all human skills. In order to be considered an
offender, one does not have to use all of the human skills, whether one possesses them
or not.
Take as an example the attribute of communication, discussed above.70 Basically,
there are very many types of offenders. Some of them are considered
communicative, some are not. When examining offenders' criminal liability for
specific offenses, their human skills of communication are not
even considered. Society imposes criminal liability for the perpetration of offenses
whether the offender was communicative in committing the offense or was the most
uncommunicative offender ever. Since communication is not a condition for the
imposition of criminal liability on any offender, it is not considered in
the legal process.
Legally, once the basic requirements of criminal liability for the specific offense
are met, no other qualifications, skills or thoughts are additionally considered. One
may argue that although the perpetration of an offense does not require communication,
humans are assumed to be communicative, and that therefore criminal

69
Above at Sect. 1.1.3.
70
Above at Sect. 1.1.2.

liability is applicable to them. This kind of argument may be relevant to the
debate on the quest for the thinking machine, but certainly not to the question of
criminal liability. Human skills which are irrelevant to the commission of the
specific offense, if not specifically required by the law, are not considered within
the legal process concerning criminal liability.
Thus, whether or not the specific offender was communicative during the
commission of the offense is irrelevant to the imposition of criminal liability,
since there is no such requirement for criminal liability. Whether or not the specific offender
was communicative outside the commission of the offense is irrelevant
as well, since the legal process is bound to concentrate only on the facts
involved in the commission of the offense. The same holds for many other
attributes considered necessary for the recognition of thinking machines.
It seems that along the road of research towards the creation of the thinking machine, a
very significant byproduct has been created. This byproduct does not have the skills to be
considered a thinking machine, for it cannot fully imitate the human mind.
However, its skills are adequate for various activities
in industry and in private use. The absence of some skills is even considered an
advantage when focusing on its industrial and private use, although
artificial intelligence research considers that absence a disadvantage ("advantaging
the disadvantages").71
The relevant type of activity that this byproduct is capable of is the commission
of offenses. This byproduct is perhaps not capable of very many types of creative
activity, but it is capable of committing offenses. The basic reason lies mostly in
the definitions and requirements of criminal law for the imposition of both criminal
liability and punishments. These requirements are satisfied by capabilities
far lower than those required to create a thinking machine. In the context
of criminal law, as long as its requirements are met, there is nothing to prevent the
imposition of criminal liability, whether the subject of the criminal law is human or not.
In fact, this is the logic behind the criminal liability of corporations. Through
the eyes of criminal law, a new type of subject may be added to the vast group of
subjects of criminal law, in addition to human individuals and corporations, as long
as all the relevant requirements of criminal law are met. These subjects of criminal law
may be called "delinquent thinking machines". The delinquent thinking machine is
not one type of the general "thinking machine", but an inevitable byproduct of
it. This byproduct is considered less technologically advanced, since
belonging to this category of machines does not require the very high-level skills that
characterize the real thinking machine.72
This new offender is a stopping point on the race to the top. It is a less advanced
summit, since the requirements for criminal liability are much lower than those of the race
towards the ideal thinking machine. However, the new offender does not represent a
race to the bottom, but only a stopping point on the race to the top. From the

71
See above at Sect. 1.1.3.
72
HANS MORAVEC, ROBOT: MERE MACHINE TO TRANSCENDENT MIND (1999).

criminal law point of view, the technological research may rest at this point, since
entities to which the criminal law is applicable already exist. No further
technological development is required in order to impose criminal liability upon
artificial intelligence technology.
As noted above, the imposition of criminal liability upon artificial intelligence
technology depends on the match between the requirements of criminal law and
the relevant skills and abilities of the entity's technology. Some researchers argue
that the current law is inadequate for dealing with artificial intelligence technology
and that a new legal sphere must be developed.73 However, when focusing
on the criminal law, current criminal law is adequate to deal with artificial intelligence
technology. Moreover, if technology were to advance significantly towards
the creation of a virtual offender, that would make the current criminal law even more
relevant for dealing with artificial intelligence technology.
The reason is that such technology imitates the human mind, and the human mind is
already subject to current criminal law. The closer this technology approaches
complete imitation of the human mind, the more relevant the current criminal law is
for dealing with it. Subjecting artificial intelligence technology to the criminal law may
supply a calming cure for human fears about the wide applicability of advanced
technology.
The criminal law plays a major role in ensuring the personal confidence of
individuals in society. Each individual knows that all other individuals in
society are bound to obey the law, especially the criminal law. If the law is breached
by any individual, it is enforced by society through its relevant coercive
powers. If any individual is not subject to the criminal law, the personal confidence
of the other individuals is severely harmed. The other individuals know that if that
specific individual breaches the law, nothing happens, and that this individual has
no incentive to obey the law.
This works the same way for all potential offenders, human or not
(corporations, for instance).74 If any of these offenders is not subject to criminal
law, the other individuals' personal confidence is harmed. More comprehensively,
the personal confidence of the whole society is harmed. Consequently,
society should make every required effort to subject all relevant entities to its
criminal law. Sometimes this requires conceptual changes in the general
insights of the relevant society.
Thus, when the fear of corporations that were not subject to criminal law
became substantial, criminal law was made applicable to them in the seventeenth
century. So, perhaps, should it be with artificial intelligence technology, in order for
society to dispel its fears of it.75

73
DAVID LEVY, ROBOTS UNLIMITED: LIFE IN A VIRTUAL AGE (2006).
74
For wider aspect see, e.g., Mary Anne Warren, On the Moral and Legal Status of Abortion,
ETHICS IN PRACTICE (Hugh Lafollette ed., 1997); IMMANUEL KANT, OUR DUTIES TO ANIMALS (1780).
75
David Lyons, Open Texture and the Possibility of Legal Interpretation, 18 LAW PHIL. 297, 297–
309 (1999).

1.2.3 Modern Analogies of Liability

In society, creatures other than humans, corporations and artificial intelligence
technology live in co-existence with humans. These creatures are the animals. The
question that may accordingly arise is why the law should not apply to artificial
intelligence technology the legal rules relating to animals. Although most
animals are not considered intelligent by humans, humans and animals have lived
in co-existence for several millennia. Since humanity first domesticated some
animals for its own benefit, the co-existence of humans and animals has been very
intensive. Human society eats animals' products, is fed by their flesh, is protected by
them, employs them and uses them for both industrial and private needs. Human
society is even responsible for the creation of some new species as a byproduct of
the process of domestication (the emergence of dogs from wolves, for instance).
Consequently, the law has addressed animals and human co-existence with them since
ancient days.76 Examination of the law concerning animals reveals two major aspects.
The first is the treatment of animals as the property of humans, and the
second is the duty to show mercy towards animals. The first aspect covers
ownership, possession and other property rights of humans in animals. Consequently,
if damage is caused by an animal, the person legally responsible for it is the
human who holds the relevant property rights in the animal.
Generally, such cases fall under tort law, and in some countries under
criminal law as well. However, whether it is tort law, criminal law or
both, the legal responsibility is the human's, not the animal's, at least not directly.
Only if the animal is considered too dangerous to society is it incapacitated. The
incapacitation is executed mostly by killing the animal. This was the case when an
ox gored a human in ancient times,77 and this is the case when a dog bites humans
under certain circumstances in modern times. No legal system considers an animal
to be, in itself, a direct subject of the law, especially not of criminal law, regardless of the
animal's degree of "intelligence".
The second aspect, the duty to show mercy towards animals, is directed at
humans. Humans are bound to treat animals mercifully. Since humans are regarded
as superior to animals due to human intelligence, animals are regarded as
helpless. Consequently, human law prohibits the abuse of power by humans against
animals because of its cruelty. The subjects of these legal provisions are humans, not the
animals. The animals abused by humans have no standing in court. The legal
"victim" in these cases is society, not the animals; therefore these legal
provisions are part of the criminal law in most cases.78
The prosecution accuses the offender who abused animals because doing so harms
society, not the animals. The prosecution accuses offenders on behalf of the

76
See, e.g., Exodus 22:1,4, 9,10; 23:4,12.
77
Exodus 21:28–32.
78
See more generally in Gabriel Hallevy, Victim’s Complicity in Criminal Law, 2.2 INT’L
J. PUNISHMENT AND SENTENCING 72 (2006).

whole society and not on behalf of the injured animal. This is also the situation with
human victims, where the prosecution does not accuse the offender on behalf of the
victims of the offense, but on behalf of society, in order to let public order
prevail.
Consequently, this kind of indirect protection of animals differs little from
the protection of property. Most criminal codes prohibit damaging another's property
in order to protect the property rights of the possessor or owner. However, this kind
of protection has nothing to do with property rights. The legal owner of a cow may
be indicted for cruelly abusing the cow, regardless of the property rights in the
cow. These legal provisions, which have existed since ancient times, form a legal model
which is supposed to be applicable to animals. The inevitable question at this point
is why this model should not be relevant for artificial intelligence technology.
There are three types of entities: humans, animals and artificial intelligence. If
people wish to subject artificial intelligence technology to the criminal law
of humans, they should justify the resemblance between humans and artificial
intelligence technology in this context. In fact, they should explain why artificial
intelligence technology resembles humans more than animals. Otherwise, the
above legal model should be satisfactory and adequate for regulating artificial
intelligence activity. The interesting question is which artificial intelligence
technology resembles more: humans or animals.
The above legal model has previously been examined for artificial intelligence
technology controlling unmanned aircraft,79 for "new generation robots"80 and for
other machines.81 For some of the legal issues that arose, the zoological legal model
could supply answers, but the core problems were not solved by this model.82 When
the artificial intelligence entity could work out its activity alone, using its software,
something in the legal-responsibility puzzle was still missing. Communicating
complicated ideas is much easier with artificial intelligence technology
than with animals. The same is true for external knowledge and for the quality
of reasonable conclusions in various situations.
An artificial intelligence entity is programmed by humans according to human
formal logical reasoning. This is the core basis of the artificial intelligence entity's
activity. Its calculations are explicable through human formal logical reasoning. Most
animals, in most situations, lack this type of reasoning. It is not that animals are
unreasonable, but their reasoning is not necessarily based on human formal logic.

79
JONATHAN M.E. GABBAI, COMPLEXITY AND THE AEROSPACE INDUSTRY: UNDERSTANDING EMERGENCE
BY RELATING STRUCTURE TO PERFORMANCE USING MULTI-AGENT SYSTEMS (Ph.D. Thesis, University of
Manchester, 2005).
80
Wyatt S. Newman, Automatic Obstacle Avoidance at High Speeds via Reflex Control,
PROCEEDINGS OF THE 1989 I.E. INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION 1104
(1989).
81
Craig W. Reynolds, Herds and Schools: A Distributed Behavioral Model, 21 COMPUT. GRAPH.
25–34 (1987).
82
See, e.g., Gabriel Hallevy, Unmanned Vehicles – Subordination to Criminal Law under the
Modern Concept of Criminal Liability, 21 J. OF LAW, INFO. & SCI. 311 (2011).

Emotionality plays a major role in the activity of most living creatures, both animals
and humans. Emotionality may supply the drive and motivation for some human
activity, as well as for animal activity. This is not the case with artificial
intelligence software.
If measured by emotionality, humans and animals are much closer to each other
than to artificial intelligence software. However, if measured by pure rationality,
artificial intelligence software may be closer to humans than to
animals. Although emotionality affects rationality and rationality affects emotionality,
the law still distinguishes between them with regard to its applicability. For the law,
especially the criminal law, rationality is the main factor to be considered. Emotionality
is considered only in relatively rare cases. For instance, in convicting a
person of rape, the feelings behind the rape are insignificant; only the elements
of the offense matter.83
Moreover, the legal model for animals educates humans to be merciful towards
animals, as noted above. This consideration is central to the model. However, in
relation to artificial intelligence technology it is insignificant. Since artificial intelligence
technology lacks the basic attributes of emotionality, it cannot be
saddened, made to suffer, disappointed or tortured in any emotional manner; this aspect
of the above legal model therefore has no significance for artificial intelligence technology.
For instance, wounding a cow for no medical reason is considered abuse, and
in some countries it is considered an offense. However, no country considers
wounding a robot an offense or abuse.84
As a result, since the law prefers rationality over emotionality when evaluating legal
responsibility, and since the rationality of artificial intelligence technology is based on
human formal logical reasoning, for the law and in the legal respect artificial intelligence
technology is much closer to humans than to animals. Consequently, the legal
model relating to animals is unsuited to artificial intelligence technology for the
evaluation of legal responsibility.85 For artificial intelligence technology to be
subject to criminal law, the basic concepts of criminal liability should be
introduced. These basic concepts form the general requirements of criminal liability,
which must be fulfilled for the imposition of criminal liability on any individual,
whether human, corporate or artificial intelligence technology.

83
See, e.g., State v. Stewart, 624 N.W.2d 585 (Minn.2001); Wheatley v. Commonwealth, 26 Ky.L.
Rep. 436, 81 S.W. 687 (1904); State v. Follin, 263 Kan. 28, 947 P.2d 8 (1997).
84
Andrew G. Brooks and Ronald C. Arkin, Behavioral Overlays for Non-Verbal Communication
Expression on a Humanoid Robot, 22 AUTON. ROBOTS 55, 55–74 (2007).
85
LAWRENCE LESSIG, CODE AND OTHER LAWS OF CYBERSPACE (1999).
2 Basic Requirements of Modern Criminal Liability

Contents
2.1 Modern Criminal Liability ... 29
2.1.1 The Offense's Requirements (In Rem) ... 31
2.1.2 The Offender's Requirements (In Personam) ... 34
2.2 Legal Entities ... 39
2.2.1 Criminal Liability ... 40
2.2.2 Punishments ... 43

2.1 Modern Criminal Liability

Modern criminal law also deals with the question of personality, i.e. who is to be
considered an offender under modern criminal law. This is also a question of
applicability, as it relates to the possible applicability of criminal liability in the
personal aspect. When coming across the words "criminal" or "offender", most
people associate them with "evil". Criminals are considered socially evil. However,
the outlines of criminality include not only severe offenses but also other behaviors
which are not considered "evil" by most people.
For instance, some "white collar crimes" are not considered "evil" by most
people in most societies, but rather sophistication or surrender to complicated
bureaucracy. Most traffic offenses are not considered "evil" either. Some
offenses committed under pressures of duty are not considered "evil". For instance, a
physician performs complicated emergency surgery to save a patient's life. Under
the relevant circumstances he must hurry. The operation fails despite his
sincere efforts, and the patient dies. Post mortem, it is discovered that one of the
physician's acts may be considered negligent. Therefore, this physician is criminally
liable for negligent homicide. Nevertheless, the physician is still not considered
"evil" by most people in most societies.
In fact, even some of the most severe offenses may not be considered "evil"
under the right circumstances. For example, a physician witnesses the daily suffering of his


patient, who is dying. The patient asks the physician to unplug her from the
CPR machine so she can end her life honorably and end her suffering. Desperately
and reluctantly, he agrees and unplugs her. This is considered murder in most
legal systems.1 The physician is not likely to be considered "evil" in most modern
societies. The criminal law (and society) may object to euthanasia by including it
in homicide offenses, but that does not automatically make the offender
"evil".
More complicated situations in this context arise when the physician refuses, precisely
in order to continue seeing her suffer. He enjoys her suffering and even
celebrates it, and therefore he refuses to unplug her. In this case, the physician is not
an offender, as he has not committed any criminal offense. Nevertheless, most people
in most modern societies would consider such a physician evil. The ultimate
conclusion is that evil is not a measure of criminal liability. Sometimes the offender
is evil, and sometimes not. Sometimes evil persons are offenders, and
sometimes not.
Morality, regardless of its specific type (deontological, teleological, etc.), and
criminal liability are extremely different. Sometimes they are congruent, but this is
not necessary for the imposition of criminal liability. An offender is any person,
morally evil or not, upon whom criminal liability has been imposed. When the
legal requirements of the specific criminal offense are met by the individual's
behavior, criminal liability is imposed, and that individual is considered an offender.
Sometimes evil is involved, and sometimes not.
As a result, imposition of criminal liability requires examination of the
applicability of its basic requirements. Criminal law is considered the most
efficient social measure of social control. Society, as an abstract body or
entity, controls its individuals. There are very many measures by which society may
control its individuals, such as moral, economic and cultural ones, but one of the most
efficient measures is the law. Since only criminal law includes significant sanctions,
the criminal law is considered the most efficient measure for controlling individuals.
Controlling individuals through criminal law is legal social control.
Imposition of criminal liability is the application and implementation of legal
social control. Modern criminal liability does not depend on morality of any
kind, or on evil. It is imposed in a very organized, almost mathematical pattern.
There are two cumulative types of requirements for the imposition of criminal
liability. One type derives from the law (in rem) and the other from the offender (in
personam). Both types must be fulfilled in order to impose criminal liability, and no
additional conditions are required.
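The "almost mathematical" structure described here can be sketched as a plain conjunction of checks (Python, purely illustrative; the field names are expository assumptions of this sketch and not statutory terms, and the in personam fields anticipate in simplified form the factual and mental elements discussed later): liability follows only if every in rem requirement of the offense and every in personam requirement of the offender is satisfied, and nothing else is considered.

```python
# Illustrative sketch of the two cumulative sets of requirements described
# above: in rem (demanded of the offense as defined by law) and in personam
# (demanded of the offender). Field names are expository, not legal terms.

from dataclasses import dataclass

@dataclass
class OffenseDefinition:              # in rem requirements
    legality: bool
    conduct: bool
    culpability: bool
    personal_liability: bool

@dataclass
class OffenderFindings:               # in personam requirements (simplified)
    factual_element_proven: bool      # external/conduct element
    mental_element_proven: bool       # internal/culpability element

def criminal_liability(offense: OffenseDefinition, offender: OffenderFindings) -> bool:
    """Liability is imposed only if both cumulative sets are fully satisfied;
    no additional condition (morality, 'evil', etc.) is taken into account."""
    in_rem = all([offense.legality, offense.conduct,
                  offense.culpability, offense.personal_liability])
    in_personam = offender.factual_element_proven and offender.mental_element_proven
    return in_rem and in_personam

liable = criminal_liability(
    OffenseDefinition(legality=True, conduct=True, culpability=True, personal_liability=True),
    OffenderFindings(factual_element_proven=True, mental_element_proven=False),
)
print(liable)  # False: one in personam requirement is missing
```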

1
See, e.g., Steven J. Wolhandler, Voluntary Active Euthanasia for the Terminally Ill and the
Constitutional Right to Privacy, 69 CORNELL L. REV. 363 (1984); Harold L. Hirsh & Richard
E. Donovan, The Right to Die: Medico-Legal Implications of In Re Quinlan, 30 RUTGERS L. REV.
267 (1977); Susan M. Allan, No Code Orders v. Resuscitation: The Decision to Withhold Life-
Prolonging Treatment from the Terminally Ill, 26 WAYNE L. REV. 139 (1980).

2.1.1 The Offense’s Requirements (In Rem)

The first type includes four major requirements, which are demanded of
the criminal law itself. If the specific offense, as defined by law, fails to fulfill
even one of these requirements, no court may impose criminal liability upon
individuals for that specific offense. The four requirements are:

(a) legality;
(b) conduct;
(c) culpability; and
(d) personal liability.

Each requirement represents a fundamental principle in criminal law, i.e. the
principle of legality, the principle of conduct, the principle of culpability and the
principle of personal liability.
Legality is required of the specific offense for it to be considered legal (nullum
crimen sine lege). Legality, in fact, forms the rules for determining what is
"right" and what is criminally "wrong". For the specific offense to be considered
"legal" it must fulfill four cumulative conditions concerning:

(a) legitimate legal source;
(b) applicability in time;
(c) applicability in place; and
(d) legitimate interpretation.2

Only when all of these conditions are fulfilled is the specific offense considered
"legal", i.e. criminal liability may be imposed for its commission. The specific
offense must have a legitimate legal source which creates and defines it.3 For
instance, in most countries the ultimate legitimate legal source of offenses is
legislation, whereas case law is illegitimate. The reason is that legislation is enacted
by public representatives who are elected and represent the relevant society. Since
criminal law is legal social control, it should reflect the will of the society. This will
may be reflected through the society's representatives.
The specific offense must be applicable in time, so retroactive offenses are
illegal.4 For individuals to plan their moves, they must know about prohibitions
in advance, not retroactively. Only in rare cases may retroactive offenses be
considered legal. These are cases in which the new offense is for the benefit of the
defendant (e.g., a more lenient sanction, a new defense, etc.) or in which the offense
2
For the structure of the principle of legality in criminal law see GABRIEL HALLEVY, A MODERN
TREATISE ON THE PRINCIPLE OF LEGALITY IN CRIMINAL LAW 5–8 (2010).
3
Ibid, at pp. 20–46.
4
Ibid, at pp. 67–78.

embraces cogent international custom, jus cogens (e.g., genocide, crimes against
humanity, war crimes, etc.).5
The specific offense must be applicable in place, so extraterritorial offenses are
illegal.6 The criminal law is based on the authority of the sovereign. The
sovereign's authority is domestic; therefore the criminal law must be domestic as
well. For instance, the criminal law of France is applicable in France, but not in the
US. Thus, an extraterritorial offense is illegal (e.g., a French offense which is applicable
in the US). However, in rare cases the sovereign is authorized to protect itself or its
inhabitants abroad through extraterritorial offenses (e.g., foreign terrorists who
attack the US embassy in Kenya may be indicted in the US under US criminal
law, although they have never been to the US)7 or in cases of international
cooperation between states.
The specific offense must be well formulated and phrased. It must be general, for
it addresses an unspecified public (e.g., "John Doe is not allowed to do..." is an illegitimate
offense).8 It must be feasible, for legal social control must be realistic (e.g.,
"whoever does not fly shall be guilty..." is an illegitimate offense).9 It must also be
clear and precise, for individuals must know exactly what they are allowed to do
and what is prohibited.10 When all these conditions are met, the requirement of
legality is fulfilled and satisfied.
Conduct is required of the specific offense for it to be considered legal
(nullum crimen sine actu). Modern society has no interest in punishing mere
thoughts (cogitationis poenam nemo patitur). Effective legal social control is not
achieved through thought police, and it is not really enforceable. Modern
society prefers freedom of thought. Consequently, for the offense to be
considered legitimate it must include a requirement of conduct. The conduct is the
objective-external expression of the commission of the offense. If the specific
offense lacks that requirement, it is illegitimate. Throughout human legal history,
only tyrannical and totalitarian regimes have used offenses which lack conduct.
Offenses whose conduct requirement is satisfied by inaction are considered
status offenses that criminalize the status of the individual, not his conduct. For
example, offenses that punish the relatives of traitors merely because they are
relatives, regardless of their conduct, are considered status offenses.11 So are

5
See, e.g., in Transcript of Proceedings of Nuremberg Trials, 41 AMERICAN JOURNAL OF INTERNA-
TIONAL LAW 1–16 (1947).
6
Hallevy, supra note 2, at pp. 97–118.
7
Ibid, at pp. 118–129.
8
Ibid, at pp. 135–137.
9
Ibid, at pp. 137–138.
10
Ibid, at pp. 138–141.
11
See, e.g., sub-article 58(c)(1) of the Soviet Penal Code of 1926 as amended in 1950. This
sub-article provided that adult first-degree relatives of a convicted traitor are punished with
five years of exile.

offenses that punish individuals of a certain ethnic origin.12 Most modern countries
have abolished these offenses, and defendants indicted for such offenses are
acquitted by the court because status offenses contradict the principle of conduct
in criminal law.13 Only when conduct is required may the offense be considered
legal and legitimate, and criminal liability be imposed according to it.
Culpability is required from the specific offense for it to be considered legal
(nullum crimen sine culpa). Modern society has no interest in punishing
accidental, thoughtless, or random events, but only events that occur due to the
individual's culpability. A death does not necessarily require an offender. For
instance, one person passes near another exactly when the latter falls into a deep
hole in the ground. The other person dies, but the passer-by is not necessarily
culpable. For the imposition of criminal liability the specific offense must require
some level of culpability.
If not, the imposition of criminal liability would be no more than cruel maltreatment
of individuals by society. Culpability relates to the mental state of the
offender, and it reflects the subjective-internal expression of the commission of
the offense. The required mental state of the offender, which forms the requirement
of culpability, may be reflected both in the particular requirement of the specific
offense and in the general defenses. For instance, the specific offense of manslaughter
requires recklessness as its minimal level of culpability.14 However, if the
offender is insane (general defense of insanity),15 a minor (general defense of
minority),16 or acted in self-defense (general defense of self-defense),17 no criminal
liability is imposed. Such an offender is considered to lack adequate culpability.
Only when culpability is required may the offense be considered legal and
legitimate, and only then may criminal liability be imposed according to it.

12
See above supra note 5.
13
Scales v. United States, 367 U.S. 203, 81 S.Ct. 1469, 6 L.Ed.2d 782 (1961); Larsonneur, (1933)
24 Cr. App. R. 74, 97 J.P. 206, 149 L.T. 542; ANDREW ASHWORTH, PRINCIPLES OF CRIMINAL LAW 106–
107 (5th ed., 2006); Anderson v. State, 66 Okl.Cr. 291, 91 P.2d 794 (1939); State v. Asher, 50 Ark.
427, 8 S.W. 177 (1888); Peebles v. State, 101 Ga. 585, 28 S.E. 920 (1897); Howard v. State, 73 Ga.
App. 265, 36 S.E.2d 161 (1945); Childs v. State, 109 Nev. 1050, 864 P.2d 277 (1993).
14
See, e.g., Smith v. State, 83 Ala. 26, 3 So. 551 (1888); People v. Brubaker, 53 Cal.2d 37, 346
P.2d 8 (1959); State v. Barker, 128 W.Va. 744, 38 S.E.2d 346 (1946).
15
See, e.g., Commonwealth v. Herd, 413 Mass. 834, 604 N.E.2d 1294 (1992); State v. Curry,
45 Ohio St.3d 109, 543 N.E.2d 1228 (1989); State v. Barrett, 768 A.2d 929 (R.I.2001); State
v. Lockhart, 208 W.Va. 622, 542 S.E.2d 443 (2000).
16
See, e.g., Beason v. State, 96 Miss. 165, 50 So. 488 (1909); State v. Nickelson, 45 La.Ann. 1172,
14 So. 134 (1893); Commonwealth v. Mead, 92 Mass. 398 (1865); Willet v. Commonwealth,
76 Ky. 230 (1877); Scott v. State, 71 Tex.Crim.R. 41, 158 S.W. 814 (1913); Price v. State, 50 Tex.
Crim.R. 71, 94 S.W. 901 (1906).
17
See, e.g., Elk v. United States, 177 U.S. 529, 20 S.Ct. 729, 44 L.Ed. 874 (1900); State v. Bowen,
118 Kan. 31, 234 P. 46 (1925); Hughes v. Commonwealth, 19 Ky.L.R. 497, 41 S.W. 294 (1897);
People v. Cherry, 307 N.Y. 308, 121 N.E.2d 238 (1954); State v. Hooker, 17 Vt. 658 (1845);
Commonwealth v. French, 531 Pa. 42, 611 A.2d 175 (1992).

Personal liability is required from the specific offense for it to be considered
legal.18 Modern society has no interest in punishing one person for the behavior
of another, regardless of their specific relationship. Effective legal social
control cannot be achieved unless all individuals are liable for their own behavior.
If a person knows that legal liability for his own behavior will not be imposed upon
him, he has no incentive to refrain from committing offenses or any other anti-
social behavior. Only when a person knows that no one else is liable for his own
behavior can legal social control be effective.
Punishment may deter individuals only if they may be punished personally.
Personal liability guarantees that each offender is criminally liable and
punished only for his own behavior. Thus, when several individuals collaborate
through complicity to commit an offense, each of the accomplices is criminally
liable only for his own part. The accessory is criminally liable for
accessoryship, whereas the joint-perpetrator is criminally liable for joint-
perpetration. The variety of types of criminal liability combined with the principle
of personal liability formed the general forms of complicity in criminal law
(e.g., joint-perpetration, perpetration-through-another, conspiracy, incitement,
accessoryship). Only when personal liability is required may the offense be considered
legal and legitimate, and only then may criminal liability be imposed according to it.
When all four basic requirements of legality, conduct, culpability, and personal
liability are met, the specific offense, which embodies them, is considered
legitimate and legal. Only then may society impose criminal liability upon
individuals for the commission of these offenses. However, for the actual imposition of
criminal liability the legitimacy and legality of the specific offense are crucial, but
not sufficient. The particular requirements of the specific offense must be fulfilled
by the offender. These requirements are embodied in the definition of the specific
offense.

2.1.2 The Offender’s Requirements (In Personam)

Each specific offense which fulfils the requirements imposed on it determines the
requirements needed for the imposition of criminal liability for that specific offense.
Although different specific offenses impose different requirements, the formal logic
behind all offenses and their structure is similar. The common formal logic and
structure are significant attributes of modern criminal liability. In general, these
attributes may be characterized as posing the minimal requirement needed to
impose criminal liability. This means that the specific offense determines only the
lower threshold for the imposition of criminal liability.

18
GABRIEL HALLEVY, THE MATRIX OF DERIVATIVE CRIMINAL LIABILITY 1–61 (2012).

Thus, the offender is required to fulfill at least the requirements of the specific
offense. Any specific offense has two general requirements:

(a) the external (factual) element requirement (actus reus); and
(b) the (internal) mental element requirement (mens rea).

The modern structure of the factual element requirement is common to most
modern legal systems. This structure applies the fundamental principle of conduct
of criminal liability, and it is identical in relation to all types of offenses,
regardless of their mental element requirement. The factual element requirement is the
broad objective-external basis of criminal liability (nullum crimen sine actu),19 and
it is designed to answer four main questions about the factual aspects of the
delinquent event:

(a) “What has happened?”;
(b) “Who has done it?”;
(c) “When has it been done?”; and
(d) “Where has it been done?”

The first question refers to the substantive facts of the event (what has hap-
pened). The second question relates to the identity of the offender. The third
question addresses the time aspect. The fourth question specifies the location of
the event. In some offenses these questions are answered directly within the
definition of the offense. In other offenses some of the questions are answered
through the applicability of the principle of legality in criminal law.20
For instance, the offense “whoever kills another person. . .” does not relate
directly to questions (b), (c) and (d), but the questions are answered through the
applicability of the principle of legality. Because the offense is likely to be
general,21 the answer to the question “who has done it?” is any person who is
legally competent. As this type of offense may not be applicable retroactively,22 the
answer to the question “when has it been done?” is from the time the offense was
validated onward. And because this type of offense may not be applicable extrater-
ritorially under general expansions,23 the answer for the question “where has it been
done?” is within the territorial jurisdiction of the sovereign under general
expansions.

19
Dugdale, (1853) 1 El. & Bl. 435, 118 Eng. Rep. 499, 500: “. . .the mere intent cannot constitute a
misdemeanour when unaccompanied with any act”; Ex parte Smith, 135 Mo. 223, 36 S.W. 628
(1896); Proctor v. State, 15 Okl.Cr. 338, 176 P. 771 (1918); State v. Labato, 7 N.J. 137, 80 A.2d
617 (1951); Lambert v. State, 374 P.2d 783 (Okla.Crim.App.1962); In re Leroy, 285 Md. 508, 403
A.2d 1226 (1979).
20
For the principle of legality in criminal law see Hallevy, supra note 2.
21
Ibid at pp. 135–137.
22
Ibid at pp. 49–80.
23
Ibid at pp. 81–132.

However, the answer to the question “what has happened?” must be incorporated
directly into the definition of the offense. This question addresses the core of the
offense, and it cannot be answered through the principle of legality. This approach
is the basis of the modern structure of the factual element requirement, which
consists of three main components: conduct, circumstances, and results. The con-
duct is a mandatory component, whereas circumstances and results are not. Thus, if
the specific offense is defined as having no conduct requirement, it is not legal, and
the courts may not convict individuals based on such a charge and no criminal
liability may be imposed on anyone accordingly. Thus, the conduct component is at
the heart of the answer to the question “what has happened?”.
Status offenses, in which the conduct component is absent, are considered
illegal, and in general they are abolished when discovered.24 But the absence of
circumstances or results in the definition of an offense does not invalidate the
offense.25 These components are aimed at meeting the factual element requirement
with greater accuracy than by conduct alone. Thus, there are four possible formulas
that can satisfy the factual element requirement:

(a) conduct;
(b) conduct + circumstances;
(c) conduct + results; and
(d) conduct + circumstances + results.

For instance, the homicide offenses are very similar in their factual element
requirement, which may be expressed by the formula: “whoever causes the death of
another person, . . ..”. In this typical formula the word “causes” functions as
conduct, the word “person” functions as circumstances and the word “death”
functions as results. This offense is, consequently, considered to require conduct,
circumstances, and results within its factual element requirement. Most
offenses’ definitions contain only the factual element requirement as the mental
element requirement may be easily deduced from the general provisions of the
criminal law.
The structure of the mental element requirement applies the fundamental princi-
ple of culpability in criminal law (nullum crimen sine culpa). The principle of
culpability has two main aspects: positive and negative. The positive aspect (what
should be in the offender’s mind in order to impose criminal liability) relates to the
mental element, whereas the negative aspect (what should not be in the offender’s
mind in order to impose criminal liability) relates to the general defenses.26
For instance, imposition of criminal liability for wounding another person
requires recklessness as mental element, but it also requires that the offender not

24
See, e.g., in the United States, Robinson v. California, 370 U.S. 660, 82 S.Ct. 1417, 8 L.Ed.2d
758 (1962).
25
GLANVILLE WILLIAMS, CRIMINAL LAW: THE GENERAL PART sec. 11 (2nd ed., 1961).
26
ANDREW ASHWORTH, PRINCIPLES OF CRIMINAL LAW 157–158, 202 (5th ed., 2006).

be insane. Recklessness is part of the positive aspect of culpability, and the general
defense of insanity is part of the negative aspect. The positive aspect of culpability
in criminal law has to do with the involvement of the mental processes in the
commission of the offense. In this context, it exhibits two important aspects:

(a) cognition; and
(b) volition.

Cognition is the individual's awareness of the factual reality. In some countries,
awareness is called “knowledge,” but in this context there is no substantive difference
between awareness and knowledge, which may relate to data from the present
or the past, but not from the future.27 A person may assess or predict what will
happen in the future, but cannot know or be aware of it. Prophecy skills are not
required for criminal liability. Cognition in criminal law refers to a binary situation:
the offender is either aware of fact X or not. Partial awareness has not been accepted
in criminal law, and it is classified as unawareness.
Volition has to do with the individual's will, and it is not subject to factual
reality. An individual may want unrealistic events to occur or to have occurred, in
the past, the present, and the future. Volition is not binary because there are different
levels of will. The three basic levels are positive (P wants X), neutral (P is
indifferent toward X), and negative (P does not want X). There may also be
intermediate levels of volition. For example, between the neutral and negative
levels there may be the rashness level (P does not want X, but takes an unreasonable
risk towards it). If P absolutely had not wanted X, he would not have taken
any unreasonable risk towards it.
Consider, for example, a car driver who is driving behind a very slow truck. The
driver is in a hurry, but the truck is very slow. The driver decides to overtake the
truck; he does so by crossing a continuous line and hits a motorcycle rider who was
passing by. The collision causes the rider's death. The driver did not purposely want
to cause the rider's death, but taking the unreasonable risk may prove an
intermediate level of volition. If the driver absolutely had not wanted to cause
anyone's death, he would not have taken the unreasonable risk involved in the
dangerous overtaking maneuver.
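The contrast between binary cognition and graded volition may be illustrated schematically. The following sketch, written in Python, is purely illustrative and forms no part of the legal doctrine itself; the names of the levels and the example chosen (the driver displaying the rashness level) are assumptions made solely for demonstration.

```python
from enum import IntEnum

class Volition(IntEnum):
    """Illustrative ordering of volition levels, from negative to positive."""
    NEGATIVE = 0      # P does not want X
    RASHNESS = 1      # P does not want X, but takes an unreasonable risk towards it
    INDIFFERENCE = 2  # P is neutral toward X
    POSITIVE = 3      # P wants X

def cognition(aware: bool) -> bool:
    """Cognition is binary: partial awareness is classified as unawareness."""
    return bool(aware)

# The driver example: the driver is aware of the factual reality (cognition
# satisfied) and, by taking the unreasonable risk, displays the rashness level.
driver = Volition.RASHNESS
print(cognition(True), driver > Volition.NEGATIVE)  # True True
```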
Both cognitive and volitive aspects are combined to form the mental element
requirement as derived from the positive aspect of culpability in criminal law. In
most modern countries, there are three main forms of mental element, which are
differentiated based on the cognitive aspect. The three forms represent three layers
of positive culpability and they are:

(a) general intent;
(b) negligence; and
(c) strict liability.

27
G.R. Sullivan, Knowledge, Belief, and Culpability, CRIMINAL LAW THEORY – DOCTRINES OF THE
GENERAL PART 207, 214 (Stephen Shute and A.P. Simester eds., 2005).

The highest layer of the mental element is that of general intent, which requires
full cognition. The offender is required to be fully aware of the factual reality. This
form involves examination of the offender’s subjective mind. Negligence is cogni-
tive omission, and the offender is not required to be aware of the factual element,
although based on objective characteristics he could and should have had awareness
of it. Strict liability is the lowest layer of the mental element; it replaces what was
formerly known as absolute liability. Strict liability is a relative legal presumption
of negligence based on the factual situation alone, which may be refuted by the
offender.
Cognition relates to the factual reality, as noted above. The relevant factual reality
in criminal law is that which is reflected by the factual element components. From
the perpetrator’s point of view, only the conduct and circumstance components of
the factual element exist in the present. The results components occur in the future.
Because cognition is restricted to the present and to the past, it can relate only to
conduct and circumstances.
Although results occur in the future, the possibility of their occurrence ensuing
from the relevant conduct exists in the present, so that cognition can relate not only
to conduct and circumstances, but also to the possibility of the occurrence of the
results. For example, in the case of a homicide, A aims a gun at B and pulls the
trigger. At this point he is aware of his conduct, of the existing circumstances, and
of the possibility of B’s death as a result of his conduct.
Volition is considered immaterial for both negligence and strict liability, and
may be added only to the mental element requirement of general intent, which
embraces all three basic levels of will. Because in most legal systems the default
requirement for the mental element is general intent, negligence and strict liability
offenses must specify explicitly the relevant requirement. The explicit requirement
may be listed as part of the definition of the offense or included in the explicit legal
tradition of interpretation.
If no explicit requirement of this type is mentioned, the offense is classified as a
general intent offense, which is the default requirement. The relevant requirement
may be met not only by the same form of mental element, but also by a higher level
form. Thus, the mental element requirement of the offense is the minimal level of
mental element needed to impose criminal liability.28 A lower level is insufficient
for imposing criminal liability for the offense.
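This threshold logic may be illustrated schematically. The following sketch is illustrative only, assuming the three-layer hierarchy described above (strict liability below negligence, negligence below general intent); it does not reproduce the positive law of any particular legal system.

```python
from enum import IntEnum

class MentalElement(IntEnum):
    """Illustrative hierarchy: higher values denote higher layers of culpability."""
    STRICT_LIABILITY = 1
    NEGLIGENCE = 2
    GENERAL_INTENT = 3

def satisfies(required: MentalElement, proven: MentalElement) -> bool:
    """The requirement is a lower threshold: met by the same form or a higher one."""
    return proven >= required

# A negligence offense is satisfied by proof of general intent, but a
# general intent offense is not satisfied by proof of mere negligence.
print(satisfies(MentalElement.NEGLIGENCE, MentalElement.GENERAL_INTENT))  # True
print(satisfies(MentalElement.GENERAL_INTENT, MentalElement.NEGLIGENCE))  # False
```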
According to the modern structure of mental element requirement, each specific
offense embodies the minimal requirements for the imposition of criminal liability,

28
See, e.g., article 2.02(5) of THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT
AND EXPLANATORY NOTES 22 (1962, 1985), which provides:

When the law provides that negligence suffices to establish an element of an offense, such
element also is established if a person acts purposely, knowingly or recklessly. When
recklessness suffices to establish an element, such element also is established if a person
acts purposely or knowingly. When acting knowingly suffices to establish an element, such
element also is established if a person acts purposely.

and the fulfillment of these requirements is adequate for the imposition of criminal
liability. No additional psychological meanings are required. Thus, any individual
who fulfils the minimal requirements of the relevant offense is considered to be an
offender, and criminal liability may be imposed upon him.
The offender under modern criminal law is not required to be immoral or evil,
but only to fulfill all requirements of the offense. In this way, the imposition of
criminal liability is highly technical and rational. This legal situation has two main
aspects: structural and substantive. For instance, if the mental element of the
specific offense requires only “awareness”, no other component of the mental element
is required (structural aspect), and the required “awareness” is defined by criminal
law regardless of its meaning in psychology, philosophy, theology, etc. (substantive
aspect).
However, it cannot be denied that the structure of criminal liability has been
designed for humans and around human capabilities, not for other creatures or
around their capabilities. The mental element requirement relies on the
human spirit, soul, and mind. The inevitable question at this point is whether
artificial intelligence technologies can be examined through human standards of
spirit, soul, and mind. The deeper, though not legal, question is how criminal liability,
based upon these insights, can be imposed upon spiritless and soulless entities.
It should be noted that although the insights of criminal liability rely on the human
spirit and soul, the imposition of criminal liability itself is not dependent upon
these terms of deep psychological meaning. If an offender fulfills both the factual and
mental element requirements of the specific offense, criminal liability may be
imposed, with or without spirit or soul.
In fact, this understanding is not very new and perhaps not very innovative in the
twenty-first century. It was preceded by the same understanding in the
seventeenth century in relation to corporations. Although modern artificial
intelligence technology had not yet been invented in the seventeenth century, there were
other non-human creatures which committed offenses, and it was necessary to
subject them to criminal law. These legal creatures had neither spirit nor soul, but
criminal liability could be imposed upon them. This type of imposition of
criminal liability may serve as a legal model for the imposition of criminal liability
on artificial intelligence systems.

2.2 Legal Entities

The full meaning of the imposition of criminal liability upon any offender comprises
both the responsibility (criminal liability) and its consequences (punishment).
Imposing criminal liability without the capability of sentencing may often be
meaningless in social terms and in legal social control terms. These two aspects of
imposition of criminal liability upon legal entities are discussed below. The most
available legal entities in this context are corporations.

2.2.1 Criminal Liability

Potential offenders have included corporations since the seventeenth century.29
Although corporations were already recognized by Roman law, the evolution of
the modern corporation began in the fourteenth century. English law required
permission from the King or Parliament to recognize a specific corporation as
legal.30 The early corporations of the Middle Ages were mostly ecclesiastical bodies,
which functioned in the organization of church property. From these legal entities
evolved associations, commercial guilds, and professional guilds, which formed the
basis for the evolution of the commercial corporation. During the sixteenth and
seventeenth centuries corporations were also dominant as hospitals and
universities.31
Alongside these corporations evolved commercial corporations, as a solution for
the division of ownership among several owners of a business.32 When several people
established a new business, ownership could be divided between them
through the establishment of a corporation and the division of shares and stocks
among the “shareholders”. This pattern of ownership division was
conceptualized as efficient and as minimizing the owners' risks in relation to
the financial problems of the business. Consequently, corporations became very
common.33
The growing use of corporations during the first industrial revolution led to
their identification with both the fruits of the revolution and the misery of the
lower-class people and workers created by it. Corporations were regarded as
responsible for the poverty of the workers, who shared no profits, and for the continuing
abuse of children working for the corporations. Public and social pressure
increased as the revolution progressed. As a result, legislators considered themselves
bound to restrict corporations' activity. By the beginning of the eighteenth
century the British Parliament had enacted statutes against the abuse of power by
corporations.
This was ironic, as the British Parliament sought to deal with the very power that
had been given to corporations by the state for the sake of social welfare.34 For the
statutes to be effective they

29
William S. Laufer, Corporate Bodies and Guilty Minds, 43 EMORY L. J. 647 (1994); Kathleen
F. Brickey, Corporate Criminal Accountability: A Brief History and an Observation, 60 WASH.
U. L. Q. 393 (1983).
30
WILLIAM SEARLE HOLDSWORTH, A HISTORY OF ENGLISH LAW 475–476 (1923).
31
William Searle Holdsworth, English Corporation Law in the 16th and 17th Centuries, 31 YALE
L. J. 382 (1922).
32
WILLIAM ROBERT SCOTT, THE CONSTITUTION AND FINANCE OF ENGLISH, SCOTISH AND IRISH JOINT-
STOCK COMPANIES TO 1720 462 (1912).
33
BISHOP CARLETON HUNT, THE DEVELOPMENT OF THE BUSINESS CORPORATION IN ENGLAND 1800–1867
6 (1963).
34
See, e.g., 6 Geo. I, c.18 (1719).

included criminal offenses. The relevant offense used was public nuisance.35 This
legislative trend deepened as the revolution progressed, and by the
nineteenth century most developed countries already had extensive legislation
concerning corporations in various contexts. This legislation also included criminal
offenses in order to be effective. The conceptual question which arose
was how criminal liability could be imposed upon corporations.
Criminal liability requires a factual element, whereas corporations possess no
physical body. Criminal liability also requires a mental element, whereas
corporations have no mind, brain, spirit, or soul.36 Some countries in Europe refused
to impose criminal liability upon non-human creatures, and revived the Roman rule
that corporations are not subject to criminal liability (societas delinquere non
potest). This approach was very problematic and created legal shelters for
offenders.
For instance, when an individual does not pay his taxes, he is criminally liable,
but when this individual is a corporation, it is exempt. Consequently, there is an
incentive to operate through corporations and evade tax payments. These countries
eventually subjected corporations to criminal law, but not until the twentieth
century. The Anglo-American legal tradition, however, preferred to accept the idea
of corporate criminal liability due to its vast social advantages and benefits.
Thus, in 1635 a corporation was convicted and criminal liability was imposed
upon it for the first time.37 This was a relatively primitive structure of
imposition of criminal liability, for it relied on vicarious liability. However, this type
of liability enabled the courts to impose criminal liability upon corporations
separately from the criminal liability of any owner, worker, or shareholder of the
corporation. This structure remained relevant in both the eighteenth and nineteenth
centuries.38 The major disadvantage of this criminal liability structure based
on vicarious liability was that it required valid vicarious relations between the
corporation and another entity, which in most cases happened to be human,
although it could have been another corporation.39

35
New York & G.L.R. Co. v. State, 50 N.J.L. 303, 13 A. 1 (1888); People v. Clark, 8 N.Y.Cr.
169, 14 N.Y.S. 642 (1891); State v. Great Works Mill. & Mfg. Co., 20 Me. 41, 37 Am.Dec.38
(1841); Commonwealth v. Proprietors of New Bedford Bridge, 68 Mass. 339 (1854); Common-
wealth v. New York Cent. & H. River R. Co., 206 Mass. 417, 92 N.E. 766 (1910).
36
John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry into the
Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981).
37
Langforth Bridge, (1635) Cro. Car. 365, 79 Eng. Rep. 919.
38
Clifton (Inhabitants), (1794) 5 T.R. 498, 101 Eng. Rep. 280; Great Broughton (Inhabitants),
(1771) 5 Burr. 2700, 98 Eng. Rep. 418; Stratford-upon-Avon Corporation, (1811) 14 East 348, 104
Eng. Rep. 636; Liverpool (Mayor), (1802) 3 East 82, 102 Eng. Rep. 529; Saintiff, (1705) 6 Mod.
255, 87 Eng. Rep. 1002.
39
Severn and Wye Railway Co., (1819) 2 B. & Ald. 646, 106 Eng. Rep. 501; Birmingham, &c.,
Railway Co., (1842) 3 Q. B. 223, 114 Eng. Rep. 492; New York Cent. & H.R.R. v. United States,
212 U.S. 481, 29 S.Ct. 304, 53 L.Ed. 613 (1909); United States v. Thompson-Powell Drilling Co.,
196 F.Supp. 571 (N.D.Tex.1961); United States v. Dye Construction Co., 510 F.2d 78 (10th

Consequently, when the human entity acted without permission (ultra vires), the
corporation was exempt. For the corporation to be exempt it was enough to include
a general provision in the corporation's documents prohibiting the commission of
any criminal offense on behalf of the corporation.40 As a result, the model of
corporate criminal liability had to be replaced, as happened in
the late nineteenth and early twentieth centuries in Anglo-American legal
systems.41 The new model was based on the identity theory.
In some types of cases the criminal liability of a corporation derives from its
organs, and in other types its criminal liability is independent. When the criminal
offense requires an omission (e.g., not paying taxes, not fulfilling legal
requirements, not observing workers' rights, etc.), and the duty to act is the
corporation's, the corporation is criminally liable independently, regardless of any
criminal liability of any other entity, human or not. When the criminal offense
requires an act, its organs' acts are attributed to it if committed on its behalf, with
or without permission.42 The same structure works for the mental element, whether for
general intent, negligence, or strict liability.43
As a result, the criminal liability of the corporation is direct, not vicarious or
indirect.44 If all requirements of the specific offense are met by the corporation, it is
indicted, regardless of any proceedings against any human entity. If convicted, the
corporation is punished separately from any human entity. Punishments imposed on
corporations are considered no less effective than those imposed on humans. However,
the main significance of the modern legal structure of corporate criminal liability is
conceptual.
Since the seventeenth century criminal liability has not been unique to humans. Other,
non-human entities are also subject to criminal law, and this arrangement works most
efficiently. Indeed, some adjustments were necessary for this legal structure to be
applicable, but eventually non-human corporations became subject to criminal law. For
any modern society this seems natural, and so it should be. If the first barrier was

Cir.1975); United States v. Carter, 311 F.2d 934 (6th Cir.1963); State v. I. & M. Amusements, Inc.,
10 Ohio App.2d 153, 226 N.E.2d 567 (1966).
40
United States v. Alaska Packers’ Association, 1 Alaska 217 (1901).
41
United States v. John Kelso Co., 86 F. 304 (Cal.1898); Lennard’s Carrying Co. Ltd. v. Asiatic
Petroleum Co. Ltd., [1915] A.C. 705.
42
Director of Public Prosecutions v. Kent and Sussex Contractors Ltd., [1944] K.B. 146, [1944]
1 All E.R. 119; I.C.R. Haulage Ltd., [1944] K.B. 551, [1944] 1 All E.R. 691; Seaboard Offshore
Ltd. v. Secretary of State for Transport, [1994] 2 All E.R. 99, [1994] 1 W.L.R. 541, [1994]
1 Lloyd’s Rep. 593.
43
Granite Construction Co. v. Superior Court, 149 Cal.App.3d 465, 197 Cal.Rptr. 3 (1983);
Commonwealth v. Fortner L.P. Gas Co., 610 S.W.2d 941 (Ky.App.1980); Commonwealth
v. McIlwain School Bus Lines, Inc., 283 Pa.Super. 1, 423 A.2d 413 (1980); Gerhard O. W.
Mueller, Mens Rea and the Corporation – A Study of the Model Penal Code Position on Corporate
Criminal Liability, 19 U. PITT. L. REV. 21 (1957).
44
Hartson v. People, 125 Colo. 1, 240 P.2d 907 (1951); State v. Pincus, 41 N.J.Super. 454, 125
A.2d 420 (1956); People v. Sakow, 45 N.Y.2d 131, 408 N.Y.S.2d 27, 379 N.E.2d 1157 (1978).

crossed in the seventeenth century, the road for crossing another barrier may be
open for the imposition of criminal liability upon artificial intelligence systems.

2.2.2 Punishments

Given that sentencing considerations are relevant for corporations, the question is
how society can impose human punishments upon them. For instance, how can
society impose imprisonment, a fine, or capital punishment upon a corporation? This
requires a legal technique of conversion from human penalties to corporate
penalties. In fact, not only has criminal liability been imposed upon corporations
for centuries, but corporations have been sentenced as well, and not only to fines.
Corporations are punished by various punishments, including imprisonment. It
should be noted that the corporation is punished separately from its human officers
(directors, managers, employees, etc.), exactly as criminal liability is imposed upon
it separately from the criminal liability of the human officers, if any. There is no
debate over the question whether corporations should be punished by various
punishments, such as imprisonment; the question concerns only the actual
way to do so.45
To answer the question of “how”, a general legal technique of conversion is
necessary. This general technique contains three major stages, as follows:

(a) The general punishment itself (e.g., imprisonment, fine, probation, death,
etc.) is analyzed as to its roots of meaning;
(b) These roots are searched for in the corporation; and
(c) The punishment is adjusted to these roots in the corporation.

Consider, for instance, the imposition of imprisonment on corporations. First,
imprisonment is analyzed as to its roots: the deprivation of the individual's freedom.
Second, the court searches for the meaning of freedom for the corporation. When this
meaning is understood, the third and final stage becomes relevant. Accordingly, the
court imposes a punishment which reflects the deprivation of freedom of the particular
corporation.
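The three stages may be summarized schematically. The following sketch is purely illustrative; the mapping from punishments to their “roots” and from roots to corporate equivalents merely paraphrases the reasoning discussed in this section and is an assumption for demonstration, not a codified rule.

```python
# Illustrative sketch of the three-stage conversion technique (assumed mappings).

PUNISHMENT_ROOTS = {
    "imprisonment": "deprivation of liberty",
    "fine": "deprivation of property",
}

CORPORATE_EQUIVALENTS = {
    "deprivation of liberty": "restraint of the corporation's assets and activity",
    "deprivation of property": "collection of the fine from corporate assets",
}

def convert(punishment: str) -> str:
    root = PUNISHMENT_ROOTS[punishment]              # stage (a): analyze the root of the punishment
    equivalent = CORPORATE_EQUIVALENTS[root]         # stage (b): locate that root in the corporation
    return f"{punishment} -> {root} -> {equivalent}"  # stage (c): adjust the punishment accordingly

print(convert("imprisonment"))
```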
This is the way the general legal technique of conversion works in corporate
sentencing. It sometimes requires the court to be creative as to the adjustments
needed to make the punishment applicable to corporations. However,
the general framework is clear, workable, and actually implemented. This
general framework is applicable to all types of punishments imposed on all types

45
Stuart Field and Nico Jorg, Corporate Liability and Manslaughter: Should We Be Going
Dutch?, [1991] Crim. L.R. 156 (1991).

of corporations.46 A dominant example is the case of Allegheny Bottling Company.47
In this case the court found the defendant, a corporation, guilty of price-fixing
(antitrust). It was agreed that under the relevant circumstances, if the defendant
had been human, the appropriate punishment would have been imprisonment for a
certain term. The question was that of the applicability of imprisonment to
corporations. As a general principle the court declared that it “does not expect a
corporation to have consciousness, but it does expect it to be ethical and abide by
the law”.48 The court saw no substantive difference between humans and
corporations in this matter and added that “[t]his court will deal with this company
no less severely than it will deal with any individual who similarly disregards the
law”.49
So much for the basic principle of equalizing the punishments of human and
corporate defendants.50 In this case the corporation was sentenced to 3 years of
imprisonment and a fine of 1 million dollars, and it was placed on probation for a
period of 3 years. Consequently, the court had to discuss the idea of corporate
imprisonment, and it did so through the above three stages.
First, the court asked what the general meanings of imprisonment are. It
accepted the definitions of imprisonment as “constraint of a person either by force
or by such other coercion as restrains him within limits against his will” and as
“forcible restraint of a person against his will”. The court's conclusion was simple
and clear: “[t]he key to corporate imprisonment is this: imprisonment simply means
restraint” and “restraint, that is, a deprivation of liberty”. The court's conclusion was
strengthened by several statutory provisions and by case law as well. Consequently,
“[t]here is imprisonment when a person is under house arrest, for example,
where a person has an electronic device which sends an alarm if the person leaves
his own house”.
This ended the first stage. In the second stage, the court searched for the
meaning of this punishment for corporations. The court concluded that “[c]orporate
imprisonment requires only that the Court restrain or immobilize the corporation”.51

46
Gerard E. Lynch, The Role of Criminal Law in Policing Corporate Misconduct, 60 LAW &
CONTEMP. PROBS. 23 (1997); Richard Gruner, To Let the Punishment Fit the Organization:
Sanctioning Corporate Offenders Through Corporate Probation, 16 AM. J. CRIM. L. 1 (1988);
Steven Walt and William S. Laufer, Why Personhood Doesn’t Matter: Corporate Criminal
Liability and Sanctions, 18 AM. J. CRIM. L. 263 (1991).
47
United States v. Allegheny Bottling Company, 695 F.Supp. 856 (1988).
48
Ibid, at p. 858.
49
Ibid.
50
John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry Into the
Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981); STEVEN BOX, POWER, CRIME AND
MYSTIFICATION 16–79 (1983); Brent Fisse and John Braithwaite, The Allocation of Responsibility
for Corporate Crime: Individualism, Collectivism and Accountability, 11 SYDNEY L. REV.
468 (1988).
51
Allegheny Bottling Company case, supra note 48, at p. 861.

At this point the court could implement the imprisonment penalty on
corporations accordingly. Thus, in its third and final stage, the
court made imprisonment applicable to corporations, and actually implemented
corporate imprisonment as follows:

Such restraint of individuals is accomplished by, for example, placing them in the custody
of the United States Marshal. Likewise, corporate imprisonment can be accomplished
by simply placing the corporation in the custody of the United States Marshal. The
United States Marshal would restrain the corporation by seizing the corporation’s
physical assets or part of the assets or restricting its actions or liberty in a particular
manner. When this sentence was contemplated, the United States Marshal for the
Eastern District of Virginia, Roger Ray, was contacted. When asked if he could
imprison Allegheny Pepsi, he stated that he could. He stated that he restrained
corporations regularly for bankruptcy court. He stated that he could close the physical
plant itself and guard it. He further stated that he could allow employees to come and go
and limit certain actions or sales if that is what the Court imposes.
Richard Lovelace said some three hundred years ago, ‘stone walls do not a prison make,
nor iron bars a cage.’ It is certainly true that we erect our own walls or barriers that
restrain ourselves. Any person may be imprisoned if capable of being restrained in
some fashion or in some way, regardless of who imposes it. Who am I to say that
imprisonment is impossible when the keeper indicates that it can physically be done?
Obviously, one can restrain a corporation. If so, why should it be more privileged than
an individual citizen? There is no reason, and accordingly, a corporation should not be
more privileged.
Cases in the past have assumed that corporations cannot be imprisoned, without any cited
authority for that proposition. . . . This Court, however, has been unable to find any case
which actually held that corporate imprisonment is illegal, unconstitutional or impossi-
ble. Considerable confusion regarding the ability of courts to order a corporation
imprisoned has been caused by courts mistakenly thinking that imprisonment necessar-
ily involves incarceration in jail. . . . But since imprisonment of a corporation does not
necessarily involve incarceration, there is no reason to continue the assumption, which
has lingered in the legal system unexamined and without support, that a corporation
cannot be imprisoned. Since the Marshal can restrain the corporation’s liberty and has
done so in bankruptcy cases, there is no reason that he cannot do so in this case as he
himself has so stated prior to the imposition of this sentence.52

Thus, imprisonment is actually applied not only to human offenders, but also to
corporate offenders. Through this approach, not only imprisonment is applicable
to corporations, but any other type of penalty as well, even if originally designed for
human offenders. Imprisonment is a distinctly human penalty, whereas a fine may
easily be collected from corporations (in the same way as taxes are collected, for
example). Nevertheless, imprisonment is actually applied and imposed upon corporations.

52
Ibid, at p. 861.
3 External Element Involving Artificial Intelligence Systems

Contents
3.1 The General Structure of the External Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.1.1 Independent Offenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.1.2 Derivative Criminal Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.2 Commission of External Element Components by Artificial Intelligence Technology . . . 60
3.2.1 Conduct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.2 Circumstances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2.3 Results and Causation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.1 The General Structure of the External Element

The external element of criminal liability is reflected by the factual element
requirement of the offense. The general structure of the factual element requirement is
consolidated for all types of criminal liability. Nevertheless, it may be more
convenient to divide the discussion into the general structure within independent
offenses and within derivative criminal liability.

3.1.1 Independent Offenses

The structure of the factual element requirement is common to most modern legal
systems. This structure applies the fundamental principle of conduct of criminal
liability, and this structure is identical in relation to all types of offenses, regardless
of their mental element requirement. The factual element requirement is the broad
objective-external basis of criminal liability (nullum crimen sine actu),1 and it is
designed to answer four main factual aspects of the delinquent event:

(a) The general description of the occurrence (“What has happened?”);
(b) The offender's identity (“Who has done it?”);
(c) The event's time (“When has it been done?”); and
(d) The event's location (“Where has it been done?”).

Together, these aspects form the basic description of the commission of any
offense as required for the imposition of criminal liability. That basic and general
description may be stated in the general formula “Something (1) has been done by
someone (2) sometime (3) somewhere (4)”. Of course, there may be other factual
aspects of the event, but modern criminal law is content with these four aspects.
The first factual aspect refers to the substantive facts of the event (what has
happened). The second factual aspect relates to the identity of the offender. The
third factual aspect addresses the time aspect. The fourth factual aspect specifies the
location of the event. In some offenses these factual aspects are answered directly
within the definition of the offense. In other offenses some of the questions are
answered through the applicability of the principle of legality in criminal law.2
For instance, the offense “whoever kills another person. . .” does not relate
directly to factual aspects (b), (c) and (d), but these factual aspects are answered
through the applicability of the principle of legality. Because the offense is likely to
be general,3 the answer to the factual aspect towards the offender’s identity is any
person who is legally competent. As this type of offense may not be applicable
retroactively,4 the answer to the factual aspect of the event’s time is from the time
the offense was validated onward. And because this type of offense may not be
applicable extraterritorially under general expansions,5 the answer to the aspect of
the event’s location is within the territorial jurisdiction of the sovereign under
general expansions.
Nevertheless, the answer to the factual aspect towards the general description of
the occurrence must be incorporated directly into the definition of the offense. This
factual aspect addresses the core of the offense, and it cannot be answered through
the principle of legality. This approach is the basis of the modern structure of the

1
Dugdale, (1853) 1 El. & Bl. 435, 118 Eng. Rep. 499, 500:

. . .the mere intent cannot constitute a misdemeanour when unaccompanied with any act;
Ex parte Smith, 135 Mo. 223, 36 S.W. 628 (1896); Proctor v. State, 15 Okl.Cr. 338, 176 P. 771
(1918); State v. Labato, 7 N.J. 137, 80 A.2d 617 (1951); Lambert v. State, 374 P.2d 783 (Okla.
Crim.App.1962); In re Leroy, 285 Md. 508, 403 A.2d 1226 (1979).
2
For the principle of legality in criminal law see GABRIEL HALLEVY, A MODERN TREATISE ON THE
PRINCIPLE OF LEGALITY IN CRIMINAL LAW (2010).
3
Ibid at pp. 135–137.
4
Ibid at pp. 49–80.
5
Ibid at pp. 81–132.

factual element requirement, which consists of three main components: conduct,
circumstances, and results.
The conduct is a mandatory component, whereas circumstances and results are
not. Thus, if the specific offense is defined as having no conduct requirement, it is
not legal, and the courts may not convict individuals based on such a charge and no
criminal liability may be imposed on anyone accordingly. Thus, the conduct
component is at the heart of the answer to the factual aspect towards the general
description of the occurrence.
Status offenses, in which the conduct component is absent, are considered
illegal, and in general they are abolished when discovered.6 But the absence of
circumstances or results in the definition of an offense does not invalidate the
offense.7 These components are aimed at meeting the factual element requirement
with greater accuracy than by conduct alone. Thus, there are four possible formulas
that can satisfy the factual element requirement:

(a) conduct;
(b) conduct + circumstances;
(c) conduct + results; and
(d) conduct + circumstances + results.

In all of these four formulas the conduct component must be required, otherwise
the offense is not considered to be legal, and no criminal liability may be imposed
for committing it. For instance, the homicide offenses are very similar in their
factual element requirement, which may be expressed by the formula: “whoever
causes the death of another person, . . ..”. In this typical formula the word “causes”
functions as conduct, the word “person” functions as circumstances and the word
“death” functions as results. This offense is, consequently, considered to require
conduct, circumstances, and results within its factual element requirement.
Most offenses’ definitions contain only the factual element requirement as the
mental element requirement may be easily deduced from the general provisions
of the criminal law.
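The four formulas may be represented schematically. The following sketch is illustrative only; the offense definitions used (a homicide-like offense and a conduct-only offense) are assumptions for demonstration and not statutory text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OffenseDefinition:
    """Factual element requirement: conduct is mandatory, the other components optional."""
    conduct: str
    circumstances: Optional[str] = None
    results: Optional[str] = None

    def is_legal(self) -> bool:
        # An offense defined without a conduct component is not considered legal.
        return bool(self.conduct)

# Formula (d): conduct + circumstances + results, as in the homicide example.
homicide = OffenseDefinition(conduct="causes",
                             circumstances="another person",
                             results="death")

# Formula (a): conduct only.
conduct_only = OffenseDefinition(conduct="drives a vehicle")

print(homicide.is_legal(), conduct_only.is_legal())  # True True
```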

3.1.2 Derivative Criminal Liability

Each derivative criminal liability form must meet factual element requirements.
These requirements are formed within a general template into which content is
filled. That includes the templates for criminal attempt, joint-perpetration, perpe-
tration-through-another, incitement, and accessoryship, and the content that must
be filled into each of the templates.

6
See, e.g., Robinson v. California, 370 U.S. 660, 82 S.Ct. 1417, 8 L.Ed.2d 758 (1962).
7
GLANVILLE WILLIAMS, CRIMINAL LAW: THE GENERAL PART sec. 11 (2nd ed., 1961).

The factual element requirement of criminal attempt is always relative to the
factual element of the object-offense. The factual element requirement varies from
offense to offense, but the factual element of criminal attempt is always relative to that of
the object-offense. This relativity is manifest in the absence of at least one factual
element component in relation to the object-offense. If no factual element component
is absent, the attempt has succeeded and has become the completed offense.
The absence of factual element components varies with every attempt, but as long
as even one component is absent, the offense remains within the range of attempt.
The identity of the absent component is immaterial as far as meeting the factual
element requirement of the attempt is concerned.
The absent component may be conduct, circumstances, results, or a portion of
any of them. In result offenses, in most cases the absent component is the result
because in general it is the result that completes the commission of the offense, as
the last step of perpetration. If the offense has not been completed, it is likely that
the result component is missing. For example, murder requires the specific result of
death. A shoots B purposefully in order to kill him. If B dies, A is criminally liable
for murder. If B survives, even severely injured, A is criminally liable for attempted
murder. The absence of the result in the second scenario makes the offense
attempted murder, and the presence of the result in the first scenario makes it
murder.
The absent component need not necessarily be the result, however, not even in
result offenses. Any absent component of the factual element requirement prevents
the attempt from reaching full commission of the offense, so that the absence of any
of the factual element components may fulfill the factual element requirement of
the attempt. The absence of the conduct component is exemplified in the case of
attempted rape, if the offender fails to penetrate the victim’s vagina because of a
physical problem. The offense of rape requires such penetration as the conduct
component of the factual element requirement of rape. Because the conduct component
is absent in this example, the offense is considered attempted rape.
Absence of the conduct component raises the question of whether it is
legitimate to impose criminal liability on the attempter given that a component of
the factual element is missing. But the conduct component is mandatory for the
factual element of the object-offense, not necessarily for derivative criminal liability.
Inaction may be a legitimate form of conduct in derivative criminal liability, but
only there; inaction is not legitimate in the perpetration of object-offenses,
primarily because it incriminates the offender's status rather than his behavior.
Consequently, under these circumstances inaction functions as conduct; in other
words, from the offender's point of view it is the best way to execute the criminal plan.
An example of the absence of the circumstance component is the case of
attempted statutory rape (consensual sex with a minor). A has consensual sex
with B believing that she is a minor. Later A finds out that B is an adult. The
circumstance component of the statutory rape offense requires that the consenting
person be a minor, and given that both A and B were adults, the circumstance
component is absent so that the factual event is considered attempted statutory

rape.8 An example of the absence of the results component is the above case of
attempted murder.
The absence of one component may form the upper borderline of the factual
element requirement for the attempt. The question that arises is what the minimal
factual element requirement for the criminal attempt is. The answer overlaps the
lower borderline of the criminal attempt according to its general course. The
minimal factual element may consist of the absence of all three components:
conduct, circumstances, and results. Such sweeping absence, however, requires
that the delinquent event enter within the range of criminal attempt, so that only
after the attempter made the decision to execute the criminal plan can the delin-
quent event be considered an attempt.
If according to the criminal plan, after the decision to execute the plan has been
made, the attempter begins executing the plan by inaction, and none of the factual
element requirements are met, the action is still considered an attempt. Thus, the
general template for the factual element requirement of the criminal attempt is
relative to the object-offense. The absence of any factual element component of the
object-offense's factual element requirement can still satisfy the factual element
requirement of the criminal attempt, as long as the event has entered the range of
an attempt, i.e., the decision to commit the offense has been made.
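This relativity may be illustrated schematically: once the decision to commit the offense has been made, any missing factual element component keeps the event within the range of attempt, and only the presence of all components completes the offense. The sketch below is illustrative only, and its component names are assumptions for demonstration.

```python
from typing import Set

def classify(required: Set[str], present: Set[str], decision_made: bool) -> str:
    """Classify a delinquent event relative to the object-offense's factual element."""
    if not decision_made:
        return "outside the range of attempt (no decision to execute the plan)"
    missing = required - present
    if not missing:
        return "completed offense (no factual element component is absent)"
    return f"attempt (absent components: {sorted(missing)})"

# A murder-like object-offense requires conduct, circumstances, and results.
required = {"conduct", "circumstances", "results"}

# A shoots B intending to kill, but B survives: the result component is absent.
print(classify(required, {"conduct", "circumstances"}, decision_made=True))
```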
The general template requirement of joint-perpetration is that all factual element
components of the object-offense be met in the case of joint-perpetration as if the
offense were perpetrated by one person. The factual element components are not
required to be present separately for each of the joint-perpetrators as if they were
separate principal perpetrators, but the required factual elements may be met jointly
by the joint-perpetrators provided that all components are present. The exact
division of the relevant components among the joint-perpetrators is immaterial;
some of the components may be fulfilled by one of the joint-perpetrators and some
by others.
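The collective fulfillment of the factual element may be illustrated schematically as a union over the contributions of the joint-perpetrators: it suffices that the group, acting as one body, covers all required components, however they are divided. The sketch is illustrative only; the perpetrators and components named in it are assumptions for demonstration.

```python
from typing import Dict, Set

def collectively_fulfilled(required: Set[str],
                           contributions: Dict[str, Set[str]]) -> bool:
    """All required components must be covered by the joint enterprise as one body;
    the exact division among the joint-perpetrators is immaterial."""
    covered: Set[str] = set().union(*contributions.values()) if contributions else set()
    return required <= covered

required = {"conduct", "circumstances", "results"}

# A plans and pays; B, the hired assassin, physically commits the offense.
contributions = {"A": set(), "B": {"conduct", "circumstances", "results"}}
print(collectively_fulfilled(required, contributions))  # True
```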
There is no minimal portion of the factual element components that must be
physically committed by each of the joint-perpetrators. This requirement treats all
members of the joint enterprise as one body that is liable for the commission of the
offense. This body may have different organs, but it is not necessary to relate
specific components to specific organs. By analogy, the different components of
murder need not be related to different organs of the murderer (e.g., stabbing to his
hand, standing in front of the victim to his legs, etc.); similarly, it is not required to
relate each of these components specifically to individuals among the joint-
perpetrators.
It is sufficient to relate the complete fulfillment of the factual element require-
ment jointly to the offenders as one body in order to impose criminal liability on the
joint-perpetrators.9 Commission of the offense through joint-perpetration is the

8
Gabriel Hallevy, Victim’s Complicity in Criminal Law, 2 INT’L J. OF PUNISHMENT & SENTENCING
72 (2006).
9
Manley, (1844) 1 Cox C.C. 104; State v. Bailey, 63 W.Va. 668, 60 S.E. 785 (1908).

same as the execution of the criminal plan as planned by the joint-perpetrators. At
the preparatory (conspiracy) stage, all joint-perpetrators functioned as conspirators
and agreed as one body to execute the plan. The specific role planned for each of the
perpetrators is chosen, in general, according to their skills, but that role is
immaterial for their classification as joint-perpetrators.10
For instance, A hires the services of B, a professional assassin, to murder C. The
conspiracy to murder C contains an agreement that guarantees a fee to be paid to B
by A. According to the agreement, B is to assassinate C alone. Although none of the
factual element components of the murder are associated with A, he is classified as
a joint-perpetrator of murder. The division of the actual (physical) parts among the
joint-perpetrators is immaterial for this classification. This situation is common in
organized crime. Most leaders of such organizations prefer to stay away from the
physical perpetration of the offense, hoping in this way to be immune from
conviction. Their role is to direct the commission of the offense along the lines of
the conspiracy. They participate in the conspiracy, including making the decision
about committing the offense, but they do not participate in its physical execution.
Occasionally, the conspiracy takes place at several levels: the leaders of the
organization conspire among themselves to commit an offense of a certain type,
after which direction is passed to the executive staff, who conspire to carry out the
offense physically. All these conspirators are considered to be joint-perpetrators,
regardless of the actual role they play in the commission of the offense. This concept
of collective conduct is part of the modern conceptualization of joint-perpetration in
criminal law.
The collective fulfillment of the factual element makes it unnecessary to exam-
ine the presence of factual element components for each of the perpetrators. If all
components of factual element were present for all perpetrators, the offense would
not be considered joint-perpetration but as committed by multiple principal
perpetrators. The latter situation disregards the special connection between the
joint-perpetrators, which makes them part of one joint enterprise dedicated to the commission of the same offense. This legal concept is also relevant in situations in
which the components of the factual element cannot be related particularly to
individual offenders for lack of evidence, but the complete perpetration can be
related to the group as one body.
For example, it is proven that A and B together caused C’s death, but it is not
known who was the person who actually pulled the trigger. Both A and B are
considered joint-perpetrators of murder. The collective conduct concept is, there-
fore, valuable for its evidentiary aspects. Acceptance of this concept, however, may

10
Nye & Nissen v. United States, 336 U.S. 613, 69 S.Ct. 766, 93 L.Ed. 919 (1949); Pollack
v. State, 215 Wis. 200, 253 N.W. 560 (1934); Baker v. United States, 21 F.2d 903 (4th Cir.1927);
Miller v. State, 25 Wis. 384 (1870); Anderson v. Superior Court, 78 Cal.App.2d 22, 177 P.2d
315 (1947); People v. Cohen, 68 N.Y.S.2d 140 (1947); People v. Luciano, 277 N.Y. 348, 14 N.
E.2d 433 (1938); State v. Bruno, 105 F.2d 921 (2nd Cir.1939); State v. Walton, 227 Conn. 32, 630
A.2d 990 (1993).

hurt some reasonable doubt defenses based on the accurate identification of the
actual (physical) offender.
For example, A and B, two identical twins, conspire to murder C. According to
the plan, A is to kill C in a mall where he would be filmed by the security cameras, while at exactly the same time B would commit theft in another mall where he
would also be filmed by security cameras. At their trial they argue for reasonable
doubt because the prosecution cannot identify the actual (physical) murderer
beyond reasonable doubt. Another example is of D and E who tie F to a tree.
Both of them shoot F, who is killed, but only one bullet is found in F’s corpse, and it
is not known whose bullet it was. At their trial they argue for reasonable doubt
because the prosecution cannot identify the actual murderer beyond reasonable
doubt.
In both examples, the legal consequences are conviction of both offenders as
joint-perpetrators of murder. The collective conduct concept eliminates the need for
relating the components of the factual element exactly to the individual physical
perpetrators. In both examples, the offenders conspired jointly to commit the
murder, and executed their plan accordingly. They are both joint-perpetrators,
regardless of their physical contribution to the commission of the offense.11
Analytically, there can be four types of collective conduct in joint-perpetration.
The first is non-overlapping division of the factual element components, with the
components divided between the joint-perpetrators without overlap of the same
components across different joint-perpetrators, so that one of the perpetrators is
responsible for each component. Consider the case of an offense that contains four
components of the factual element, and there are three joint-perpetrators.
This type of non-overlapping division refers to situations in which only one
perpetrator is responsible for each component, but jointly all components of the
factual element are covered by the delinquent group acting as one body. The legal
consequence is that all joint-perpetrators are criminally liable for the commission of
the offense, regardless of their particular role in the actual perpetration, as long as
they are classified as joint-perpetrators (they participated in the conspiracy, decided
to execute the criminal plan, and began its execution).
The second type is a partially-overlapping division of the factual element
components, with the components divided among the joint-perpetrators and partial overlap of the same components across different perpetrators. Consequently,
several perpetrators are responsible for some of the components. This type of
partially-overlapping division refers to situations in which some joint-perpetrators,
but not all, are responsible for some of the components of the factual element, but
jointly all components of the factual element are covered by the delinquent group
acting as one body. The legal consequence is that all joint-perpetrators are crimi-
nally liable for the commission of the offense, regardless of their particular role in
the actual perpetration, as long as they are classified as joint-perpetrators.12

11
United States v. Bell, 812 F.2d 188 (5th Cir.1987).
12
Harley, (1830) 4 Car. & P. 369, 172 Eng. Rep. 744.

In the third type all the factual element components are covered by one of the
joint-perpetrators who is responsible for all the components. Concentration of all
factual element components in one joint-perpetrator is typical of offenses
committed by hierarchical criminal organizations, when the executive level (lead-
ership, inspection, advising, etc.) is separated from the operative members of the
organization. This type of collective conduct does not necessarily result in the
concentration of all components in the person of one joint-perpetrator, but occa-
sionally they are covered by several joint-perpetrators.
Concentration of all the factual element components in several joint-perpetrators
is typical of offenses committed by criminal organizations, when the operational
level is separated from the leadership. In smaller criminal organizations the leader
directs the commission of the offense at the conspiracy stage, and after the joint
decision has been made he steps aside and does not participate in the actual
execution of the criminal plan. Nevertheless, all the components of the factual
element have been fulfilled by the delinquent group that acted as one body. The
legal consequence is that all joint-perpetrators are criminally liable for the commis-
sion of the offense, including the leader, regardless of their specific role in the
actual perpetration, as long as they are classified as joint-perpetrators.13
In the fourth type, all the joint-perpetrators cover all the factual element
components. This type of coverage of all factual element components by all the
joint-perpetrators refers to situations in which all the perpetrators fully complete the
offense separately. In this type of situation, criminal liability may be imposed on
each of the perpetrators as a principal perpetrator, regardless of their joint enterprise,
and the advantage of the law of joint-perpetration with respect to the factual
element becomes insignificant. Participation in the conspiracy does not guarantee
the imposition of criminal liability in joint-perpetration, unless execution of the
criminal plan has begun.
If the offense has been committed separately by the various offenders, regardless
of any previous conspiracy, this is not joint-perpetration anymore but multi-
principal perpetration.14 But if the conspiracy produces a plan according to which

13
Bingley, (1821) Russ. & Ry. 446, 168 Eng. Rep. 890; State v. Adam, 105 La. 737, 30 So.
101 (1901); Roney v. State, 76 Ga. 731 (1886); Smith v. People, 1 Colo. 121 (1869); United States
v. Rodgers, 419 F.2d 1315 (10th Cir.1969).
14
Pinkerton v. United States, 328 U.S. 640, 66 S.Ct. 1180, 90 L.Ed. 1489 (1946); State v. Cohen,
173 Ariz. 497, 844 P.2d 1147 (1992); State v. Carrasco, 124 N.M. 64, 946 P.2d 1075 (1997);
People v. McGee, 49 N.Y.2d 48, 424 N.Y.S.2d 157, 399 N.E.2d 1177 (1979); State v. Stein,
94 Wash.App. 616, 972 P.2d 505 (1999); Commonwealth v. Perry, 357 Mass. 149, 256 N.E.2d
745 (1970); United States v. Buchannan, 115 F.3d 445 (7th Cir.1997); United States v. Alvarez,
755 F.2d 830 (11th Cir.1985); United States v. Chorman, 910 F.2d 102 (4th Cir.1990); United
States v. Moreno, 588 F.2d 490 (5th Cir.1979); United States v. Castaneda, 9 F.3d 761 (9th
Cir.1993); United States v. Walls, 225 F.3d 858 (7th Cir.2000); State v. Duaz, 237 Conn. 518, 679
A.2d 902 (1996); Harris v. State, 177 Ala. 17, 59 So. 205 (1912); Apostoledes v. State, 323 Md.
456, 593 A.2d 1117 (1991); State v. Anderberg, 89 S.D. 75, 228 N.W.2d 631 (1975); Espy v. State,
54 Wyo. 291, 92 P.2d 549 (1939); State v. Hope, 215 Conn. 570, 577 A.2d 1000 (1990).

the offense is to be committed separately by all the offenders (e.g., one serving as a
backup for the other), the offense is joint-perpetration.
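The "one body" test that runs through this discussion can be restated schematically. The following sketch is offered purely as an illustration; the component labels, perpetrator names, and the Python formulation are hypothetical and are not drawn from any statute or case. It asks only whether the components performed by the members of the joint enterprise, taken together, cover every component of the offense, so that all four types of division yield the same answer.

```python
# Illustrative sketch only: the "one body" test for joint-perpetration.
# Component labels and perpetrator names are hypothetical.

REQUIRED_COMPONENTS = {"conduct", "circumstances", "results"}  # schematic offense

def factual_element_met_jointly(components_by_perpetrator,
                                required=REQUIRED_COMPONENTS):
    """True if the group, acting as one body, covers every required component.

    Whether the division is non-overlapping, partially overlapping,
    concentrated in one member, or duplicated by all is immaterial.
    """
    covered = set()
    for components in components_by_perpetrator.values():
        covered |= set(components)
    return required <= covered

# Third type of division: the leader performs no component, the operative all.
division = {"leader": [], "operative": ["conduct", "circumstances", "results"]}
print(factual_element_met_jointly(division))  # True: both are joint-perpetrators
```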
The general template requirement of perpetration-through-another is that all
factual element components of the offense are covered as if by one person. Thus,
the components of the physical acts of both the perpetrator-through-another and of
the other person are considered as if they were committed by one person. It is not
necessary for each person to separately meet the requirements for factual element
components as if they were separate principal perpetrators. The factual element
requirement may be met jointly by both persons, provided that all the components
are present. The exact division of the relevant components between the two persons is immaterial. Some of the components may be covered by one person, some by the
other.
There is no minimal portion of the factual element components that must be
physically committed by either of the two. The template requirement regards the
perpetrator-through-another to be the person responsible for the commission of the
offense as principal perpetrator, whether he committed the offense directly or used
instrumentally another person, denying his free choice. The arm of the perpetrator
may be greatly extended through the other person, but the arm functions as a mere
instrument, without a real opportunity to choose. It is immaterial which person
physically fulfills the factual element requirement as long as the perpetrating body
as a whole covers all the components.15
The criminal liability of the perpetrator-through-another for the commission of
the offense is based on the execution of the criminal plan that was completed earlier
by the perpetrator-through-another, and on the execution of the plan by another
person being used instrumentally. For example, A wishes to rob a bank and plans
for B to break into the bank and remove the money from the safe. The plan is
executed while A is far away from the bank. Apparently, B provides the physical
fulfillment of the factual element components of robbery, whereas A did not participate physically in the robbery. But because of the instrumental use of B by A, A is considered the perpetrator-through-another of the robbery, although he did not personally perform any specific factual element component.
According to this view of factual element requirement, the perpetrator-through-
another is conceptualized as the principal perpetrator, functionally identical with
any other principal perpetrator who instrumentally uses other devices for the
commission of the offense. The fact that in this case the “device” happens to be
human is immaterial in this context. Analytically, the factual element requirement
can be met in four ways in the case of perpetration-through-another.
The first way is non-overlapping division of the factual element components,
with the components divided between the perpetrator-through-another and the other
person without overlap of the same components between them, so that one person is
responsible for each component. Consider the case of an offense that contains four
components of the factual element, and there are two persons: one perpetrator-

15
Dusenbery v. Commonwealth, 220 Va. 770, 263 S.E.2d 392 (1980).

through-another and another person being used instrumentally. This type of non-overlapping division refers to situations in which a portion of the factual
element components is covered physically by the perpetrator-through-another and
the other portion by the other person through instrumental use of him.
These situations lead to the imposition of criminal liability on the perpetrator-
through-another as a principal perpetrator. Because the factual element requirement
is met by this enterprise of perpetration-through-another, criminal liability can be
imposed. The criminal liability of the other person is examined based on his
personal characteristics, as discussed above (i.e., “innocent agent” or “semi-inno-
cent agent”).16 The second type is a partially-overlapping division of the factual
element components, with the components divided between the perpetrator-
through-another and the other person, with partial overlap of the same components
between them, so that both persons are responsible for some of the components.
This type of partially overlapping division of the factual element components
between the two persons refers to situations in which some of the components are
physically committed by the perpetrator-through-another, some by the other person,
and some by both. The perpetrator-through-another made instrumental use of the
other person for some of the factual element but not for all of them. The partial
overlap is immaterial. These situations lead to the imposition of criminal liability
on the perpetrator-through-another as a principal perpetrator. Because the factual
element requirement is met by this enterprise of perpetration-through-another,
criminal liability can be imposed. The criminal liability of the other person is
examined based on his personal characteristics, as discussed above (i.e., “innocent
agent” or “semi-innocent agent”).
In the third type all the factual element components are covered by the other
person, who is responsible for all the components, and the perpetrator-through-
another does not participate at all in the physical commission of the offense.17
Concentration of all factual element components in the other person indicates full
instrumental use of the other person by the perpetrator-through-another, a classic
case of perpetration-through-another.
These situations lead to imposition of criminal liability on the perpetrator-
through-another as a principal perpetrator. Because the factual element requirement
is met by this enterprise of perpetration-through-another, criminal liability can be
imposed. The criminal liability of the other person is examined based on his
personal characteristics, as discussed above (i.e., “innocent agent” or “semi-inno-
cent agent”). In the fourth type all the factual element components are covered by
the perpetrator-through-another, who is responsible for all the components, and the
other person does not participate at all in the physical commission of the offense.

16
NICOLA LACEY AND CELIA WELLS, RECONSTRUCTING CRIMINAL LAW – CRITICAL PERSPECTIVES ON
CRIME AND THE CRIMINAL PROCESS 53 (2nd ed., 1998).
17
Butt, (1884) 49 J.P. 233, 15 Cox C.C. 564, 51 L.T. 607, 1 T.L.R. 103; Stringer and Banks, (1991)
94 Cr. App. Rep. 13; Manley, (1844) 1 Cox C.C. 104; Mazeau, (1840) 9 Car. & P. 676, 173 Eng.
Rep. 1006.

Concentration of all factual element components in the perpetrator-through-another represents principal perpetration of the offense by the perpetrator-
through-another. But these situations can still be relevant for perpetration-through-
another when the other person is used to prepare the conditions necessary for the
commission of the offense, before the actual commission has begun. For example,
A instrumentally uses B to leave a window open for him in the house. A few hours
later A breaks into the house using the open window. In these situations criminal
liability for the perpetration of the offense may be imposed regardless of the
perpetration-through-another law and the perpetrator-through-another is considered
as the principal perpetrator.
The general template for the factual element requirement of incitement involves
causing another person, using any means available, to commit an offense out of free
choice and with full awareness.18 The factual element requirement of incitement is
similar to that of result offenses. The factual core of incitement is causing another
person to commit the offense. Commission of the offense in this context may be
satisfied by bringing the incited person into the social endangerment sphere by causing him to commit an offense out of free choice and with full awareness.
Perpetration-through-another can also cause another person to physically com-
mit the offense. The difference between perpetration-through-another and incite-
ment, in this context, is that incitement must cause the incited person to choose
freely and with full awareness to commit the offense, but not necessarily to
commit it. The causation required in incitement is between the inciting conduct
and the aware and free choice to commit the offense. The perpetrator-through-
another is required to make instrumental use of the other person, not to motivate
him mentally to make that aware and free choice. Incitement requires that the aware
and free choice to commit the offense be that of the incited person, not of the inciter.
Thus, the factual element requirement of incitement includes all three types of
components: conduct, circumstances, and results. The factual element of incitement

18
Compare article 26 of the German penal code which provides:

Als Anstifter wird gleich einem Täter bestraft, wer vorsätzlich einen anderen zu dessen
vorsätzlich begangener rechtswidriger Tat bestimmt hat;

Article 121-7 of the French penal code provides:

Est également complice la personne qui par don, promesse, menace, ordre, abus d’autorité
ou de pouvoir aura provoqué à une infraction ou donné des instructions pour la commettre;

Section 5.02(1) of THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT AND
EXPLANATORY NOTES 76 (1962, 1985) provides:

A person is guilty of solicitation to commit a crime if with the purpose of promoting or facilitating its commission he commands, encourages or requests another person to engage
in specific conduct that would constitute such crime or an attempt to commit such crime or
would establish his complicity in its commission or attempted commission.

is not derived from the object-offense because incitement relies on external deriva-
tion and its components are not dependent on the factual element of the object-
offense. The conduct component consists of measures taken by the inciter to cause
the incited person to make the decision to commit the offense freely and with full
awareness of his acts. The purpose of these measures contradicts instrumental use
of the incited person in the commission of the offense. The measures of incitement
may include seduction, solicitation, convincing, encouragement, abetting, advice,
and more. As long as the measures do not contradict the aware and free choice of
the incited person, they may be considered to be part of the incitement.
The circumstances component of incitement consists of the identity of the
incited person. The incited person cannot be the same person as the inciter. This
requirement is intended not only to eliminate cases of self-principal-incitement,
which are classified as principal perpetration, the offender having persuaded him-
self to commit the offense; it is also meant to differentiate incitement from
conspiracy. If the inciter participates in the early planning of the offense, he is
not considered an inciter but a conspirator or joint-perpetrator.
For example, A solicits B to jointly commit an offense, and B agrees to A’s
proposal. This agreement is a clear expression of conspiracy, not of incitement. As
both perpetrators plan the commission of the offense together and agree to commit
it, A’s conduct is part of the conspiracy. Conspiracy may include the inner efforts of
some of the conspirators, which may resemble incitement but still be conspiracy.
The result component of incitement consists of the decision of the incited person to
commit the offense. Successful incitement does not necessarily include the actual
commission of the offense.
Because incitement consists of “planting” the delinquent idea in the incited
person’s mind, this purpose is fulfilled when the incited person enters the sphere
of social endangerment, which begins with the decision made freely and with full
awareness to commit the offense. It is immaterial, therefore, as far as the inciter’s
criminal liability is concerned, whether or not the incited person completed the
commission of the offense. The incitement is complete when the incited person
makes the decision to commit the offense. All three components must be present in
order to impose criminal liability for incitement.
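The three components of incitement may be summarized schematically. In the sketch below, which is only an illustration with hypothetical names, the "result" is the incited person's free and aware decision to commit the offense, not the commission of the object-offense itself.

```python
# Illustrative sketch only: the factual element of incitement is complete once
# the incited person freely and with full awareness decides to commit the
# offense, whether or not the offense is eventually committed.

def incitement_factual_element_met(inciter, incited, measures_taken,
                                   incited_decided_to_commit):
    conduct = bool(measures_taken)        # solicitation, encouragement, advice, etc.
    circumstance = inciter != incited     # the incited person must be another person
    result = incited_decided_to_commit    # the decision itself, not its execution
    return conduct and circumstance and result

print(incitement_factual_element_met("A", "B", ["solicitation"], True))    # True
print(incitement_factual_element_met("A", "A", ["self-persuasion"], True)) # False
```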
The general template for the factual element requirement of accessoryship
includes any conduct characterized not by factual attributes but by its purpose, as
expressed by the mental element requirement of accessoryship.19 That purpose is to
render assistance to the perpetrators of the offense, and it does not relate directly to

19
See e.g., article 27(1) of the German penal code provides:

Als Gehilfe wird bestraft, wer vorsätzlich einem anderen zu dessen vorsätzlich begangener
rechtswidriger Tat Hilfe geleistet hat;
Article 121-7 of the French penal code provides:

Est complice d’un crime ou d’un délit la personne qui sciemment, par aide ou assistance, en
a facilité la préparation ou la consommation;

the commission of the offense. In other words, the accessory’s purpose is not the
successful commission of the offense, but to render assistance to the perpetrators.
The accessory’s motive may be to contribute to the completion of the offense, but
this is not necessary for the imposition of criminal liability for accessoryship.
The factual element requirement of accessoryship is similar to that of conduct
offenses. The core of the factual element of accessoryship is the conduct that is
intended to render assistance to the perpetrators, which includes any measures taken
by the accessory to render assistance to the perpetrators. These measures may
include various types of assistance, according to the accessory’s understanding;
in different offenses, the accessories may use different types of conduct.
The circumstance component of accessoryship consists of the timing of the
accessoryship: the assisting conduct must be rendered before the commission of
the offense or simultaneously with it. If the assisting conduct has been rendered
after the completion of the offense, it is no longer accessoryship. If the accessory
renders the assistance after the completion of the offense, according to the early
planning in which he participated, he is a joint-perpetrator. If the accessory did not
participate in the early planning, he is considered to be an accessory after the fact.
The factual element of accessoryship does not require the component of results,
and therefore the accessory is not required to render effective assistance to the
perpetrators or to contribute to the actual commission of the offense. Even if the
accessory interferes with the commission of the offense or prevents its completion,
his action is still considered to be accessoryship. If the accessory subjectively
considered his conduct to be rendering assistance to the perpetrators and committed
his act with this purpose, the factual element of the accessoryship is satisfied.
For example, A knows that B intends to break into C’s house at a certain time.
With the purpose of assisting B, he calls C outside so that C would not oppose B. C
walks out of the house but locks the door behind him, making the burglary more
difficult to commit. Although A in practice hindered the commission of the offense,
he is still considered an accessory because according to A’s subjective understand-
ing, his conduct was intended to render assistance to B in committing the offense.
Because this was the purpose of A’s action and because it occurred before the
commission of the offense, A’s conduct is considered accessoryship. Both the
conduct and circumstance components must be present in order to impose criminal
liability for accessoryship.

Article 8 of the Accessories and Abettors Act, 1861, 24 & 25 Vict. c.94 as amended by the
Criminal Law Act, 1977, s. 65(4) provides:

Whosoever shall aid, abet, counsel, or procure the commission of any indictable offence,
whether the same be an offence at common law or by virtue of any Act passed, shall be
liable to be tried, indicted, and punished as a principal offender.

3.2 Commission of External Element Components by Artificial Intelligence Technology

The factual element requirement structure (actus reus) is identical for all types of offenses, regardless of their mental element requirement. This structure
contains one mandatory component (conduct) in all offenses and two possible,
but not mandatory, components in some offenses (circumstances and results). The
capability of artificial intelligence technology to fulfill the factual element require-
ment under this structure is discussed below.
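This structure may also be pictured, purely schematically, as a simple data structure. The sketch below is an editorial illustration only, and the offense definitions in it are simplified, hypothetical examples rather than statutory formulations.

```python
# Schematic restatement of the actus reus structure discussed above:
# conduct is mandatory in every offense; circumstances and results are optional.
from dataclasses import dataclass, field

@dataclass
class FactualElement:
    conduct: str                                             # mandatory
    circumstances: list[str] = field(default_factory=list)  # optional
    results: list[str] = field(default_factory=list)        # optional

# Hypothetical, simplified offense definitions:
rape = FactualElement("having sexual intercourse",
                      circumstances=["with a woman", "without consent"])
homicide = FactualElement("causing", circumstances=["human being"],
                          results=["death"])
```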

3.2.1 Conduct

Conduct, as aforesaid, is the only mandatory component of the factual element requirement, i.e., an offense which does not require conduct is illegitimate, and no criminal liability may legitimately be imposed according to it. In indepen-
dent offenses the conduct component of the factual element requirement may be
expressed by both act or omission. In derivative criminal liability the conduct
component may be expressed also by inaction under some restrictions. These
forms of conduct are to be examined in relation to the capabilities of artificial intelligence machines.
Act in criminal law is defined as material performance through factual–external
presentation. According to this definition, the materiality of the act is manifest
through its factual–external presentation, which differentiates the act from
subjective-internal matters that are related to the mental element. Because thoughts
have no factual–external presentation, they are not related to the factual but to the
mental element. Will may initiate acts, but in itself it has no factual–external
presentation, and is considered to be part of the mental element. Consequently,
involuntary or unwilled actions, as well as reflexes, are still considered acts for the
factual element requirement.20
However, although unwilled acts or reflexes are still considered acts for the
factual element requirement, criminal liability is not necessarily imposed on such
offenders for reasons of the mental element requirement or general defenses. For
example, B physically pushes A in the direction of C. Although the physical contact
between A and C is involuntary for both of them, it is still considered an “act.” It is
likely that no criminal liability is imposed on A for assault, because the mental
element requirement for assault has not been met. Acts committed as a result of loss
of self-control are still considered acts, and loss of self-control is a general defense
which exempts the offender from criminal liability if proven and accepted.

20
Fain v. Commonwealth, 78 Ky. 183, 39 Am.Rep. 213 (1879); Tift v. State, 17 Ga.App.
663, 88 S.E. 41 (1916); People v. Decina, 2 N.Y.2d 133, 157 N.Y.S.2d 558, 138 N.E.2d
799 (1956); Mason v. State, 603 P.2d 1146 (Okl.Crim.App.1979); State v. Burrell,
135 N.H. 715, 609 A.2d 751 (1992); Bonder v. State, 752 A.2d 1169 (Del.2000).

This definition of act concentrates on the factual aspects of the act, and does not
involve mental aspects in the definition of factual elements.21 The definition is also
broad enough to include actions which originate in telekinesis, psychokinesis, etc.,
if these are possible,22 as long as they have factual–external presentation.23 If “act”
is restricted only to “willed muscular contraction” or “willed bodily movement”,24
it would also prevent imposition of criminal liability in cases of perpetration-
through-another (e.g., B in the above example), since no act has been performed.
Thus according to these definitions, in order to assault anyone and be exempt from
criminal liability, the offender has to push an innocent person upon the victim.
In the past, the requirement of act consisted of willed bodily movement.25 Such a requirement involves mental aspects (will, in this case) within the factual element requirement, which is supposed to be a purely objective-external requirement. Therefore, modern criminal law rejects such a mongrel requirement. The question of will belongs to the mental element requirement, and it is there that it should be discussed. Consequently, when examining the factual element requirement, no aspects of the mental element requirement should be taken into consideration. Thus, the criminal law considers an act to be any material performance through factual–external presentation, whether willed or not.
Accordingly, artificial intelligence technology is capable of performing “acts”,
which satisfy the conduct requirement. This is true not only for strong artificial
intelligence technology, but for much lower technologies as well. When a machine (e.g., a robot equipped with artificial intelligence technology) moves its hydraulic arms or other devices, this is considered an act. That is correct when the movement is a result of the machine’s inner calculations, but not only then. Even if the machine is fully operated by a human operator through remote control, any movement of the machine is considered an act.
As a result, even sub-artificial intelligence technology machines have the factual capability of performing acts, regardless of the motives or reasons for the act. This does not necessarily mean that these machines are criminally liable for these acts, since the imposition of criminal liability is dependent on the mental element requirement

21
As do some other definitions, the most popular of which is “willed muscular movement”.
See HERBERT L. A. HART, PUNISHMENT AND RESPONSIBILITY: ESSAYS IN THE PHILOSOPHY OF LAW
101 (1968); OLIVER W. HOLMES, THE COMMON LAW 54 (1881, 1923); ANTONY ROBIN DUFF, CRIMINAL
ATTEMPTS 239–263 (1996); JOHN AUSTIN, THE PROVINCE OF JURISPRUDENCE DETERMINED (1832, 2000);
GUSTAV RADBRUCH, DER HANDLUNGSBEGRIFF IN SEINER BEDEUTUNG FÜR DAS STRAFRECHTSSYSTEM 75, 98
(1904); CLAUS ROXIN, STRAFRECHT – ALLGEMEINER TEIL I 239–255 (4 Auf., 2006); BGH 3, 287.
22
See, e.g., Bolden v. State, 171 S.W.3d 785 (2005); United States v. Meyers, 906 F. Supp. 1494
(1995); United States v. Quaintance, 471 F. Supp.2d 1153 (2006).
23
Scott T. Noth, A Penny for Your Thoughts: Post-Mitchell Hate Crime Laws Confirm a Mutating
Effect upon Our First Amendment and the Government’s Role in Our Lives, 10 REGENT U. L. REV.
167 (1998); HENRY HOLT, TELEKINESIS (2005); PAMELA RAE HEATH, THE PK ZONE: A CROSS-
CULTURAL REVIEW OF PSYCHOKINESIS (PK) (2003).
24
JOHN AUSTIN, THE PROVINCE OF JURISPRUDENCE DETERMINED (1832, 2000).
25
OLIVER W. HOLMES, THE COMMON LAW 54 (1881, 1923).

as well. However, for the question of performing an act in order to satisfy the
conduct component requirement, any material performance through factual–exter-
nal presentation is considered an act, whether the physical performer is a strong artificial intelligence entity or not.
Omission in criminal law is defined as inaction contradicting a legitimate duty to
act. According to this definition, the term “legitimate duty” is of great significance.
The opposite of action is not omission but inaction. If doing something is an act,
then not doing anything is inaction. Omission is an intermediate degree of conduct
between action and inaction. Omission is not mere inaction, but inaction that
contradicts a legitimate duty to act.26 Therefore, the omitting offender is required
to act but fails to do so. If no act has been committed, but no duty to act is imposed,
no omission has been committed.27
Therefore, punishing for an omission is punishing for doing nothing in specific situations where something should have been done due to a certain legitimate duty. For instance, in most countries parents have a legal duty to take care of their children. In these countries the breach of this duty may form a specific offense. The parent in this situation is not punished for acting in a wrong way, but for not acting although he had a legal duty to act in a specific way. The requirement to act must be
legitimate in the given legal system, and in most legal systems the legitimate duty
may be imposed both by law and by contract.28
For the question of differences as to the quality of criminal liability, the modern
concept of conduct in criminal law acknowledges no substantive or functional
differences between acts and omissions for the imposition of criminal liability.29
Therefore, any offense may be committed both by act and by omission. Socially and
legally, commission of offenses by omission is no less severe than commission by
act.30 Most legal systems accept this modern approach, and there is no need to
explicitly require omission to be part of the factual element of the offense.
The offense defines the prohibited conduct, which may be committed both
through acts and through omissions.31 On that ground, artificial intelligence

26
See e.g., People v. Heitzman, 9 Cal.4th 189, 37 Cal.Rptr.2d 236, 886 P.2d 1229 (1994); State
v. Wilson, 267 Kan. 550, 987 P.2d 1060 (1999).
27
Rollin M. Perkins, Negative Acts in Criminal Law, 22 IOWA L. REV. 659 (1937); Graham
Hughes, Criminal Omissions, 67 YALE L. J. 590 (1958); Lionel H. Frankel, Criminal Omissions:
A Legal Microcosm, 11 WAYNE L. REV. 367 (1965).
28
P. R. Glazebrook, Criminal Omissions: The Duty Requirement in Offences Against the Person,
55 L. Q. REV. 386 (1960); Andrew Ashworth, The Scope of Criminal Liability for Omissions,
84 L. Q. REV. 424, 441 (1989).
29
Lane v. Commonwealth, 956 S.W.2d 874 (Ky.1997); State v. Jackson, 137 Wash.2d 712, 976
P.2d 1229 (1999); Rachel S. Zahniser, Morally and Legally: A Parent’s Duty to Prevent the Abuse
of a Child as Defined by Lane v. Commonwealth, 86 KY. L. J. 1209 (1998).
30
Mavji, [1987] 2 All E.R. 758, [1987] 1 W.L.R. 1388, [1986] S.T.C. 508, Cr. App. Rep.
31, [1987] Crim. L.R. 39; Firth, (1990) 91 Cr. App. Rep. 217, 154 J.P. 576, [1990] Crim. L.R. 326.
31
See, e.g., section 2.01(3) of THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT
AND EXPLANATORY NOTES (1962, 1985).

technology is capable of committing omissions (commission by omission), which satisfy the conduct requirement. This is true not only for strong artificial intelli-
gence technology, but for much lower technologies as well. Physically, commission
through omission requires doing nothing. There is no doubt that any machine is
capable of doing nothing, therefore any machine is physically capable of commit-
ting an omission.
Of course, for the inaction to be considered an omission, there should be a legal duty which contradicts the inaction. If such a duty exists, originating in law or contract, and the duty is addressed to the machine, there is no doubt that the machine is capable of committing an omission towards that duty. This is the
situation regarding inaction as well. Inaction is the complete factual opposite of
an act. If an act is to do something, inaction is not to do it or to do nothing. Whereas
omission is inaction contradicting a legitimate duty to act, inaction requires no such
contradiction.
Omission is not to do when there is a duty to do, whereas inaction is not to do
when there is no duty to do anything. Inaction is accepted as a legitimate form of conduct only in derivative criminal liability (e.g., attempt, joint-perpetration, perpetration-through-another, incitement, accessoryship, etc.), and not in complete and independent offenses.32 In the instances in which inaction is accepted as a legitimate form of conduct, it is physically committed in the same way as an omission.
Consequently, if artificial intelligence technology is capable of commission
through omission of conduct, it is capable of commission through inaction. It
does not necessarily mean that machines or robots are automatically criminally
liable for these omissions or inactions, since the imposition of criminal liability is
dependent on the mental element requirement as well, and not only on the satisfac-
tion of the factual element requirement. Thus, the mandatory component of the
factual element requirement (conduct) is capable of being committed through
machines.
These machines are not required to be very sophisticated, and not even based on
artificial intelligence technology. Very simple machines are capable of performing
conduct under the definitions and requirements of criminal law. For the imposition
of criminal liability upon any entity this is an essential step, even if not a sufficient one. No criminal liability may be imposed if the conduct requirement is not satisfied, but conduct alone is not sufficient for the imposition of criminal liability.
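The distinction drawn above between act, omission, and mere inaction reduces to two questions, as the following illustrative sketch shows; it is a simplification for exposition only and not a statement of the rule of any particular legal system.

```python
# Illustrative classification of the conduct component, following the
# definitions above: an act is a material performance with factual-external
# presentation; an omission is inaction contradicting a legitimate duty to act;
# mere inaction contradicts no such duty.

def classify_conduct(material_performance: bool, duty_to_act: bool) -> str:
    if material_performance:
        return "act"        # willed or unwilled, human- or machine-performed
    if duty_to_act:
        return "omission"   # may ground liability in any offense
    return "inaction"       # relevant only in derivative criminal liability

print(classify_conduct(True, False))   # act
print(classify_conduct(False, True))   # omission
print(classify_conduct(False, False))  # inaction
```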

3.2.2 Circumstances

Circumstances describe the conduct, but do not derive from it. They paint the
offense with its criminal colors. For instance, the circumstances of “without
consent” in the offense of rape describe the conduct “having sexual intercourse”
as criminal. Having sexual intercourse is not necessarily criminal, unless it is

32
GABRIEL HALLEVY, THE MATRIX OF DERIVATIVE CRIMINAL LIABILITY 171–184 (2012).

“without consent”. As aforesaid, circumstances are not a mandatory component of the factual element requirement. Some offenses do require circumstances in addition to
conduct and some do not.
According to the circumstances’ definition, circumstances specify the criminal
conduct in more accurate terms. When defining specific offenses, circumstances are
required especially when the conduct component is too wide or vague, and it is necessary to specify it in order to avoid over-criminalization of situations which are considered legal by society. In most specific offenses, the circumstances
represent the factual data that make the conduct become criminal.
For example, as aforesaid, in most legal systems the conduct in the specific
offense of rape is having sexual intercourse, although the specific verb may vary.
But in itself, having sexual intercourse is not an offense, and it becomes one only if
it is committed without consent. The factual element of “without consent” is what
makes the conduct of “having sexual intercourse” criminal. In this offense, the
factual component “without consent” functions as circumstance.33
In addition, the factual component “with a woman” functions as a circumstance as well, for it describes the conduct, since raping a chair is not an offense. Ultimately, in this example, the factual element of the specific offense of rape is defined as “having sexual intercourse with a woman without consent”.34 Whereas “having sexual intercourse” is the mandatory conduct component, “with a woman without consent” functions as circumstance components which describe the conduct more accurately, so that the factual element requirement of rape is specified adequately in order to avoid over-criminalization.
According to the definition of circumstances, circumstances are not derived from
the conduct to allow distinguishing the circumstances from the results compo-
nent.35 For example, in homicide offenses the conduct is required to cause the
“death” of a “human being.” The death describes the conduct and also derives from
it because it is the conduct that caused the death. Therefore, in homicide offenses
“death” does not function as a circumstance. The factual data that functions as a
circumstance is “human being.” The victim has been a human being long before the
conduct took place, and therefore it does not derive from the conduct. But it also
describes the conduct (causing death of a human being, not of an insect), and
therefore functions as a circumstance.
As a result and on that ground, artificial intelligence technology is capable of
satisfying the circumstances component requirement of the factual element. When
the circumstances are external to the offender, the identity of the offender is
immaterial, therefore the offender may be human or machine, and that would not
affect these circumstances. For instance, in the above example of rape the
circumstances “with a woman” are external to the offender. The victim is required

33
See, e.g., Pierson v. State, 956 P.2d 1119 (Wyo.1998).
34
See, e.g., State v. Dubina, 164 Conn. 95, 318 A.2d 95 (1972); State v. Bono, 128 N.J.Super.
254, 319 A.2d 762 (1974); State v. Fletcher, 322 N.C. 415, 368 S.E.2d 633 (1988).
35
S.Z. Feller, Les Délits de Mise en Danger, 40 REV. INT. DE DROIT PÉNAL 179 (1969).

to be a woman regardless of the identity of the rapist. The woman is still considered a “woman” whether she has been attacked by a human, by a machine, or not attacked at all.
In some offenses the circumstances are not external to the offender, but are related somehow to the offender’s conduct. These circumstances are assimilated within the conduct component in this context. For instance, in the above example of rape the circumstances “without consent” describe the conduct as if they were part of it (how exactly did the offender have sexual intercourse with the victim?). To satisfy the requirement of this type of circumstances, the offender just has to commit the conduct, but in a more particular way. Consequently, for this type of circumstances, fulfilling the requirement is no different from committing the conduct.

3.2.3 Results and Causation

Results do not function as a mandatory component of the factual element requirement. Some offenses do require results in addition to conduct and some do not. Contrary to circumstances, results are defined as a factual component that derives from the
conduct. According to this definition, results specify the criminal conduct in more
accurate terms. Results are defined as deriving from the conduct to allow
distinguishing the results from the circumstances. For example, in homicide
offenses the conduct is required to cause the “death” of a “human being.” The
death describes the conduct and also derives from it because it is the conduct that
caused the death. Therefore, in homicide offenses “death” does not function as a
circumstance but as a result.36
Within the structure of the factual element requirement the results derive from
the conduct through factual causation. Although additional conditions exist for this
derivation,37 the factual requirement is factual causation. Consequently, proof of results requires proving factual causation.38 Factual causation is defined as a connection of derivation in which, were it not for the conduct, the results would not have occurred the way they have.

36
This is the results component of all homicide offenses. See SIR EDWARD COKE, INSTITUTIONS OF
THE LAWS OF ENGLAND – THIRD PART 47 (6th ed., 1681, 1817, 2001):

Murder is when a man of sound memory, and of the age of discretion, unlawfully killeth
within any county of the realm any reasonable creature in rerum natura under the king’s
peace, with malice aforethought, either expressed by the party or implied by law, [so as the
party wounded, or hurt, etc die of the wound or hurt, etc within a year and a day after the
same].
37
E.g., legal causation as part of the mental element requirement.
38
Henderson v. Kibbe, 431 U.S. 145, 97 S.Ct. 1730, 52 L.Ed.2d 203 (1977); Commonwealth
v. Green, 477 Pa. 170, 383 A.2d 877 (1978); State v. Crocker, 431 A.2d 1323 (Me.1981); State
v. Martin, 119 N.J. 2, 573 A.2d 1359 (1990).

According to this definition, the results are the ultimate consequences of the
conduct, i.e., causa sine qua non,39 or the ultimate cause. The factual causation
relates not only to the mere occurrence of the results but also to the way in which
they occurred. For example, A hits B and B dies. Because B was terminally ill, A may argue that B would have died anyway in the near future, so B’s death is not the
ultimate result of A’s conduct, and the results would have occurred anyway even if
it were not for A’s conduct. But because the factual causation has to do with the way
in which the results occurred, the requirement is met in this example: B would not
have died the way he did had A not hit him.
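The factual causation test can be stated as a short worked example. The sketch below is purely illustrative: it encodes the causa sine qua non question as it is asked in the terminally ill victim example, where the relevant comparison is the way in which the results occurred, not their mere occurrence.

```python
# Illustrative "but for" (causa sine qua non) test: causation exists if, but for
# the conduct, the results would not have occurred the way they actually did.
# B's eventual death from illness does not negate causation of B's death now.

def factual_causation(result_with_conduct: str, result_without_conduct: str) -> bool:
    return result_with_conduct != result_without_conduct

print(factual_causation("death by A's blow, now", "death by illness, later"))  # True
```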
As a result and on that ground, artificial intelligence technology is capable of
satisfying the results component requirement of the factual element. In order to
achieve the results the offender has to initiate the conduct. The commission of the
conduct forms the results, and the existence of the results is examined objectively, as to whether they derived from the very commission of the conduct.40 Thus, when the offender commits the conduct, and the conduct is done, the conduct (not the offender) is the cause of the results, if they occur. The offender is not required to commit, separately, any results, but only the conduct. Although the offender initiates the factual process which forms the results, this process is initiated only through the commission of the conduct component.41 Thus, since artificial intelligence technology is capable of committing conduct of all kinds, in the context of criminal law, it is capable of causing results out of this conduct.
For instance, when an artificial intelligence system operates a firing system and makes it shoot a bullet towards a human individual, this is fulfillment of the conduct component of homicide offenses. At that point the conduct is examined, through a causation test, to determine whether it caused that individual’s death. If it did, the results component is fulfilled as well as the conduct, although physically the system “did” nothing but the conduct component. Since fulfillment of the conduct component is within the capabilities of artificial intelligence technologies, so is fulfillment of the results component.

39
See, e.g., Wilson v. State, 24 S.W. 409 (Tex.Crim.App.1893); Henderson v. State, 11 Ala.App.
37, 65 So. 721 (1914); Cox v. State, 305 Ark. 244, 808 S.W.2d 306 (1991); People v. Bailey,
451 Mich. 657, 549 N.W.2d 325 (1996).
40
Morton J. Horwitz, The Rise and Early Progressive Critique of Objective Causation, THE
POLITICS OF LAW: A PROGRESSIVE CRITIQUE 471 (David Kairys ed., 3rd ed., 1998); Benge, (1865)
4 F. & F. 504, 176 Eng. Rep. 665; Longbottom, (1849) 3 Cox C. C. 439.
41
Jane Stapelton, Law, Causation and Common Sense, 8 OXFORD J. LEGAL STUD. 111 (1988).
4 Positive Fault Element Involving Artificial Intelligence Systems

Contents
4.1 Structure of Positive Fault Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.1.1 Independent Offenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1.2 Derivative Criminal Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2 General Intent and Artificial Intelligence Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.1 Structure of General Intent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.2 Cognition and Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.2.3 Volition and Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2.4 Direct Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.2.5 Indirect Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.2.6 Combined Liabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.3 Negligence and Artificial Intelligence Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.3.1 Structure of Negligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.3.2 Negligence and Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.3.3 Direct Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.3.4 Indirect Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.3.5 Combined Liabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.4 Strict Liability and Artificial Intelligence Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.4.1 Structure of Strict Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.4.2 Strict Liability and Artificial Intelligence Technology . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.4.3 Direct Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.4.4 Indirect Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.4.5 Combined Liabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

4.1 Structure of Positive Fault Element

The positive fault element of criminal liability is reflected by the mental element requirement of the offense. The general structure of the mental element requirement is consolidated for all types of criminal liability. Nevertheless, it may be more convenient to divide the discussion between the general structure within independent offenses and within derivative criminal liability.


4.1.1 Independent Offenses

The structure of the mental element requirement applies the fundamental principle
of culpability in criminal law (nullum crimen sine culpa). The principle of culpa-
bility has two main aspects: positive and negative. The positive aspect (what should
be in the offender’s mind in order to impose criminal liability) relates to the mental
element, whereas the negative aspect (what should not be in the offender’s mind in
order to impose criminal liability) relates to the general defenses.1
For instance, imposition of criminal liability for wounding another person
requires recklessness as mental element, but it also requires that the offender not
be insane. Recklessness is part of the positive aspect of culpability, and the general
defense of insanity is part of the negative aspect. The positive aspect of culpability
in criminal law has to do with the involvement of the mental processes in the
commission of the offense. In this context, it exhibits two important aspects:

(a) cognition; and
(b) volition.

Cognition is the individual’s awareness of the factual reality. In some countries, awareness is called “knowledge,” but in this context there is no substantive differ-
ence between awareness and knowledge, which may relate to data from the present
or the past, but not from the future.2 A person may assess or predict what will be in
the future, but not know or be aware of it. Prophecy skills are not required for
criminal liability. Cognition in criminal law refers to a binary situation: the offender
is either aware of fact X or not. Partial awareness has not been accepted in criminal
law, and it is classified as unawareness.
Volition has to do with the individual’s will, and it is not subject to factual
reality. An individual may want unrealistic events to occur or to have occurred, in the past, the present, and the future. Volition is not binary because there are different
levels of will. The three basic levels are positive (P wants X), neutral (P is
indifferent toward X), and negative (P does not want X). There also may be
intermediate levels of volition. For example, between the neutral and negative
levels there may be the rashness level (P does not want X, but takes an unreasonable risk towards it). If P absolutely had not wanted X, he would not have taken any unreasonable risk towards it.
Thus, a driver is driving a car behind a very slow truck. The car driver is in a hurry, but the truck is very slow. The car driver wants to detour around the truck; he makes the detour by crossing a continuous line and hits a motorcycle rider who was passing by. The collision causes the motorcycle rider’s death. The car driver did not want to cause the motorcycle rider’s death on purpose, but taking the unreasonable risk may prove

1
ANDREW ASHWORTH, PRINCIPLES OF CRIMINAL LAW 157–158, 202 (5th ed., 2006).
2
G.R. Sullivan, Knowledge, Belief, and Culpability, CRIMINAL LAW THEORY – DOCTRINES OF THE
GENERAL PART 207, 214 (Stephen Shute and A.P. Simester eds., 2005).

an intermediate level of volition. If the car driver absolutely would not have wanted
to cause any death to anyone, he would not have taken the unreasonable risk by
committing the dangerous detour.
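The two aspects behave differently: cognition is binary, whereas volition is graded. The sketch below restates this schematically; the numerical ordering and the threshold are illustrative devices only, not legal definitions.

```python
# Illustrative sketch of the two aspects of positive culpability discussed above.
from enum import IntEnum

class Volition(IntEnum):   # graded: several levels of will
    NEGATIVE = 0           # P does not want X
    RASHNESS = 1           # P does not want X, but takes an unreasonable risk
    NEUTRAL = 2            # P is indifferent toward X
    POSITIVE = 3           # P wants X

def cognition(awareness: float) -> bool:
    # Cognition is binary: partial awareness is classified as unawareness.
    return awareness >= 1.0

car_driver = Volition.RASHNESS   # the detouring car driver in the example above
print(cognition(1.0), car_driver)
```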
Both cognitive and volitive aspects are combined to form the mental element
requirement as derived from the positive aspect of culpability in criminal law. In
most modern countries, there are three main forms of mental element, which are
differentiated based on the cognitive aspect. The three forms represent three layers
of positive culpability and they are:

(a) general intent;
(b) negligence; and
(c) strict liability.

The highest layer of the mental element is that of general intent, which requires
full cognition. General intent is occasionally referred to as mens rea. The offender
is required to be fully aware of the factual reality. This form involves examination
of the offender’s subjective mind. Negligence is cognitive omission, and the
offender is not required to be aware of the factual element, although based on
objective characteristics he could and should have had awareness of it. Strict
liability is the lowest layer of the mental element; it replaces what was formerly
known as absolute liability. Strict liability is a relative legal presumption of
negligence based on the factual situation alone, which may be refuted by the
offender.
Cognition relates to the factual reality, as noted above. The relevant factual reality
in criminal law is that which is reflected by the factual element components. From
the perpetrator’s point of view, only the conduct and circumstance components of
the factual element exist in the present. The results components occur in the future.
Because cognition is restricted to the present and to the past, it can relate only to
conduct and circumstances.
Although results occur in the future, the possibility of their occurrence ensuing
from the relevant conduct exists in the present, so that cognition can relate not only
to conduct and circumstances, but also to the possibility of the occurrence of the
results. For example, in the case of a homicide, A aims a gun at B and pulls the
trigger. At this point he is aware of his conduct, of the existing circumstances, and
of the possibility of B’s death as a result of his conduct.
Volition is considered immaterial for both negligence and strict liability, and
may be added only to the mental element requirement of general intent, which
embraces all three basic levels of will. Because in most legal systems the default
requirement for the mental element is general intent, negligence and strict liability
offenses must specify explicitly the relevant requirement. The explicit requirement
may be listed as part of the definition of the offense or included in the explicit legal
tradition of interpretation.
If no explicit requirement of this type is mentioned, the offense is classified as a
general intent offense, which is the default requirement. The relevant requirement
may be met not only by the same form of mental element, but also by a higher level
form. Thus, the mental element requirement of the offense is the minimal level of
mental element needed to impose criminal liability.3 A lower level is insufficient
for imposing criminal liability for the offense.
According to the modern structure of mental element requirement, each specific
offense embodies the minimal requirements for the imposition of criminal liability,
and the fulfillment of these requirements is adequate for the imposition of criminal
liability. No additional psychological meanings are required. Thus, any individual
who fulfils the minimal requirements of the relevant offense is considered to be an
offender, and criminal liability may be imposed upon him.
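This layered structure and the "minimal level" rule can be restated schematically. The following Python sketch is purely illustrative; the class, the numeric ordering, and the function name are assumptions introduced for the example and are not drawn from any particular legal source:

from enum import IntEnum

class MentalElement(IntEnum):
    # Hypothetical ordering of the three forms discussed above:
    # a higher value satisfies any lower requirement.
    STRICT_LIABILITY = 1
    NEGLIGENCE = 2
    GENERAL_INTENT = 3  # mens rea, full cognition

def satisfies(proven: MentalElement, required: MentalElement) -> bool:
    # The proven form meets the offense's requirement only if it is not
    # lower than the minimal level defined by the specific offense.
    return proven >= required

# Proven general intent satisfies a negligence offense, but proven
# negligence is insufficient for a general intent offense.
assert satisfies(MentalElement.GENERAL_INTENT, MentalElement.NEGLIGENCE)
assert not satisfies(MentalElement.NEGLIGENCE, MentalElement.GENERAL_INTENT)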

4.1.2 Derivative Criminal Liability

Each derivative criminal liability form requires the presence of the mental element.
This requirement is formed within the general template into which the content is
filled. The mental element requirement must match its corresponding factual basis,
which is embodied in the factual element requirement, as discussed above.4
The centrality of the mental element requirement of the criminal attempt derives
from the essence of the attempt, which helps explain its social justification.5 This
requirement may be defined as specific intent to complete the offense accompanied
by general intent components relating to existing factual element components of the
object-offense. The factual element requirement of the criminal attempt derives
from the object-offense, but lacks some of its components.
The mental element of the attempt must match the factual element, but the
criminal attempt is executed owing to the purpose to complete the commission of
the offense. Consequently, the mental element of the attempt must reflect the
mental relation of the offender to existing factual element components and to the
purpose to complete the offense. The purposefulness characteristic of derivative
criminal liability reflects the mental relation to the delinquent event, and should
therefore be reflected in the mental element requirement.
Thus, the central axis of the mental element requirement of the attempt is that of
purposefulness. The attempter’s purpose is to complete the commission of the

3 See, e.g., article 2.02(5) of THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT AND EXPLANATORY NOTES 22 (1962, 1985), which provides:

When the law provides that negligence suffices to establish an element of an offense, such element also is established if a person acts purposely, knowingly or recklessly. When recklessness suffices to establish an element, such element also is established if a person acts purposely or knowingly. When acting knowingly suffices to establish an element, such element also is established if a person acts purposely.

4 Above at Chap. 3.
5 Robert H. Skilton, The Mental Element in a Criminal Attempt, 3 U. PITT. L. REV. 181 (1937); Dan Bein, Preparatory Offences, 27 ISR. L. REV. 185 (1993); Larry Alexander and Kimberly D. Kessler, General intent and Inchoate Crimes, 87 J. CRIM. L. & CRIMINOLOGY 1138 (1997).

offense. For example, A aims his gun at B and pulls the trigger but the bullet misses
B. The act can be considered attempted murder only if A’s purpose was to kill B;
otherwise attempted murder is not relevant to these factual elements. The social
endangerment in delinquent events of this type focuses not on the facts but on the
offender’s mind.
The reflection of purposefulness in the mental element requirement of the
attempt is broad, and it includes both cognitive and volitive aspects. Volition
embodies the purposefulness and cognition supports it, so that the two form a
double, cumulative requirement of mental element components:

(a) specific intent to complete the commission of the offense; and
(b) general intent in relation to the existing components of the factual element.

Specific intent in criminal law is the mental purpose, aim, target, and object of
the offender. It is the highest level of volition recognized by criminal law. The
purpose of the specific intent is the completion of the commission of the offense.6
As long as the offense has not been completed, completion of the offense exceeds
the factual element components that have taken place de facto in the course of the
delinquent event. The “regular” mental element components must relate to the
existing factual element components, as part of the mental element structure.
Accordingly, the purpose of the completion of the offense, which exceeds the
factual elements of the attempt, requires a special mental element component,
which is the specific intent.
Specific intent is “specific” for structural reasons, as it relates to objects that are
beyond the existing factual element and even beyond factual reality. In attempts,
the completion of the offense is not part of factual reality but of the offender’s will.
This will is so strong that it stands for the act (voluntas reputabitur pro facto). Such
strong will can be reflected only through the highest volition level accepted by
criminal law, which is embodied in the specific intent requirement. Lower levels do
not reflect such strong will. If A stabs B with the intent to kill him, the lethal will
cannot be reflected in indifference, rashness, or negligence.7 Only specific intent
can reflect that will.8
Some legal systems make a substantive distinction between the terms “specific
intent” and “general intent,” the latter relating to a broader sense of general intent.

6 Whybrow, (1951) 35 Cr. App. Rep. 141; Mohan, [1976] Q.B. 1, [1975] 2 All E.R. 193, [1975] 2 W.L.R. 859, 60 Cr. App. Rep. 272, [1975] R.T.R. 337, 139 J.P. 523; Pearman, (1984) 80 Cr. App. Rep. 259; Hayles, (1990) 90 Cr. App. Rep. 226; State v. Harvill, 106 Ariz. 386, 476 P.2d 841 (1970); Bell v. State, 118 Ga.App. 291, 163 S.E.2d 323 (1968); Larsen v. State, 86 Nev. 451, 470 P.2d 417 (1970); State v. Goddard, 74 Wash.2d 848, 447 P.2d 180 (1968); People v. Krovarz, 697 P.2d 378 (Colo.1985).
7 Donald Stuart, General intent, Negligence and Attempts, [1968] CRIM. L.R. 647 (1968).
8 Morrison, [2003] E.W.C.A. Crim. 1722, (2003) 2 Cr. App. Rep. 563; Jeremy Horder, Varieties of Intention, Criminal Attempts and Endangerment, 14 LEGAL STUD. 335 (1994).

Other legal systems make a structural distinction between specific intent, which
relates to purposes and motives, and "intent," which has to do with the occurrence
of results. But regardless of the term used by various legal systems, the relevant
mental element component that is required is the one that substantively reflects the
highest level of volition (positive will), and which structurally relates to the purpose
of the completion of the offense and not to a given component of the factual element
requirement.9 A lower level of will and lack of will to achieve the purpose are not
adequate for criminal attempt.
For example, A plays Russian roulette with B. When it is B's turn, A is careless as to whether B lives or dies. Because A does not will B's death, it is not considered an attempt on A's part. In another example, C drives behind a heavy and slow truck. The road is marked by a solid white dividing line, which prohibits passing. C takes an unreasonable risk, passes the truck, and narrowly misses D, who is riding a motorcycle in the opposite direction. Because C did not will D's death, the offense is not considered an attempt. Only if the offender acts with the purpose of completing the offense can the offense be considered a criminal attempt.
In general, the object of specific intent may be both a purpose and a motive, but
in attempts the object of specific intent is the purpose (not the motive) of the
completion of the offense. Because a high foreseeability of realization of the
purpose is accepted as a substitute for proof of specific intent, in attempts specific
intent may be proven by a proof of foreseeability.10 For example, A aims a gun at B
and pulls the trigger, but the bullet misses B. A argues that he did not intend to kill
B. But he knows (subjectively) that shooting a person creates a very high probabil-
ity for death. A is therefore presumed to foresee B’s death and is presumed to have
intended to kill B. Because B did not die, the presumed intent to kill B functions as
the specific intent to complete the offense (homicide), as required for imposing criminal liability for attempted homicide.
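The evidentiary role of the foreseeability presumption just described may be sketched as a simple decision rule. The following Python fragment is a hypothetical illustration only; the function and parameter names are assumptions made for the example:

def specific_intent_established(direct_intent_proven: bool,
                                high_subjective_foreseeability: bool) -> bool:
    # Illustrative rule: specific intent is established either by direct proof
    # of the purpose, or by the foreseeability presumption, i.e. proof that the
    # offender subjectively foresaw the realization of the purpose as highly
    # probable.
    return direct_intent_proven or high_subjective_foreseeability

# The shooter who denies intent, but knows that shooting at a person creates a
# very high probability of death, is presumed to have intended the death.
assert specific_intent_established(False, True)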
If the specific intent is conditional, it is no different from specific intent to complete the offense; in other words, conditional specific intent in attempts

9 RG 16, 133; RG 65, 145; RG 70, 201; RG 71, 53; BGH 12, 306; BGH 21, 14; Mohan, [1976] Q.B. 1, [1975] 2 All E.R. 193, [1975] 2 W.L.R. 859, 60 Cr. App. Rep. 272, [1975] R.T.R. 337, 139 J.P. 523; State v. Ayer, 136 N.H. 191, 612 A.2d 923 (1992); State v. Smith, 170 Wis.2d 701, 490 N.W.2d 40 (App.1992); United States v. Dworken, 855 F.2d 12 (1st Cir.1988); Braxton v. United States, 500 U.S. 344, 111 S.Ct. 1854, 114 L.Ed.2d 385 (1991); United States v. Gracidas-Ulibarry, 231 F.3d 1188 (9th Cir.2000); Commonwealth v. Ware, 375 Mass. 118, 375 N.E.2d 1183 (1978).
10 People v. Harris, 72 Ill.2d 16, 17 Ill.Dec. 838, 377 N.E.2d 28 (1978); State v. Butler, 322 So.2d 189 (La.1975); State v. Earp, 319 Md. 156, 571 A.2d 1227 (1990); Flanagan v. State, 675 S.W.2d 734 (Tex.Crim.App.1982); Smallwood v. State, 106 Md.App. 1, 661 A.2d 747 (1995); Woollin, [1999] A.C. 82, [1998] 4 All E.R. 103, [1998] 3 W.L.R. 382, [1998] Crim. L.R. 890; Pearman, (1984) 80 Cr. App. Rep. 259; Mohan, [1976] Q.B. 1, [1975] 2 All E.R. 193, [1975] 2 W.L.R. 859, 60 Cr. App. Rep. 272, [1975] R.T.R. 337, 139 J.P. 523.

functions as specific intent.11 For example, A is afraid that his car will be stolen. He attaches a battery to the car so that any burglar who touches it will be electrocuted and die. B attempts to break into the car, receives an electric shock, and survives. A argues that he did not intend to kill anyone. Indeed, he did not want anyone to break into his car, and therefore had no specific intent to complete the homicide offense. A had a conditional intent whereby anyone attempting to break into the car would be electrocuted and killed. The condition having been met, the specific intent to complete the offense is presumed to exist.12
The specific intent reflects the purposefulness of the criminal attempt quite
effectively. But specific intent relates to the unrealized purpose, not to the factual
element components. During the commission of the attempt, some of the factual
element components of the offense may be present. The question is what should the
mental state of the attempter’s mind be in relation to these components. This mental
state should be such that it can support the specific intent to carry out the purpose.
In general, specific intent is supported by general intent alone. The volitive basis
for specific intent is a fully aware will. Thus, for the will to be considered specific
intent the offender should be aware of it. Will that the offender is not aware of is
impulse or reflex. If he is not aware, the individual has no ability to activate his
internal resistance mechanisms, and the will turns into irresistible impulse. In most
legal systems, irresistible impulse is not an adequate basis for criminal liability.13
The only form of mental element that requires awareness is general intent.
Specific intent can be accompanied only by general intent. The criminal attempt
can therefore be classified as a general intent offense, which includes specific intent
in addition to the “regular” components of general intent. Structurally, general
intent components relate to existing factual element components (e.g., awareness of
the circumstances). This structure is relevant for both object-offenses and deriva-
tive criminal liability. Therefore, in addition to the specific intent, the mental

11 Bentham, [1973] 1 Q.B. 357, [1972] 3 All E.R. 271, [1972] 3 W.L.R. 398, 56 Cr. App. Rep. 618, 136 J.P. 761; Harvick v. State, 49 Ark. 514, 6 S.W. 19 (1887); People v. Connors, 253 Ill. 266, 97 N.E. 643 (1912); Commonwealth v. Richards, 363 Mass. 299, 293 N.E.2d 854 (1973); State v. Simonson, 298 Minn. 235, 214 N.W.2d 679 (1974); People v. Vandelinder, 192 Mich. App. 447, 481 N.W.2d 787 (1992).
12 Husseyn, (1977) 67 Cr. App. Rep. 131; Walkington, [1979] 2 All E.R. 716, [1979] 1 W.L.R. 1169, 68 Cr. App. Rep. 427, 143 J.P. 542; Haughton v. Smith, [1975] A.C. 476, [1973] 3 All E.R. 1109, [1974] 3 W.L.R. 1, 58 Cr. App. Rep. 198, 138 J.P. 31; Easom, [1971] 2 Q.B. 315, [1971] 2 All E.R. 945, [1971] 3 W.L.R. 82, 55 Cr. App. Rep. 410, 135 J.P. 477.
13 George E. Dix, Criminal Responsibility and Mental Impairment in American Criminal Law: Responses to the Hinckley Acquittal in Historical Perspective, 1 LAW AND MENTAL HEALTH: INTERNATIONAL PERSPECTIVES 1, 7 (Weisstub ed., 1986); State v. Hartley, 90 N.M. 488, 565 P.2d 658 (1977); Vann v. Commonwealth, 35 Va.App. 304, 544 S.E.2d 879 (2001); State v. Carney, 347 N.W.2d 668 (Iowa 1984); ISAAC RAY, THE MEDICAL JURISPRUDENCE OF INSANITY 263 (1838); FORBES WINSLOW, THE PLEA OF INSANITY IN CRIMINAL CASES 74 (1843); SHELDON S. GLUECK, MENTAL DISORDERS AND THE CRIMINAL LAW 153, 236–237 (1925); Edwin R. Keedy, Irresistible Impulse as a Defense in the Criminal Law, 100 U. PA. L. REV. 956, 961 (1952); Oxford, (1840) 9 Car. & P. 525, 173 Eng. Rep. 941; Burton, (1863) 3 F. & F. 772, 176 Eng. Rep. 354.

element requirement of the attempt consists of general intent in relation to the existing factual element components.14
For example, in most legal systems the offense of injury requires “recklessness”,
which consists of:

(a) awareness of the conduct, of the circumstances, and of the possibility of the occurrence of the results (this is the cognitive aspect of recklessness); and
(b) recklessness (indifference or rashness) in relation to the results (this is the volitive aspect of recklessness).

If the attempted injury lacks the result component (the victim was not injured),
the mental element requirement consists of specific intent to injure the victim and of
general intent components in relation to the existing factual element components
(awareness of the conduct and of the circumstances). No additional mental element component is required in relation to the results because these have not occurred.
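The two-part, cumulative requirement described for attempted injury can be restated schematically. The following Python sketch is a hypothetical illustration; the field names are assumptions, and the check mirrors only the structure described in the text:

from dataclasses import dataclass

@dataclass
class AttemptMentalState:
    # Hypothetical record of the attempter's state of mind.
    intends_to_complete_offense: bool  # specific intent to complete the offense
    aware_of_conduct: bool             # general intent components relating to the
    aware_of_circumstances: bool       # existing factual element components

def attempt_mental_element_met(state: AttemptMentalState) -> bool:
    # Both parts are cumulative; no component relates to results
    # that have not occurred.
    return (state.intends_to_complete_offense
            and state.aware_of_conduct
            and state.aware_of_circumstances)

# Attempted injury: the attacker intends to injure and is aware of his conduct
# and of the circumstances, although no injury in fact occurred.
assert attempt_mental_element_met(AttemptMentalState(True, True, True))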
All general intent components have substitutes that can facilitate their proof in
court. All the substitutes that are relevant to object-offenses are also relevant in
derivative criminal liability forms, including attempt. Therefore, awareness of the
conduct and circumstances may be proven by the willful blindness presumption,
awareness of the possibility of the occurrence of the results may be proven by the
awareness presumption, and all volitive components may be proven by the foresee-
ability presumption.
The combined mental element requirement of the attempt is not part of the
mental element requirement of the object-offense. These mental elements may
include different requirements. For example, the offense of injury requires reck-
lessness, whereas attempted injury requires specific intent. This difference may be
explained by the interaction between the specificity range of the factual element and
the adjustment range of the mental element. Because the factual element of the
attempted offense is characterized by the absence of some of the components
relative to the complete offense, the mental element “compensates” for this absence
through a higher level requirement. This compensation is the expression of the
maxim that the will stands for the act (voluntas reputabitur pro facto).
The factual and mental elements of joint-perpetration are derived from the
object-offense. The mental element requirement of joint-perpetration may be defined as follows: all general intent components of the object-offense must be covered by each of the joint-perpetrators. The mental element requirement of joint-perpetration is
significantly different from the factual element requirement. Because the factual
element requirement is affected by the collective conduct concept, collective

14 Pigg, [1982] 2 All E.R. 591, [1982] 1 W.L.R. 762, 74 Cr. App. Rep. 352, 146 J.P. 298; Khan, [1990] 2 All E.R. 783, [1990] 1 W.L.R. 813, 91 Cr. App. Rep. 29, 154 J.P. 805; G.R. Sullivan, Intent, Subjective Recklessness and Culpability, 12 OXFORD J. LEGAL STUD. 380 (1992); John E. Stannard, Making Up for the Missing Element: A Sideways Look at Attempts, 7 LEGAL STUD. 194 (1987); J.C. Smith, Two Problems in Criminal Attempts, 70 HARV. L. REV. 422 (1957).
fulfillment of the factual element requirement by the joint-perpetration as one body is satisfactory.
Not every one of the joint-perpetrators is required to account for all the factual
element components, only the joint-perpetration as one body, regardless of the
internal division of functions. With regard to the mental element, however, there is
no parallel “collective awareness concept” or some other concept of collective
mental element. Every one of the joint-perpetrators must fully meet the mental
element requirement.15
Thus, if one party does not fully meet the mental element requirement of the
object-offense, regardless of his factual role in the enterprise, no criminal liability
can be imposed on him for joint-perpetration. Although the factual element in the
joint-perpetration is examined collectively, as one body, each of the parties is
examined separately as to the mental element. The reason for this requirement
has to do with the very essence of the joint-perpetration. The delinquent group acts
as one body to commit the object-offense, and to this end all members of the group
require coordination by the criminal plan. The members of the group act as if they
were the long arms of the unified body in order to commit the offense.
The required coordination is part of the essence of joint-perpetration. The
criminal plan (designed by the conspirators), coordination between the joint-
perpetrators, and their synchronization with the criminal plan are factors that
distinguish joint-perpetration from multi-principal perpetration. When two
offenders commit the offense with no coordination between them and not in
accordance with a common criminal plan, it is not joint-perpetration but multi-
principal perpetration. For joint-perpetration to occur, the perpetrators must act
jointly, and for various individuals to act jointly with one purpose they must be
aware of their cooperation, of their joint activity, and of the criminal plan. All
individuals must be aware of these factors.
As a result, the mental element requirement of the object-offense is mandatory
for each of the joint-perpetrators separately. The mental element of each joint-
perpetrator must relate to all factual element components of the offense, regardless
of the factual role played by the joint-perpetrator. For example, A and B are joint-
perpetrators of an offense that includes two factual components: A committed
component 1 and B committed component 2, so that the factual element require-
ment is satisfied because the perpetrators cover all factual element components as
one body. But both A and B must cover the mental element components relating to
both factual components. This does not represent an overlap because culpability is
subjective and individual, whereas factual elements are external and may be
common.
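The asymmetry between the collective examination of the factual element and the individual examination of the mental element may be sketched as follows. The Python fragment below is a hypothetical illustration only; the data structures and names are assumptions made for the example:

def joint_perpetration_liability(required_components: set,
                                 components_by_party: dict,
                                 mental_element_met: dict) -> dict:
    # The factual element is examined collectively: the parties, as one body,
    # must cover all required components. The mental element is examined
    # individually: each party must meet it in full.
    covered = set().union(*components_by_party.values())
    factual_ok = required_components <= covered
    return {party: factual_ok and mental_element_met[party]
            for party in components_by_party}

# A committed component 1 and B committed component 2; each must individually
# hold the full mental element of the object-offense.
print(joint_perpetration_liability(
    {"component_1", "component_2"},
    {"A": {"component_1"}, "B": {"component_2"}},
    {"A": True, "B": False}))  # {'A': True, 'B': False}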
Therefore, the joint-perpetrators can share a common factual element, but not a
mental element, which remains individual. Because the joint-perpetrators are
required to be aware of their joint enterprise throughout the conspiracy,

15 See e.g., United States v. Hewitt, 663 F.2d 1381 (11th Cir.1981); State v. Kendrick, 9 N.C.App. 688, 177 S.E.2d 345 (1970).

coordination, and synchronization with the criminal plan, general intent is the only
form of mental element that is sufficient. Thus, the mental element of joint-
perpetration is general intent in relation to the factual element. The specific
components of the required general intent depend upon the mental element require-
ment of the object-offense. For example, the offense of injury requires recklessness
(a cognitive aspect of awareness and a volitive aspect of recklessness). Each of the
joint-perpetrators of the offense is required to show recklessness.
How is the purposefulness expressed in joint-perpetration? Purposefulness
characterizes all forms of derivative criminal liability and may be expressed
through no less than intent or specific intent. But if the mental element requirement
of joint-perpetration is identical with that of the object-offense, and the mental
element requirement of the object-offense may be satisfied by less than intent, how
is purposefulness expressed in joint-perpetration? The answer is simple. The main
factor that makes joint-perpetration joint is participation of the offenders in the
early planning. The early planning is the conspiracy by which the criminal plan
comes into being. In most legal systems conspiracy functions as a specific offense
and as the early planning of joint-perpetration.
To impose criminal liability on conspirators, specific intent is required.16 To
classify the delinquent event as joint-perpetration, it is necessary to prove conspir-
acy. Therefore specific intent of conspiracy is needed in order to impose criminal
liability for joint-perpetration. The specific intent of conspiracy is for the purpose of
committing the offense by executing the criminal plan. This purpose matches the
purposefulness of derivative criminal liability. Thus, although specific intent is not
directly required for the criminal liability of joint-perpetration, it is required for the
classification of the delinquent event as joint-perpetration. This requirement
prevents imposing criminal liability for joint-perpetration for mistakes, incidental
circumstances, or unawareness.
All general intent components have substitutes that may facilitate their proof in
court. These substitutes, which are relevant for the offenses, are also relevant for the
derivative criminal liability forms, including joint-perpetration. Thus, awareness of
conduct and of circumstances may be proven by the willful blindness presumption,
awareness of the possibility of the occurrence of results may be proven by the
awareness presumption, and all volitive components may be proven by the foresee-
ability presumption.

16 Albert J. Harno, Intent in Criminal Conspiracy, 89 U. PA. L. REV. 624 (1941); United States v. Childress, 58 F.3d 693 (D.C.Cir.1995); Bolton, (1991) 94 Cr. App. Rep. 74, 156 J.P. 138; Anderson, [1986] 1 A.C. 27, [1985] 2 All E.R. 961, [1985] 3 W.L.R. 268, 81 Cr. App. Rep. 253; Liangsiriprasert v. United States Government, [1991] 1 A.C. 225, [1990] 2 All E.R. 866, [1990] 3 W.L.R. 606, 92 Cr. App. Rep. 77; Siracusa, (1989) 90 Cr. App. Rep. 340. For the purposefulness of conspiracy see e.g., Blamires Transport Services Ltd. [1964] 1 Q.B. 278, [1963] 3 All E.R. 170, [1963] 3 W.L.R. 496, 61 L.G.R. 594, 127 J.P. 519, 47 Cr. App. Rep. 272; Welham v. Director of Public Prosecutions, [1961] A.C. 103, [1960] 1 All E.R. 805, [1960] 2 W.L.R. 669, 44 Cr. App. Rep. 124; Barnett, [1951] 2 K.B. 425, [1951] 1 All E.R. 917, 49 L.G.R. 401, 115 J.P. 305, 35 Cr. App. Rep. 37, [1951] W.N. 214; West, [1948] 1 K.B. 709, [1948] 1 All E.R. 718, 46 L.G.R. 325, 112 J.P. 222, 32 Cr. App. Rep. 152, [1948] W.N. 136.

Because the factual element of each joint-perpetrator may be characterized by the absence of some components relative to the complete offense, the mental
element may “compensate” for this absence through higher-level requirements.
The compensation is the expression of the maxim that the will stands for the act
(voluntas reputabitur pro facto). The conduct of some of the joint-perpetrators may
be inaction, but their active relation to the delinquent event is expressed by their
participation in the early planning (conspiracy) and by their mental relation to the
event, which is embodied in the mental element.
The factual and mental elements of perpetration-through-another are derived
from the object-offense. The mental element requirement of perpetration-through-another may be defined as follows: all general intent components of the object-offense must be covered by the perpetrator-through-another. The mental element requirement of
perpetration-through-another is significantly different from the factual element
requirement. Because the factual element requirement is affected by the collective
conduct concept, the collective fulfillment of the factual element requirement by the
perpetration-through-another as one body is satisfactory.
Not every one of the parties (the perpetrator-through-another and the other
person) is required to account for all factual element components, only the
perpetration-through-another as one body, regardless of the internal division of
functions. With regard to the mental element, however, there is no parallel “collec-
tive awareness concept” or some other concept of collective mental element. The
perpetrator-through-another (not the other person) is required to fully meet the
mental element requirement.17
Thus, if the perpetrator-through-another does not fully meet the mental element
requirement of the object-offense, regardless of his factual role in the enterprise, no
criminal liability can be imposed on him for perpetration-through-another.
Although the factual element in perpetration-through-another is examined collec-
tively, as one body, each of the parties is examined separately as to the mental
element. The reason for this requirement has to do with the very essence of
perpetration-through-another, in which the other person is used instrumentally by
the perpetrator-through-another, as if the other person were a mere instrument
without the ability to make a free choice in full awareness to commit the offense.
In this case, the person being instrumentally used is not expected to form a
positive mental element with respect to the commission of the offense. From the
point of view of the perpetrator-through-another, he himself is responsible for the
commission of the offense and not the other person. Similarly, if the offender robs a
bank using a gun, it would be unnecessary to examine the mental relation of the gun
to the commission of the offense, since the offender alone is responsible for the
commission of the offense. Because in the case of perpetration-through-another
there is no functional difference between the gun and the other person, the mental

17 United States v. Tobon-Builes, 706 F.2d 1092 (11th Cir.1983); United States v. Ruffin, 613 F.2d 408 (2nd Cir.1979).

relation of the other person to the offense is immaterial for the criminal liability of
perpetration-through-another.
The other person’s mental relation to the commission of the offense is significant
for his own criminal liability, if any. If the other person functions as a “semi-
innocent agent,” he may be criminally liable for negligence offenses associated
with the same factual element. But the other person’s criminal liability, if any, does
not affect the criminal liability of the perpetrator-through-another for the commis-
sion of the object-offense. Regardless of the factual-physical role, if any, of the
perpetrator-through-another, he must consolidate the mental element of the
object-offense.
For example, a surgical nurse wishes to kill a patient and pollutes the surgical
instruments with lethal bacteria. After the surgery the patient dies as a result of the
infection caused by the bacteria. Given that the surgeon was not aware of the
polluted instruments, the nurse used the surgeon instrumentally to kill the patient.
But because it was the surgeon's duty to make sure the instruments were sterilized, the surgery was performed negligently (negligence does not
require awareness). The nurse is criminally liable for perpetration-through-another
of murder, and the surgeon is criminally liable for negligent homicide. The
surgeon’s criminal liability does not affect the nurse’s criminal liability as
perpetrator-through-another of murder.
In perpetration-through-another, the perpetrators act as one body to commit the
object-offense. The perpetrator-through-another must, therefore, act according to
the criminal plan, which includes the instrumental use of the other person. The other
person functions as the long arms of the perpetrator-through-another in order to
commit the offense. Instrumental use of another person may be accidental or
negligent, but instrumental use in accordance with a criminal plan requires at
least awareness of both the criminal plan and of the instrumental use. Consequently,
the perpetrator-through-another must be aware of these two factors as part of his
mental relation to the delinquent event.
Thus, the mental element requirement of the object-offense is mandatory for the
perpetrator-through-another and it should relate to all factual element components
of the offense, regardless of the factual role of the perpetrator-through-another
within the specific delinquent enterprise. For example, A instrumentally uses B for
the commission of an offense, which includes two factual components: A committed
component 1 and B committed component 2. Thus, the factual element requirement
is satisfied because as one body all factual element components are present. A,
however, must also cover the mental element components relating to both factual
components. B’s mental element, if any, is immaterial for A’s criminal liability.
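The same point may be put schematically: the factual element is fulfilled by the perpetrator and the instrumentally used person as one body, while only the perpetrator's mental element is examined. The following Python sketch is a hypothetical illustration; the parameter names are assumptions made for the example:

def perpetration_through_another_liable(required_components: set,
                                        perpetrator_components: set,
                                        other_person_components: set,
                                        perpetrator_mental_element_met: bool) -> bool:
    # The factual element is examined collectively, as one body; the other
    # person's mental state is immaterial at this stage and therefore does not
    # appear among the parameters.
    factual_ok = required_components <= (perpetrator_components
                                         | other_person_components)
    return factual_ok and perpetrator_mental_element_met

# A instrumentally uses B: A commits component 1, B commits component 2,
# and only A's mental element is examined.
assert perpetration_through_another_liable(
    {"component_1", "component_2"}, {"component_1"}, {"component_2"}, True)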
Because the perpetrator-through-another must be aware of the criminal plan and
of the instrumental use of the other person in accordance with the criminal plan, general intent
is the only form of mental element that is sufficient. Thus, the mental element of
perpetration-through-another is general intent in relation to the factual element. The
specific components of the required general intent depend upon the mental element
requirement of the object-offense. For example, the offense of injury requires
recklessness (a cognitive aspect of awareness and a volitive aspect of recklessness).
The perpetrator-through-another of the offense is required to show recklessness.
How is purposefulness expressed in perpetration-through-another? Purpose-
fulness characterizes all forms of derivative criminal liability and may be expressed
through no less than intent or specific intent. But if the mental element requirement
of perpetration-through-another is identical with that of the object-offense, and the
mental element requirement of the object-offense may be satisfied by less than
intent, how is purposefulness expressed in perpetration-through-another? The main
factor that makes the perpetration-through-another “through another” is the instru-
mental use of the other person in accordance with the criminal plan. This instru-
mental use is by nature purposeful, aimed at committing the offense through the
other person.
This purpose matches the purposefulness of derivative criminal liability.
Although specific intent is not directly required for the criminal liability of perpe-
tration-through-another, it is required for the classification of the delinquent event
as perpetration-through-another. This requirement prevents imposing criminal lia-
bility for perpetration-through-another for mistakes, incidental circumstances, or
unawareness of instrumental use.
All general intent components have substitutes that may facilitate their proof in
court. These substitutes, which are relevant for the offenses, are also relevant for the
derivative criminal liability forms, including perpetration-through-another. Thus,
awareness of conduct and of circumstances may be proven by the willful blindness
presumption, awareness of the possibility of the occurrence of results may be
proven by the awareness presumption, and all volitive components may be proven
by the foreseeability presumption.
Because the factual element of the perpetrator-through-another may be
characterized by the absence of some components relative to the complete offense,
the mental element may “compensate” for this absence through higher-level
requirements. The compensation is the expression of the maxim that the will stands
for the act (voluntas reputabitur pro facto). The conduct of the ultimate perpetrator-
through-another may be inaction, but his active relation to the delinquent event is
expressed by his participation in the early planning, the instrumental use of the
other person as part of the execution of the criminal plan, and the mental relation to
the event, which is embodied in the mental element.
The factual and mental elements of incitement are not derived from the object-
offense. The mental element requirement of incitement may be defined as intent to
cause the incited person to make a free decision to commit an offense accompanied
by general intent components related to the factual element components of the
incitement. The mental element requirement of the incitement is independent of the
object-offense, and the mental element components of incitement may be different
from those of the object-offense.18

18 People v. Miley, 158 Cal.App.3d 25, 204 Cal.Rptr. 347 (1984).

For example, the offense of injury requires recklessness (the cognitive aspect of
awareness and the volitive aspect of recklessness). But incitement to commit injury
requires intent (the cognitive aspect of awareness and the volitive aspect of intent).
The components of the mental element of incitement are identical with those of any
result offense that requires intent. This analysis is required because the factual
element of incitement requires a result component.
Although the default volitive requirement of result offenses is recklessness,
incitement has a volitive requirement of intent. In most legal systems this require-
ment is explicitly included in the definition of incitement, but in other legal systems
the incitement is interpreted as requiring intent.19 The reason for requiring intent is
the general characteristic of purposefulness, which characterizes all forms of
derivative criminal liability, including incitement. The purpose of incitement is to
cause the incited person to make a free decision, with full awareness, to commit the
offense. Recklessness cannot support such a level of will, but intent can.
Intent and specific intent embody the highest-level will accepted by criminal
law. Some legal systems make a substantive distinction between the terms “specific
intent” and “general intent,” the latter relating to a broader sense of general intent.
Other legal systems make a structural distinction between specific intent, which
relates to purposes and motives, and "intent," which has to do with the occurrence
of results. Thus, the more accurate term for incitement would be intent because it
relates to the factual component of the results and not to purposes, which are
beyond the factual element.
The mental element required for incitement is general intent because only
general intent can support intent. The intent to cause the incited person to make a
free-choice decision in accordance with the inciter’s criminal plan requires aware-
ness of the plan and of its aim. It is possible that negligent acts could also cause a
person to commit an offense, but negligent acts are not sufficient to be considered
incitement. The inciter is considered as such only if the incitement is the factual

19 See e.g., article 26 of the German penal code, which provides:

Als Anstifter wird gleich einem Täter bestraft, wer vorsätzlich einen anderen zu dessen vorsätzlich begangener rechtswidriger Tat bestimmt hat;

Last part of article 121-7 of the French penal code, which provides:

Est également complice la personne qui par don, promesse, menace, ordre, abus d’autorité ou de pouvoir aura provoqué à une infraction ou donné des instructions pour la commettre;

And article 5.02(1) of THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT AND EXPLANATORY NOTES 76 (1962, 1985), which provides:

A person is guilty of solicitation to commit a crime if with the purpose of promoting or facilitating its commission he commands, encourages or requests another person to engage in specific conduct that would constitute such crime or an attempt to commit such crime or would establish his complicity in its commission or attempted commission.

expression of the execution of the criminal plan to cause the incited person to make a
free decision to commit the offense.
The factual and mental elements of accessoryship are not derived from the
object-offense. The mental element requirement of accessoryship may be defined
as specific intent to render assistance to the perpetration of an offense accompanied
by general intent components related to the factual element components of the
accessoryship. The mental element requirement of accessoryship is independent of
that of the object-offense and may be different from it.20
For example, the offense of manslaughter requires recklessness (a cognitive
aspect of awareness and a volitive aspect of recklessness). But accessoryship to
manslaughter requires specific intent (a cognitive aspect of awareness and a volitive
aspect of specific intent). The components of the mental element of accessoryship
are identical with the mental element components of any conduct offense that
requires specific intent. This analysis is required because the factual element of
the accessoryship requires no result component.
In most legal systems the specific intent requirement is explicitly included in the
definition of accessoryship, but in some legal systems accessoryship is interpreted
as requiring intent.21 The reason for requiring specific intent is the general charac-
teristic of purposefulness, which characterizes all forms of derivative criminal
liability, including accessoryship. The purpose of the accessoryship is to render
assistance to the perpetration and not necessarily the commission of the offense
(by the perpetrator). A mental element of a lower level than specific intent cannot
support the level of will required for this purpose.

20 Lynch v. Director of Public Prosecutions for Northern Ireland, [1975] A.C. 653, [1975] 1 All E.R. 913, [1975] 2 W.L.R. 641, 61 Cr. App. Rep. 6, 139 J.P. 312; Gillick v. West Norfolk and Wisbech Area Health Authority, [1984] Q.B. 589; Janaway v. Salford Health Authority, [1989] 1 A.C. 537, [1988] 3 All E.R. 1079, [1988] 3 W.L.R. 1350, [1989] 1 F.L.R. 155, [1989] Fam. Law 191, 3 B.M.L.R. 137; Gordon, [2004] E.W.C.A. Crim. 961; Rahman, [2007] E.W.C.A. Crim. 342, [2007] 3 All E.R. 396, but compare the American rulings of Mowery v. State, 132 Tex.Cr.R. 408, 105 S.W.2d 239 (1937); United States v. Hewitt, 663 F.2d 1381 (11th Cir.1981); State v. Kendrick, 9 N.C.App. 688, 177 S.E.2d 345 (1970).
21 See e.g., article 27(1) of the German penal code, which provides:

Als Gehilfe wird bestraft, wer vorsätzlich einem anderen zu dessen vorsätzlich begangener rechtswidriger Tat Hilfe geleistet hat;

First part of article 121-7 of the French penal code, which provides:

Est complice d’un crime ou d’un délit la personne qui sciemment, par aide ou assistance, en a facilité la préparation ou la consommation;

Article 8 of the Accessories and Abettors Act, 1861, 24 & 25 Vict. c.94 as amended by the Criminal Law Act, 1977, c.45, s. 65(4), which provides:

Whosoever shall aid, abet, counsel, or procure the commission of any indictable offence, whether the same be an offence at common law or by virtue of any Act passed, shall be liable to be tried, indicted, and punished as a principal offender.

Intent and specific intent embody the highest-level will accepted in criminal law.
Some legal systems make a substantive distinction between the terms “specific
intent” and “general intent,” the latter relating to a broader sense of general intent.
Other legal systems make a structural distinction between specific intent, which
relates to purposes and motives, and "intent," which has to do with the occurrence
of results. Thus, the more accurate term for accessoryship would be specific intent,
because it relates to the purpose of rendering assistance to the perpetration, which is
beyond the factual element and not part of it.
The mental element required for accessoryship is general intent because only
general intent can support specific intent. The specific intent to render assistance to
the perpetration according to the accessory’s criminal plan requires awareness of
the plan and of its aim. It is possible that negligent acts could also assist the
perpetrators, but such negligent acts are not sufficient to be considered
accessoryship, as a form of derivative criminal liability. The accessory is consid-
ered as such only if accessoryship is the factual expression of the execution of a
criminal plan to render assistance to the perpetration.22

4.2 General Intent and Artificial Intelligence Systems

4.2.1 Structure of General Intent

Under modern criminal law of most legal systems, general intent (mens rea)
expresses the basic type of mental element, since it embodies the idea of culpability
most effectively. This is the only mental element which enables the combination of
both cognition and volition. The general intent requirement expresses the internal-
subjective relation of the offender to the physical commission of the offense.23 In
most legal systems the general intent requirement functions as the default option of
the mental element requirement.
Therefore, unless negligence or strict liability is explicitly required as the mental element of the specific offense, general intent is the required mental element. This default option is also known as the presumption of mens rea.24 Accordingly, all offenses are presumed to require general intent, unless explicitly provided otherwise. Since general intent is the highest level of mental element requirement, this presumption is very significant. Consequently, most offenses in criminal law do indeed require general intent rather than negligence or strict liability. All

22 State v. Harrison, 178 Conn. 689, 425 A.2d 111 (1979); State v. Gerbe, 461 S.W.2d 265 (Mo.1970).
23 JEROME HALL, GENERAL PRINCIPLES OF CRIMINAL LAW 70–77 (2nd ed., 1960, 2005); DAVID ORMEROD, SMITH & HOGAN CRIMINAL LAW 91–92 (11th ed., 2005); G., [2003] U.K.H.L. 50, [2004] 1 A.C. 1034, [2003] 3 W.L.R. 1060, [2003] 4 All E.R. 765, [2004] 1 Cr. App. Rep. 21, (2003) 167 J.P. 621, [2004] Crim. L. R. 369.
24 Sweet v. Parsley, [1970] A.C. 132, [1969] 1 All E.R. 347, [1969] 2 W.L.R. 470, 133 J.P. 188, 53 Cr. App. Rep. 221, 209 E.G. 703, [1969] E.G.D. 123.

mental element components, including general intent components, are not independent and do not stand alone by themselves.
For instance, the dominant component of general intent is awareness. If the offender is required to be aware, the question is: "aware of what?", since awareness cannot stand alone; otherwise, it would be meaningless. Consequently, all mental element components must relate to facts or to some factual reality. The relevant factual aspect for criminal liability is, of course, the set of factual element components (conduct, circumstances, and results). Factual reality contains many more facts than these components of the factual element, but all other facts are irrelevant for the imposition of criminal liability.
For example, in rape the relevant facts are "having sexual intercourse with a woman without consent".25 The rapist is required to be aware of these facts. Whether the offender was aware of other facts as well (e.g., the color of the woman's eyes, her pregnancy, her suffering, etc.) is immaterial for the imposition of criminal liability. Thus, for the question of imposition of criminal liability, the object of the mental element requirement is nothing but the factual element components. Of course, this object is much narrower than the whole factual reality, but the factual element represents society's decision on what is relevant for criminal liability and what is not.
However, the other facts and the mental relation to them may affect the punishment, although they are insignificant for the imposition of criminal liability. For instance, a rapist who raped the victim in a very cruel way would be convicted of rape whether he was cruel or not, but his punishment is very likely to be much harsher than that of a less cruel rapist. Identifying the factual element components as the object of the general intent components is the basis for the structure of general intent.
General intent has two layers of requirement:

(a) cognition; and
(b) volition.

The layer of cognition consists of awareness. Some legal systems use the term
“knowledge” to express the layer of cognition, but it seems that awareness is more
accurate. However, both awareness and knowledge function in the same way and have the same meaning in this context. A person is capable of being aware only of facts which occurred in the past or are occurring at present, but is not capable of being aware of future facts.
For instance, a person can be aware of the fact that A ate his ice-cream 2 min ago, and he can be aware of the fact that B is eating his ice-cream right now. C said that he intends to eat his ice-cream; therefore most persons can predict it, foresee it, or estimate the probability that it will happen, but no person is capable of being aware of it, simply because it has not occurred yet. If the criminal law required

25 See, e.g., State v. Dubina, 164 Conn. 95, 318 A.2d 95 (1972); State v. Bono, 128 N.J.Super. 254, 319 A.2d 762 (1974); State v. Fletcher, 322 N.C. 415, 368 S.E.2d 633 (1988).

offenders to be aware of future facts, it would, in fact, require prophecy skills. In this context, the offender's point of view regarding time is the point at which the conduct is actually performed.
Therefore, the conduct component always occurs at present from the offender's point of view. Consequently, awareness is a relevant component of general intent in
relation to conduct. Circumstances are defined as factual data that describe the conduct but do not derive from it. In order to describe the current conduct, circumstances must exist at present as well. For instance, the circumstance "with a woman" in the specific offense of rape, described above, must exist simultaneously with the conduct "having sexual intercourse". The raped woman must be a woman during the commission of the offense for the circumstance to be fulfilled. Consequently, awareness is a relevant component of general intent in relation to circumstances as well. However, things are different in relation to the results.
Results are defined as a factual component that derives from the conduct. In order to derive from the conduct, results must occur later than the conduct. Otherwise, the conduct would not be the cause of the results. For instance, B dies at 11:00:00, and A shoots him at 11:00:10. In this case, it is obvious that the conduct (the shot) is not the cause of the other factual event (B's death), which therefore does not function as "results". From the offender's point of view, since the results occur later than the conduct, and since the offender's point of view regarding time is the point at which the conduct is actually performed, the results occur in the future.
Therefore, since results do not occur at present, awareness is not relevant in relation to the results themselves. The offender is not supposed to be aware of the results, which have still not occurred from his point of view. However, although the offender is not capable of being aware of the future results, he is capable of predicting them and assessing their probability of occurrence. These capabilities exist at the time the conduct is actually performed.
For example, A shoots B. At the point the shot is performed, B's death has not occurred yet, but while performing the shot the shooter is aware of the possibility of the occurrence of B's death as a result of the shot. Consequently, awareness is a relevant component of general intent in relation to the possibility of the result's occurrence, and not in relation to the results themselves. The awareness of this possibility is not required to relate to the reasonableness or probability of the result's occurrence. If the offender is aware of the existence of the possibility, whether of high or low probability, that the results may occur from the conduct, this component of general intent is fulfilled.26
The additional layer of general intent is the layer of volition. This layer is additional to cognition and is based upon it. Volition never comes alone, but always as an additional component to awareness. Volition relates
to the offender’s will towards the results of the factual event. In relatively rare
offenses, volition may relate to motives and purposes, beyond the specific factual

26 This component of awareness functions also as the legal causal connection in general intent offenses, but this function has no additional significance in this context.

event, and it is expressed by "specific intent".27 The main question regarding volition is whether, apart from the offender's awareness of the possibility that the results may occur from the conduct, the offender wanted the results to occur. Since these results occur in the future from the offender's point of view, they are the only reasonable object of volition. From the offender's point of view regarding time, the occurrence of both circumstances and conduct has nothing to do with the will.
The raped woman is a woman before, during, and after the rape, regardless of the rapist's will. The sexual intercourse is such at that point of time, regardless of the rapist's will. If the offender argues that the conduct occurred against his will, i.e., that the offender did not control it, this argument relates to the general defense of loss of self-control. Consequently, with regard to conduct and circumstances, only awareness is required, and no additional volition component is required.
In factual reality there are many levels of will, but criminal law accepts only three of them:

(a) intent (and specific intent);
(b) indifference; and
(c) rashness.

The first represents positive will (the offender wanted the results to occur), the
second represents nullity (the offender was indifferent to the occurrence of the
results), and the third is negative will (the offender did not want the results to occur,
but took an unreasonable risk which caused them to occur).
For example, in homicide offenses, at the moment in which the conduct is committed, if the offender:

(a) wants the victim's death, it is intent (or specific intent);
(b) is indifferent as to the victim's death, it is indifference;
(c) does not want the victim's death, but undertakes an unreasonable risk in this regard, it is rashness.

Intent is the highest level of will accepted by criminal law. Intended homicide is
considered murder in most countries. Indifference is an intermediate level, and rashness is the lowest level of will. Both indifference and rashness are known as "recklessness". Reckless homicide is considered manslaughter in most countries. Consequently, if the specific offense requires recklessness, this requirement may be fulfilled through proof of intent, since a higher level of will covers lower levels.
However, if the specific offense requires intent or specific intent, this requirement
may be fulfilled only through intent or specific intent.
Summing up the structure of general intent is much easier if offenses are divided into conduct-offenses and result-offenses. Conduct-offenses are offenses

27 "Specific intent" is sometimes mistakenly referred to as "intent" in order to differentiate it from "general intent", which is generally used to express general intent.

whose factual element requires no results, whereas the factual element of result-offenses does.28 This division eases the understanding of the general intent structure, since volition is required only in relation to the results. Therefore, results require both cognition and volition, whereas conduct and circumstances require only cognition. Thus, in conduct-offenses, whose factual element requirement contains conduct and circumstances, the general intent requirement contains awareness of these components.
In result-offenses, whose factual element requirement contains conduct, circumstances, and results, the general intent requirement contains awareness of the conduct, of the circumstances, and of the possibility of the occurrence of the results. In addition, in relation to the results, the general intent requirement contains intent or recklessness, according to the particular definition of the specific offense. This general structure of general intent is a template which contains terms from the mental terminology (awareness, intent, recklessness, etc.). In order to explore whether artificial intelligence technology is capable of fulfilling the general intent requirement in particular offenses, the definition of these mental terms must be explored.
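Before turning to these definitions, the template just summarized can be restated as a short sketch. The following Python fragment is a hypothetical illustration only; the names and string values are assumptions made for the example, and it models nothing beyond the structure described above:

def general_intent_met(aware_of_conduct: bool,
                       aware_of_circumstances: bool,
                       aware_of_possible_results: bool,
                       volition_toward_results: str,
                       offense_requires_results: bool,
                       required_volition: str = "recklessness") -> bool:
    # Conduct and circumstances require cognition only.
    cognition = aware_of_conduct and aware_of_circumstances
    if not offense_requires_results:
        return cognition
    # Result-offenses add awareness of the possibility of the results...
    cognition = cognition and aware_of_possible_results
    # ...and a volitive component: intent, or recklessness (indifference or
    # rashness), where intent also satisfies a recklessness requirement.
    if required_volition == "intent":
        volition = volition_toward_results == "intent"
    else:
        volition = volition_toward_results in ("intent", "indifference", "rashness")
    return cognition and volition

# A result-offense requiring recklessness (e.g., manslaughter in the text's
# terms) is satisfied by awareness plus indifference toward the results.
assert general_intent_met(True, True, True, "indifference", True)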

4.2.2 Cognition and Artificial Intelligence Technology

Cognition, or in this context the cognitive aspect of general intent, contains awareness, and accordingly the relevant question is whether artificial intelligence technology is capable of consolidating awareness. Since the term "awareness" may have different meanings in different scientific spheres (e.g., psychology, theology, law, etc.), in order to answer this question the term "awareness" should be examined according to its legal meaning.
To be more accurate, awareness should be examined by its legal definition in criminal law. Even if the legal meaning differs from other meanings of this term, since the question relates to criminal liability, only the legal meaning is relevant. Awareness in criminal law is defined as perception by the senses of factual data and its understanding.29 The roots of this definition lie in the psychological understandings of the late nineteenth century. Previously, awareness was identified with consciousness, i.e., the physiological-bodily state of the human mind when the human is awake.
By the end of the nineteenth century psychologists had argued that awareness is a much more complicated state of the human mind.30 In the 1960s the modern concept of the human mind began to consolidate toward the total sum

28 SIR GERALD GORDON, THE CRIMINAL LAW OF SCOTLAND 61 (1st ed., 1967); Treacy v. Director of Public Prosecutions, [1971] A.C. 537, 559, [1971] 1 All E.R. 110, [1971] 2 W.L.R. 112, 55 Cr. App. Rep. 113, 135 J.P. 112.
29 William G. Lycan, Introduction, MIND AND COGNITION 3, 3–13 (William G. Lycan ed., 1990).
30 See, e.g., WILLIAM JAMES, THE PRINCIPLES OF PSYCHOLOGY (1890).

of internal and external stimulations that the individual is aware of at a specific point in time. Consequently, it has been understood that the human mind is not constant, but dynamic and constantly changing. The human mind has been described as a flow of feelings, thoughts, and emotions ("stream of consciousness").31
It has also been understood that the human mind is selective, meaning that humans are capable of focusing their mind on certain stimulations while ignoring others. The ignored stimulations do not enter the human mind at all. If the human mind included all internal and external stimulations, it would not be able to function normally, being too busy paying attention to each of the stimulations. The function of the sensory system of humans, and of any animal, is to absorb the stimulations (light, sound, heat, pressure, etc.) and to transfer them to the brain for processing this factual information.
The information is processed in the brain as an internal process: the factual data is processed from the raw stimulations up to the creation of a relevant general image of the factual data in the human brain. This is, in fact, the process of perception, which is considered one of the basic skills of the human mind. At any given time many stimulations are active. To enable the creation of an organized image of the factual data, the human brain must focus on some of the stimulations and ignore others, as aforesaid. This is done through the process of attention.
The process of attention enables the brain to concentrate on some stimulations while others are ignored. In fact, the other stimulations are not totally ignored; they still exist in the background of the perception process. The nervous system remains sufficiently vigilant to absorb other stimulations even while the attention process is at work.
For instance, A is reading a book and is deeply focused on it. B calls him because he wants to speak with him. The first time B calls, A does not react. When B calls a second time, much louder, A reacts, asks what B wants, and says that he must go to the bathroom. While A is focused on reading the book, many of the existing stimulations (e.g., the sound of his heartbeat, the smells from the kitchen, the pressure on his bladder) are ignored so that he can focus on the book, but his nervous system remains sufficiently vigilant to absorb them.
When B calls the first time, it is merely another sound to be ignored through the process of attention. However, when B calls the second time, the attention process of focusing on the book is interrupted and the other stimulations are absorbed and receive some attention. This is why A suddenly "recalls" that he must go to the bathroom. Perception thus includes not only absorbing stimulations, but also processing them into a relevant general image.
This relevant general image generally creates the meaning of the accumulated stimulations. Processing the factual data into a relevant general image is done through unconscious inference, so awareness of this process is not

31
See, e.g., BERNARD BAARS, IN THE THEATRE OF CONSCIOUSNESS (1997).

required at all.32 However, the results of this process (the relevant general image) are conscious. Thus, whereas the human mind is not conscious of most of the process, it is conscious of its results once the relevant general image is created. As a result, the human mind is considered aware only when the relevant general image is created. This process is the essence of human awareness.
Awareness is the final stage of perception. Perception of factual data by the senses and its understanding culminate in the creation of the relevant general image. The creation of the relevant general image is, in fact, the awareness of the factual data. Thus, for instance, the eyes are not the true sight organ of humans; the human brain is. The eyes function as nothing but sensors which deliver the factual data to the brain. Only when the brain creates the relevant general image is the human considered aware of the relevant sight. Consequently, a human in a vegetative state whose eyes still function is not considered to be seeing (or aware of what he sees), unless the sights are combined into a relevant general image.
As a result, for a human to be considered aware of certain factual data, two cumulative conditions are required:

(a) absorbing the factual data by the senses; and
(b) creating a relevant general image of this data in the brain.

If one of these conditions is missing, the person is not considered to be aware.


Awareness is a binary question: either the offender is aware or he is not; partial awareness is meaningless. The offender may be aware of part of the factual data, i.e., fully aware of some of the data, but he cannot be partly aware of a certain fact.
If the facts were not absorbed or no relevant general image has been created, the offender is considered unaware. Sometimes the term "knowledge" is used to describe the cognitive aspect of general intent, as aforesaid. The question, therefore, is whether there is any difference between "knowledge" and "awareness". When examined functionally, there is no difference between these terms with regard to criminal liability, since they refer to the same idea of cognition.33 Moreover, "knowledge" has sometimes been explicitly defined as "awareness".34
It seems that the more accurate term in this context of the mental element is awareness rather than knowledge. Outside the criminal law context, awareness is more closely related to consciousness than to knowledge. Knowledge is also related to a cognitive process, but it refers more to information than to consciousness. Knowledge represents, perhaps, a deeper cognitive process than awareness. However, in the specific context of the mental element requirement in criminal law,
32
HERMANN VON HELMHOLTZ, THE FACTS OF PERCEPTION (1878).
33
United States v. Youts, 229 F.3d 1312 (10th Cir.2000); State v. Sargent, 156 Vt. 463, 594 A.2d
401 (1991); United States v. Spinney, 65 F.3d 231 (1st Cir.1995); State v. Wyatt, 198 W.Va.
530, 482 S.E.2d 147 (1996); United States v. Wert-Ruiz, 228 F.3d 250 (3rd Cir.2000).
34
United States v. Jewell, 532 F.2d 697 (9th Cir.1976); United States v. Ladish Malting Co.,
135 F.3d 484 (7th Cir.1998).

awareness seems to be the more accurate term. Proving the offender's full awareness in court beyond any reasonable doubt, as required in criminal law, is not an easy task.
Awareness relates to internal processes of the mind, which do not necessarily have external expressions. Therefore, criminal law has developed evidential substitutes for this task. These substitutes are presumptions which, in certain types of situations, presume the existence of awareness. Two major presumptions are recognized in most legal systems:

(a) willful blindness presumption as a substitute of awareness of conduct and


of circumstances; and-
(b) awareness presumption as a substitute of awareness of the possibility of the
results’ occurrence.

Before exploring these presumptions and their relevance to artificial intelligence technology, the capability of artificial intelligence technology to fulfill the cognitive aspect of the general intent requirement should be examined.
Does artificial intelligence technology have the capability of being aware of conduct, circumstances or the possibility of the results' occurrence, in the context of criminal law?35 The process of awareness may be divided into two stages, as aforesaid. The first stage consists of absorbing the factual data by the senses. At this stage the major role is played by the devices used to absorb the factual data. The human devices are organs which have this capability: the eyes are the human sight and light sensors, the ears are sound sensors, etc.
These organs absorb the factual data (sights, lights, sounds, pressure, texture, etc.) and transfer it to the human brain for processing. Artificial intelligence technology has this capability. Equipped with the relevant devices, artificial intelligence technology is capable of absorbing any factual data that may be sensed by any of the five human senses. Cameras absorb sights and lights and transfer the factual data to the processors.36 Microphones do the same for sounds,37 weight sensors for pressure, temperature sensors for temperature, humidity sensors for humidity, etc.
In fact, most advanced technologies offer far more accurate sensors than the parallel human ones. Cameras may absorb light at wavelengths the human eye cannot absorb, and microphones may absorb sound at wavelengths the human ear cannot absorb. How many of us can successfully "guess" the exact temperature and humidity outside merely by standing outdoors for a few moments? Can one guess the temperature to an accuracy of 0.01°? Can one guess the humidity to an accuracy of 0.1 %? Most people cannot.

35
Paul Weiss, On the Impossibility of Artificial Intelligence, 44 REV. METAPHYSICS 335, 340 (1990).
36
See, e.g., TIM MORRIS, COMPUTER VISION AND IMAGE PROCESSING (2004); MILAN SONKA, VACLAV
HLAVAC AND ROGER BOYLE, IMAGE PROCESSING, ANALYSIS, AND MACHINE VISION (2008).
37
WALTER W. SOROKA, ANALOG METHODS IN COMPUTATION AND SIMULATION (1954).

However, even simple technological sensors do not "guess" this factual data; they absorb it very accurately and transfer the information to the relevant processors for processing. Consequently, artificial intelligence technology has the capability of fulfilling the first stage of awareness; in fact, it does so much better than humans do. The second stage of the awareness process is the creation of a relevant general image of this data in the brain (full perception). Of course, most artificial intelligence technologies, robots and computers do not possess biological brains, but they do possess artificial "brains".
Most of these "brains" are embodied in the relevant hardware (processors, disks, etc.) used by the relevant technology. Do these "brains" have the capability of creating a relevant general image out of the absorbed factual data? Humans create the relevant general image through analysis of the factual data, which enables us to use the information, transfer it, integrate it with other information, act according to it, or, in fact, understand it.38
Let us take the example of security robots based on artificial intelligence technology, and proceed step by step. Their task is to identify intruders and either call human forces (police, army) or stop the intruders themselves. The relevant sensors (cameras and microphones) absorb the factual data and pass it to the processors. The processor is supposed to identify the intruder as such, and for this task it analyzes the factual data. It must not confuse an intruder with the state's police officers or soldiers who patrol there. Therefore, it must analyze the factual data to identify the change in sight and sound. It may compare the shape and color of clothes, and use other attributes, to identify that change. This process is very short.
Next it assesses the probabilities. If the probabilities do not yield an accurate identification, it starts a process of vocal identification. The software poses a phrase such as "Identify yourself, please" or "Your password, please", or anything else relevant to the situation. The figure's answer and voice are compared to other sounds in its memory. Now it has adequate factual data to make a decision to act. In fact, this robot has created a relevant general image out of the factual data absorbed by its sensors. The relevant general image enabled it to use the information, transfer it, integrate it with other information, act according to it, or, in fact, understand it.
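The step-by-step process described above may be summarized in a short sketch. The following Python fragment is purely illustrative and assumes hypothetical interfaces (known_profiles, match_probability, request_password); it is not an implementation of any actual security robot, but it shows how absorbed data, comparison with memory and a probability-based decision may be chained together.

```python
# Illustrative sketch only: match_probability, known_profiles and request_password
# are hypothetical placeholders, not a real robot's API.

ID_THRESHOLD = 0.9  # assumed minimum probability for a confident identification


def identify(observation, known_profiles, match_probability):
    """Compare the absorbed factual data (observation) with stored profiles and
    return the best match and its probability - the 'relevant general image'."""
    scores = {name: match_probability(observation, profile)
              for name, profile in known_profiles.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]


def guard_decision(observation, known_profiles, match_probability, request_password):
    name, p = identify(observation, known_profiles, match_probability)
    if p >= ID_THRESHOLD:
        return "authorized: " + name
    # Probabilities too low: fall back to vocal identification, as described above.
    answer = request_password("Identify yourself, please")
    name, p = identify(answer, known_profiles, match_probability)
    if p >= ID_THRESHOLD:
        return "authorized: " + name
    return "intruder: alert human forces"
```

The point of the sketch is only that each stage named in the text (absorption, comparison, probability assessment, decision) corresponds to an explicit, inspectable computational step.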
For comparison, how would a human guard act in this situation? He would probably act the same way. The human guard sees or hears a suspicious figure or sound. For the human it is merely suspicious, whereas the robot examines any change in the current image or sound. This is why robot guards equipped with artificial intelligence technology are preferred: they work much more thoroughly and do not fall asleep while guarding. The human guard searches his memory to determine whether he can identify the figure or the sound as a friend's; the robot compares it to its memory. The robot cannot forget figures; humans can. The human guard is not certain, whereas the robot assesses the probabilities. The human guard shouts for identification, a password, etc., and so does the robot.

38
See, e.g., DAVID MANNERS AND TSUGIO MAKIMOTO, LIVING WITH THE CHIP (1995).

The answer is compared to the existing information in memory by both the human and the robot guard, only more accurately by the robot.
Consequently, the relevant decision is made. The human guard understood the situation, and so did the robot. It may be said that the human guard was aware of the relevant factual data. Can the same not be said of the robot guard? In fact, there is no reason why not. Their internal processes were much the same, except that the robot was more accurate, faster and more thorough. The human guard was aware of the figure or sound he absorbed and acted accordingly, and so was the robot guard.
Some may argue that the human guard may have absorbed additional factual information, such as signs of fear, and that he is capable of filtering out irrelevant information, such as background sounds. This type of argument does not weaken the above analysis. First, artificial intelligence technology is capable of absorbing factual data such as signs of fear. Moreover, as discussed above, humans use the attention process so that they can focus on only part of the factual data; although humans have the capability of absorbing wider factual data, doing so would only disturb their daily lives.
Artificial intelligence technology may be programmed accordingly. If signs of fear are considered irrelevant to guarding tasks, a well-designed artificial intelligence system would not consider this data; if it were human, it would be described as not paying attention to this data. In humans, filtering irrelevant data is done through the attention process running in the background of the mind, whereas artificial intelligence technology may filter it through a process that is not in the background.
The artificial intelligence technology examines all the factual data and eliminates the irrelevant options only after analyzing that data thoroughly. At this point modern society may ask itself who is preferable as a guard: those who unconsciously fail to pay attention to factual data, or those which examine all the factual data thoroughly. As a result, artificial intelligence technology also has the capability of fulfilling the second stage of awareness. Since the two stages analyzed above are the only stages of the awareness process in criminal law, it may be concluded that artificial intelligence technology has the capability of fulfilling the awareness requirement in criminal law.
Some may feel that something is still missing before it can be concluded that machines are capable of awareness. That might be right if awareness were being discussed above in its broader sense, as used in psychology, philosophy, the cognitive sciences, etc. However, criminal law is supposed to examine the criminal liability of artificial intelligence technology, not the wide meanings of cognition in psychology, philosophy, the cognitive sciences, etc. Therefore, the only standards of awareness relevant for examination are the standards of criminal law. All other standards are irrelevant for the assessment of criminal liability, whether imposed upon humans or upon artificial intelligence technology. The criminal law definition of awareness is, indeed, much narrower than the parallel definitions in the
other spheres of knowledge. But this is true not only for the imposition of criminal liability upon artificial intelligence technology, but also for its imposition upon humans.39
As aforesaid, awareness itself is very difficult to prove in court, especially in criminal cases, where it must be proved beyond any reasonable doubt.40 Therefore, criminal law has developed two evidential substitutes for this task:

(a) the willful blindness presumption; and
(b) the awareness presumption.

These presumptions are discussed below.


The willful blindness presumption provides that the offender is presumed to be aware of the conduct and circumstances if he suspected that they existed but did not check that suspicion.41 The rationale of this presumption is that since the offender's "blindness" to the facts is willful, he is considered aware of these facts, although he is not actually aware of them. If the offender had really wanted to avoid committing the offense, he would have checked the facts.
For example, a rapist suspects that the woman does not consent to having sexual intercourse with him. He predicts that if he asks her, she will refuse and there will no longer be any doubt about it. Therefore, he ignores his suspicion and continues. When interrogated, he says that he thought she consented, since no objection was heard from her. The willful blindness presumption equates the unchecked suspicion with full awareness of the relevant conduct or circumstances. The question is whether this presumption is relevant to artificial intelligence technology.
The awareness presumption provides that any human is presumed to be aware of the possibility that the natural results of his conduct will occur.42 The rationale of this presumption is that any human has the basic skills to assess the natural consequences of his conduct. For example, when shooting a person in the head, the shooter is presumed to be able to assess the possibility of death as a natural consequence of the shot. Humans who lack such skills, permanently or temporarily, have the opportunity to refute the presumption. The question is whether this presumption is relevant to artificial intelligence technology.

39
Perhaps, the definitions of awareness in psychology, philosophy, cognitive sciences etc. may be
relevant for the research for thinking machines, but not for the imposition of criminal liability,
which is fed by the definitions of criminal law.
40
In re Winship, 397 U.S. 358, 90 S.Ct. 1068, 25 L.Ed.2d 368 (1970).
41
United States v. Heredia, 483 F.3d 913 (2006); United States v. Ramon-Rodriguez, 492 F.3d
930 (2007); Saik, [2006] U.K.H.L. 18, [2007] 1 A.C. 18; Da Silva, [2006] E.W.C.A. Crim. 1654,
[2006] 4 All E.R. 900, [2006] 2 Cr. App. Rep. 517; Evans v. Bartlam, [1937] A.C. 473, 479, [1937]
2 All E.R. 646; G.R. Sullivan, Knowledge, Belief, and Culpability, CRIMINAL LAW THEORY –
DOCTRINES OF THE GENERAL PART 207, 213–214 (Stephen Shute and A.P. Simester eds., 2005).
42
State v. Pereira, 72 Conn. App. 545, 805 A.2d 787 (2002); Thompson v. United States, 348 F.
Supp.2d 398 (2005); Virgin Islands v. Joyce, 210 F. App. 208 (2006).

Indeed, awareness is difficult to prove with respect to humans. However, since the processes that constitute awareness, in its criminal law context, may be monitored very accurately in artificial intelligence technology, no such substitute is necessary. The situation resembles one in which the human mind of each individual is constantly under a sophisticated brain scanner, and everything is recorded.
If awareness of factual data could be identified through such a brain scanner, proving awareness beyond any reasonable doubt would become a very simple task. Since any act of the artificial intelligence technology may be monitored and recorded, including all the processes that constitute awareness in the context of criminal law, proving the awareness of artificial intelligence technology regarding particular factual data is a feasible and achievable task. Proving the awareness of artificial intelligence technology does not require these substitutes, since awareness may be proven directly. However, even if these substitutes were necessary, they too can be proven with respect to artificial intelligence technology.
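A minimal sketch of the kind of record that makes such direct proof possible is given below. It is purely illustrative and assumes a hypothetical perceive-and-decide loop; the point is only that every absorbed datum, every generated "general image" and every decision can be written to a permanent log and later produced as evidence.

```python
import json
import time

# Illustrative sketch: a hypothetical audit trail for an AI system's perception
# and decision steps, so that "awareness" of particular factual data can be
# shown directly from the record rather than inferred through presumptions.

class DecisionLog:
    def __init__(self, path):
        self.path = path

    def record(self, stage, data):
        entry = {"timestamp": time.time(), "stage": stage, "data": data}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")


log = DecisionLog("audit_trail.jsonl")
log.record("sensor_input", {"camera": "figure detected at gate"})             # stage 1: absorption
log.record("general_image", {"classification": "unknown person", "p": 0.87})  # stage 2: perception
log.record("decision", {"action": "request identification"})
```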
Strong artificial intelligence technologies use probability-assessing algorithms in order to make plausible decisions. For each type of event there is a minimum probability above which the event is considered feasible, probable or "reasonable". This practice is very helpful for applying the awareness substitutes. For humans, not every suspicion is considered adequate for willful blindness; the suspicion must be realistic. Thus, if the probability of the occurrence of a particular factual event is above a specific rate, and the artificial intelligence technology nevertheless ignored that possibility, this might be considered willful blindness.
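The following fragment sketches, under assumed names and an assumed threshold, how such a finding could be read off the system's own records: if an option's assessed probability exceeded the "realistic suspicion" rate yet was excluded from the factors actually weighed, the equivalent of willful blindness can be flagged.

```python
# Illustrative only: REALISTIC_SUSPICION is an assumed threshold, and the two
# arguments stand for data recovered from the system's decision log.

REALISTIC_SUSPICION = 0.3  # assumed minimum probability for a "realistic" suspicion


def willfully_blind(assessed_probabilities, considered_factors):
    """Return the facts whose assessed probability crossed the suspicion
    threshold but which the system nevertheless left out of its decision."""
    return [fact for fact, p in assessed_probabilities.items()
            if p >= REALISTIC_SUSPICION and fact not in considered_factors]


# Example: the system rated "package contains drugs" at 0.6 but decided
# to transport it while weighing only delivery time and route.
ignored = willfully_blind(
    {"package contains drugs": 0.6, "package is fragile": 0.2},
    considered_factors={"delivery time", "route"},
)
print(ignored)  # ['package contains drugs']
```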
The reason why the human did not check the suspicion is immaterial to the applicability of the willful blindness presumption, and the same is true for artificial intelligence technology. Thus, if it is argued that artificial intelligence technology does not ignore the possibility out of evil or out of some kind of concealed desire, as humans may do, the answer is that the reason for the ignorance is immaterial in relation to both humans and machines. As aforesaid, evil is not a component of criminal liability. The reason for ignoring the suspicion (i.e., an option whose probability is higher than a certain rate) is therefore immaterial, whether or not it results from evil.
A natural consequence of certain conduct is an option with a reasonable probability of occurring. Reasonableness may be assessed quantitatively by a machine after weighing the relevant circumstances. Strong artificial intelligence technology has the capability of identifying the options which humans would call "natural". As a result, although proving the awareness of artificial intelligence technology lacks the difficulties of proving human awareness, so that there is no need to use the awareness substitutes, these substitutes may nevertheless be used to prove the awareness of artificial intelligence technology.

4.2.3 Volition and Artificial Intelligence Technology

Volition, or in this context the volitive aspect of general intent, contains three levels of will:

(a) intent;
(b) indifference; and
(c) rashness.

Indifference and rashness are commonly referred to together as recklessness in most modern legal systems.
Accordingly, the relevant question is whether artificial intelligence technology is capable of consolidating these levels of will. Since these terms may have different meanings in different scientific spheres (e.g., psychology, theology, philosophy, law, etc.), this question must be answered by reference to their legal meaning, i.e., their legal definitions in criminal law. Even if the legal meaning differs from other meanings of these terms, since the question relates to criminal liability, only the legal meaning is relevant.
Intent is the highest level of will accepted by the criminal law. Terminologically, there is some confusion among the terms "intent", "general intent" and "specific intent".43 General intent is the common name for the general mental element requirement as a whole. The term "intent" refers to the highest level of will embodied in the volitive aspect of general intent. The term "specific intent" also refers to the highest level of will; the level of will in intent and specific intent is identical. However, intent refers to the will regarding the results component of the factual element, whereas specific intent refers to motives and purposes, not to results.44 Specific intent is relatively rarely required as the mental element of specific offenses.
A purpose is factual data which is supposed to derive from the conduct; the conduct is aimed at achieving the purpose. Purpose-offenses do not require that the purpose actually be achieved, only that the offender intend to achieve the relevant purpose. For instance, "whoever says anything with the purpose of intimidating. . ." defines an offense which does not require that anyone actually be intimidated, only that the offender have such a purpose.
A motive is an internal-subjective feeling which the conduct deriving from it is meant to satisfy. For instance, in "whoever does X out of hatred. . ." the conduct X satisfies the hatred. Unless purpose and motive are explicitly required in the

43
See, e.g., United States v. Doe, 136 F.3d 631 (9th Cir.1998); State v. Audette, 149 Vt. 218, 543
A.2d 1315 (1988); Ricketts v. State, 291 Md. 701, 436 A.2d 906 (1981); State v. Rocker, 52 Haw.
336, 475 P.2d 684 (1970); State v. Hobbs, 252 Iowa 432, 107 N.W.2d 238 (1961); State v. Daniels,
236 La. 998, 109 So.2d 896 (1958).
44
People v. Disimone, 251 Mich.App. 605, 650 N.W.2d 436 (2002); Carter v. United States,
530 U.S. 255, 120 S.Ct. 2159, 147 L.Ed.2d 203 (2000); State v. Neuzil, 589 N.W.2d 708 (Iowa
1999); People v. Henry, 239 Mich.App. 140, 607 N.W.2d 767 (1999); Frey v. United States,
708 So.2d 918 (Fla.1998); United States v. Randolph, 93 F.3d 656 (9th Cir.1996); United States
v. Torres, 977 F.2d 321 (7th Cir.1992).

particular offense, they are insignificant for the imposition of criminal liability.45 For instance, (1) A killed B out of hatred and (2) A killed B out of mercy are both considered murder, since the particular offense of murder does not require specific intent towards certain purposes or motives.
However, if the particular offense had explicitly required specific intent towards purposes or motives, proving the specific intent would be a condition for the imposition of criminal liability for that offense. As aforesaid, the requirement of specific intent in particular offenses is relatively rare. Since the only difference between intent and specific intent relates to their objects, and the level of will in both is identical, the following analysis of intent is relevant to specific intent as well.
Intent is defined as the aware will, accompanying the commission of the conduct, that the results deriving from that conduct will occur. In the definition of specific intent, results are replaced by motives and purposes as follows: the aware will, accompanying the commission of the conduct, that the motive for the conduct be satisfied or that the purpose of the conduct be achieved.
Intent is an expression of positive will, i.e., the will that a factual event occur.46 Although there are higher levels of will than intent (e.g., lust, longing, desire, etc.), intent has been accepted in criminal law as the highest level of will that may be required for the imposition of criminal liability in particular offenses.47 Consequently, no particular offense requires a higher level of will than intent, and if intent is proven, it satisfies all other levels of will.
A distinction should be drawn between aware will and unaware will. Unaware will is an internal urge, impulse or instinct of which the human is not aware. Unaware will is naturally uncontrollable: an individual is incapable of controlling his will unless he is aware of it. Being aware of the will does not guarantee the capability of controlling it, but controlling the will requires being aware of it. Controlling the will requires activating conscious processes in the human mind that may cause the relevant activity to cease, to be initiated or not to be interfered with. Imposing criminal liability on the basis of intent requires the will to be aware so that it is controllable. Intent does not consist of an abstract aware will.
The aware will is required to be focused on certain targets: results, motives or purposes. Intent is an aware will which is focused on these targets. For instance, the murderer's intent is an aware will focused on causing the victim's death. For the intent to be relevant to the imposition of criminal liability, this will must exist simultaneously with the commission of the conduct and accompany it. If one person killed another by mistake and only after the victim's death

45
Schmidt v. United States, 133 F. 257 (9th Cir.1904); State v. Ehlers, 98 N.J.L. 263, 119 A. 15
(1922); United States v. Pomponio, 429 U.S. 10, 97 S.Ct. 22, 50 L.Ed.2d 12 (1976); State v. Gray,
221 Conn. 713, 607 A.2d 391 (1992); State v. Mendoza, 709 A.2d 1030 (R.I.1998).
46
LUDWIG WITTGENSTEIN, PHILOSOPHISCHE UNTERSUCHUNGEN §629–§660 (1953).
47
State v. Ayer, 136 N.H. 191, 612 A.2d 923 (1992); State v. Smith, 170 Wis.2d 701, 490 N.W.2d
40 (App.1992).

formed the will that the victim be dead, that is not intent. Only when the killing is actually committed and is simultaneously accompanied by the relevant will may it be considered intentional.
Proving intent is much more difficult than proving awareness. Although both are internal processes of the human mind, awareness relates to current facts, whereas intent relates to a future factual situation. Awareness is rational and realistic, whereas intent is not necessarily so. For instance, a person may intend to become an elephant, but that person cannot be aware of being an elephant, as he is not one. The deep difficulties in proving intent led criminal law to develop an evidential substitute for this task.
The commonly used substitute is the foreseeability rule (dolus indirectus). The foreseeability rule is a legal presumption intended to prove the existence of intent. It provides that the offender is presumed to intend the occurrence of the results if, during the aware commission of the conduct, the offender foresaw the occurrence of the results as a very high probability option.48 This presumption is also relevant to specific intent, if results are replaced by purpose as the object.49 The rationale of this presumption is that believing the probability that a certain factual event will result from the conduct to be extremely high, and nevertheless committing that conduct, expresses that the offender wanted that factual event to occur.
For example, A holds a loaded gun pointed at B's head. A knows that B's death from being shot in the head is a factual event of very high probability. A nevertheless pulls the trigger. In court A argues that he did not want the death to occur, so that the required component of intent is not fulfilled and he should be acquitted. If the court applies the foreseeability rule presumption, the shooter is presumed to intend the occurrence of the results. Since the shooter assessed the death as a result of very high probability and acted accordingly, he is presumed to have wanted these results to occur.50
Using this presumption in court to prove intent is very common. In fact, unless the defendant explicitly confesses the existence of intent during interrogation, the prosecution would generally prefer to prove intent through this presumption.

48
Studstill v. State, 7 Ga. 2 (1849); Glanville Williams, Oblique Intention, 46 CAMB. L. J.
417 (1987).
49
People v. Smith, 57 Cal. App. 4th 1470, 67 Cal. Rptr. 2d 604 (1997); Wieland v. State, 101 Md.
App. 1, 643 A.2d 446 (1994).
50
Stephen Shute, Knowledge and Belief in the Criminal Law, CRIMINAL LAW THEORY – DOCTRINES
OF THE GENERAL PART 182–187 (Stephen Shute and A.P. Simester eds., 2005); ANTONY KENNY,
WILL, FREEDOM AND POWER 42–43 (1975); JOHN H. SEARLE, THE REDISCOVERY OF MIND 62 (1992);
ANTONY KENNY, WHAT IS FAITH? 30–31 (1992); State v. VanTreese, 198 Iowa 984, 200 N.W. 570
(1924) but see Montgomery v. Commonwealth, 189 Ky. 306, 224 S.W. 878 (1920); State
v. Murphy, 674 P.2d 1220 (Utah.1983); State v. Blakely, 399 N.W.2d 317 (S.D.1987).

The question is whether artificial intelligence technology has the capability of having intent in the context of criminal law.51 Since "will" may be a vague and general term, even in criminal law, the capability of artificial intelligence to have intent should be examined through the foreseeability rule presumption. In fact, this is the core reason for using this presumption to prove human intent. Two conditions must be fulfilled under this rule:

(1) the occurrence of the results has been foreseen as a very high probability option; and
(2) the conduct has been committed under awareness.

Strong artificial intelligence has the capability of assessing the probabilities of the occurrence of factual events, as aforesaid, and of acting accordingly. For instance, chess-playing computers have the capability of analyzing the current state of the game based on the location of the pieces on the board. They run through all possible options for the next move. For each option they run through the possible reactions of the other player; for each reaction they run through all possible counter-reactions, and so on, up to the possible final move which ends with one player's win. Each of the options is assessed for its probability, and accordingly the computer decides on its next move.52
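A toy sketch of this kind of look-ahead is given below. It is not the algorithm of any particular chess engine; it assumes a hypothetical game interface (legal_moves, apply, is_over, score) and simply propagates win estimates back up the tree before choosing the move with the highest estimate.

```python
# Illustrative expectimax-style look-ahead, assuming a hypothetical `game` object
# exposing legal_moves(), apply(move), is_over() and score() in [0, 1],
# where score() is the estimated probability that the program wins.

def win_probability(game, depth, maximizing):
    if depth == 0 or game.is_over():
        return game.score()
    estimates = [win_probability(game.apply(m), depth - 1, not maximizing)
                 for m in game.legal_moves()]
    # The program picks its best option; the opponent's replies are averaged
    # as a rough probability assessment of how the game may continue.
    return max(estimates) if maximizing else sum(estimates) / len(estimates)


def choose_move(game, depth=3):
    # The chosen move is the one whose estimated probability of winning is highest.
    return max(game.legal_moves(),
               key=lambda m: win_probability(game.apply(m), depth - 1, False))
```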
If it were human, it would be said to have an intent to win the game. It would not be known for certain whether it had such intent, but its course of conduct matches the foreseeability rule presumption. Artificial intelligence technology programmed to play chess exhibits goal-driven behavior aimed at winning chess games. Human chess players also exhibit goal-driven behavior aimed at winning chess games. Of the human players it may be said that they have the intent to win chess games, and it seems that the same may be said not only of human players, but of artificial intelligence players as well. The analysis of their course of conduct in the relevant situations matches the foreseeability rule presumption exactly.
Any entity, human or artificial intelligence technology, which examines several options of conduct and makes an aware decision to commit one of them, while assessing the probability that a specific factual event will result from the conduct as high, is considered to foresee the occurrence of that factual event. If it is a chess game, that seems quite normal. However, there is no substantial difference, in this context, between playing chess for the purpose of winning the game and committing any other conduct for the purpose of the results' occurrence. If the results together with the conduct form a criminal offense, the matter enters the sphere of criminal law.

51
See, e.g., Ned Block, What Intuitions About Homunculi Don’t Show, 3 BEHAVIORAL & BRAIN SCI.
425 (1980); Bruce Bridgeman, Brains + Programs = Minds, 3 BEHAVIORAL & BRAIN SCI.
427 (1980).
52
See, e.g., FENG-HSIUNG HSU, BEHIND DEEP BLUE: BUILDING THE COMPUTER THAT DEFEATED THE
WORLD CHESS CHAMPION (2002); DAVID LEVY AND MONTY NEWBORN, HOW COMPUTERS PLAY CHESS
(1991).

Thus, when the relevant program (based on artificial intelligence technology) assesses the probability that a certain factual event (winning a chess game, the death of a human, the injury of a human, etc.) will result from its conduct as very high, and accordingly chooses to commit the relevant conduct (moving a chess piece on the board, pulling a gun's trigger, moving its hydraulic arm towards a human body, etc.), the computer fulfills the conditions required for the foreseeability rule presumption. Consequently, that computer is presumed to intend that the results actually occur.
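Expressed schematically, and only as an illustration of the two conditions named above (awareness of the conduct and foreseeing the result as a very-high-probability option), such a record-based test might look as follows; the threshold and field names are assumptions, not drawn from any statute or existing system.

```python
# Illustrative only: VERY_HIGH is an assumed threshold and the `record` fields
# stand for entries recovered from the system's monitored decision log.

VERY_HIGH = 0.95


def foreseeability_presumption(record, result):
    """Both conditions of the presumption, read off the logged decision:
    (1) the result was foreseen as a very-high-probability option, and
    (2) the conduct was committed with awareness of that option."""
    foreseen = record["assessed_probabilities"].get(result, 0.0) >= VERY_HIGH
    aware_conduct = record["conduct_committed"] and result in record["assessed_probabilities"]
    return foreseen and aware_conduct


record = {
    "assessed_probabilities": {"death of victim": 0.98},
    "conduct_committed": True,
}
print(foreseeability_presumption(record, "death of victim"))  # True -> intent presumed
```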
This is exactly the way a court examines an offender's intent in most cases in which the offender does not confess. It may be asked what is to be considered a very high probability in this context. The answer is identical to the answer in relation to humans; the only difference is that the computer is able to assess the probability more accurately than a human. For instance, a person holds a loaded gun pointed at another's head. As a human, he evaluates the probability that the victim's death will result from pulling the trigger as high, but most humans are incapable of assessing exactly how high. If that entity is a computer, it assesses the exact probability based on the factual data to which it is exposed (e.g., the wind's direction and velocity, the distance of the victim from the gun, the mechanical condition of the gun, etc.). For the presumption to apply, the probability must be assessed as high, and the computer must act accordingly by committing an aware conduct which promotes the relevant factual event.
As aforesaid, artificial intelligence technology has the capability of consolidating awareness of factual data. The commission of the conduct is considered factual data; therefore artificial intelligence technology has the capability to fulfill both conditions required for the foreseeability rule presumption to be proven. This presumption is classified as an absolute legal presumption (praesumptio juris et de jure); therefore, if its conditions are proven, there is no way of refuting its conclusion (the existence of intent). Nevertheless, the above analysis of the foreseeability of artificial intelligence technology requires strong artificial intelligence technology. The more specific requirement is the advanced capability of assessing probabilities as a tool of decision-making. Strong artificial intelligence technologies do have these capabilities. Consequently, these artificial intelligence technologies have the capability to intend in the context of criminal law. Accordingly, artificial intelligence technology has the capability of fulfilling the intent requirement in criminal law whenever it is required. Again, however, some may feel that something is still missing before concluding that machines are capable of intent.
that something is still missing for concluding that machines are capable of intent.
That might be right if intent were being discussed in its broader sense, as used in psychology, philosophy, the cognitive sciences, etc. However, criminal law examines the criminal liability of artificial intelligence technology, not the wide meanings of will and intent in psychology, philosophy, the cognitive sciences, etc. Therefore, the only standards of intent relevant for examination are the standards of criminal law. The other standards are irrelevant for the assessment of criminal liability, whether imposed upon humans or upon artificial intelligence technology. The modern criminal law definition of intent (and foreseeability) is, indeed, much narrower than the parallel definitions in the other spheres of knowledge. But
this is true not only for the imposition of criminal liability upon artificial intelligence technology, but also for its imposition upon humans.53 The actual evidence of an artificial intelligence technology's intent is based on the ability to monitor and record all of the software's activities. Each stage in the consolidation of intent or foreseeability is monitored and recorded as part of the computer's activity, as are the assessment of probabilities and the making of the relevant decisions.
Consequently, there will always be direct evidence for proving an artificial intelligence technology's criminal intent when it is proven through the foreseeability rule presumption. If intent is proven, directly or through the foreseeability rule presumption, all other forms of volition may be proven accordingly. Since recklessness, comprising indifference or rashness, is a lower degree of will, it may be proven through direct proof of recklessness or through proof of intent.
There are no offenses which require recklessness and nothing but recklessness. In general, any requirement in the law represents only the minimum condition for the imposition of criminal liability; the prosecution may choose between proving that requirement or any higher one, but not a lower one. Consequently, specific offenses which require recklessness as their mental element may be satisfied through proof of intent (directly or through the foreseeability rule presumption) or through proof of recklessness.
Thus, if the specific artificial intelligence technology has the foreseeability capability described above, it has the capability of fulfilling the mental element requirements of both intent offenses and recklessness offenses. However, artificial intelligence technology also has the capability of fulfilling the recklessness requirement directly. By analogy, if the capability of intent exists, the capability of recklessness, which is a lower capability, exists as well.
Indifference is the higher level of recklessness, and it consists of aware volitional neutrality towards the occurrence of the factual event. For the indifferent person, the option that the factual event occurs and the option that it does not occur are of equal significance. This has nothing to do with the actual probability that the factual event will occur, but only with the offender's internal volition towards its occurrence.
For instance, A and B are playing Russian roulette.54 At B's turn, A is indifferent to the possibility of B's death; he does not care whether B dies or lives. For indifference to be considered as such in the context of criminal law, it must be aware: the offender is required to be aware of the relevant options and to have no particular preference between them.55 The decision-making process of strong artificial intelligence technology is based on assessing probabilities, as discussed above.

53
The definitions of intent in psychology, philosophy, cognitive sciences etc. may be relevant for
the research for thinking machine, but not for the criminal liability, which is fed by the definitions
of criminal law.
54
Russian roulette is a game of chance in which participants place a single round in a gun, spin the
cylinder, place the muzzle against their head and pull the trigger.
55
G., [2003] U.K.H.L. 50, [2003] 4 All E.R. 765, [2004] 1 Cr. App. Rep. 237, 167 J.P. 621, [2004] Crim. L.R. 369, [2004] 1 A.C. 1034; Victor Tadros, Recklessness and the Duty to Take Care, CRIMINAL LAW THEORY – DOCTRINES OF THE GENERAL PART 227 (Stephen Shute and A.P. Simester eds., 2005); Gardiner, [1994] Crim. L.R. 455.

When the artificial intelligence technology makes a decision to act in a certain way, but this decision does not take into consideration the probability that one specific factual event will occur, it is indifferent to the occurrence of that factual event.
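The following sketch, with assumed names, shows how such indifference could again be read off the decision record: an event whose probability was assessed (so the system was aware of the option) but which was given no weight at all in the choice that was made.

```python
# Illustrative only: `decision` stands for a logged decision of a hypothetical system.

def indifferent_to(decision, event):
    """The system was aware of the event (it assessed its probability) but the
    event's occurrence carried zero weight in the choice that was made."""
    aware = event in decision["assessed_probabilities"]
    weight = decision["factor_weights"].get(event, 0.0)
    return aware and weight == 0.0


decision = {
    "assessed_probabilities": {"bystander injured": 0.4, "task completed": 0.9},
    "factor_weights": {"task completed": 1.0},  # the injury option was not weighed at all
}
print(indifferent_to(decision, "bystander injured"))  # True
```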
In general, complicated decision-making processes of artificial intelligence technology are characterized by a large number of factors to be considered. Humans in such situations tend to ignore some of the factors and not take them into consideration; so do computers. Some computers are programmed to ignore certain factors, but strong artificial intelligence technology has the capability of learning to ignore factors. Otherwise, the decision-making process would be impossible. This learning process of strong artificial intelligence technology is based on "machine learning", which is inductive learning from examples. The more examples are analyzed, the more effective the learning, and this is sometimes referred to as "experience".56
Since the decision-making process is monitored whenever the decision-maker is an artificial intelligence technology, there is no evidential problem in proving the artificial intelligence technology's indifference towards the occurrence of the relevant factual event. The artificial intelligence technology's awareness of the possibility of the event's occurrence is monitored, as are the factors taken into consideration in the decision-making process. This data enables direct proof of the artificial intelligence technology's indifference. Indifference may also be proven through the foreseeability rule presumption.
Rashness is the lower level of recklessness, and it consists of an aware will that the relevant factual event not occur, combined with aware, unreasonable conduct which causes it to occur. For the rash person, the occurrence of the factual event is undesired, yet he acts with an unreasonable risk that the event will occur. Rashness is considered a degree of will, since if the rash offender had truly not wanted the event to occur at all, he would not have taken the unreasonable risk through his conduct. This is the major reason that rashness is part of the volitive aspect of general intent; otherwise, the negative will by itself would not justify criminal liability.
For instance, a driver is driving on a narrow road behind a very slow truck. The road has two lanes in opposite directions, divided by a continuous separation line. The driver is in a great hurry and eventually decides to overtake the truck by crossing the line. He does not want to kill anyone, only to pass the truck. However, a motorcycle comes toward him, the car hits it, and the motorcyclist is killed. Had the driver wanted the motorcyclist dead, he would be criminally liable for murder. Nor would it be true to say that he was indifferent to the motorcyclist's death. However, he was rash: he did not want to hit the motorcycle, but he took an unreasonable risk that the death would occur. In this case, he would be criminally liable for manslaughter.

56
VOJISLAV KECMAN, LEARNING AND SOFT COMPUTING, SUPPORT VECTOR MACHINES, NEURAL
NETWORKS AND FUZZY LOGIC MODELS (2001).

For rashness to be considered as such in the context of criminal law, it must be aware: the offender is required to be aware of the relevant options, to prefer that the specific event not occur, and yet to commit conduct which is unreasonable with respect to avoiding that event.57 The decision-making process of strong artificial intelligence technology is based on assessing probabilities, as discussed above. When the computer makes a decision to act in a certain way but does not weigh one relevant factor as sufficiently significant, it is considered rash as to the occurrence of the relevant factual event.
In the above example, if the car driver is replaced by a driving computer which assesses the probability of hitting a motorcyclist while crossing the continuous separation line as low, the decision to overtake would be considered rash. In general, complicated decision-making processes of artificial intelligence technology are characterized by a large number of factors to be considered. Humans in such situations sometimes tend to miscalculate the weight of some of the factors; so do computers. Humans bolster their decisions with hopes and beliefs; computers do not.
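By analogy with the earlier fragments, and again under assumed names and thresholds, rashness could be flagged from the same kind of record: the risky event was assessed and given some weight, so it was neither desired nor ignored, yet the action was taken although the residual risk remained unreasonable.

```python
# Illustrative only: the threshold and record fields are assumptions used to
# distinguish rashness from indifference on a hypothetical decision log.

UNREASONABLE_RISK = 0.1  # assumed probability above which taking the risk is unreasonable


def rash_about(decision, event):
    """Aware of the risk, gave it some (insufficient) weight, and acted anyway."""
    p = decision["assessed_probabilities"].get(event)
    weight = decision["factor_weights"].get(event, 0.0)
    acted = decision["action_taken"]
    return p is not None and weight > 0.0 and acted and p >= UNREASONABLE_RISK


decision = {
    "assessed_probabilities": {"collision with oncoming vehicle": 0.2},
    "factor_weights": {"collision with oncoming vehicle": 0.05},  # weighed, but too lightly
    "action_taken": True,  # the overtaking manoeuvre was committed
}
print(rash_about(decision, "collision with oncoming vehicle"))  # True
```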
Some computers are programmed to weigh certain factors in a certain way, but strong artificial intelligence technology has the capability of learning to weigh factors correctly and accurately. This learning process of strong artificial intelligence technology is based on "machine learning", as aforesaid. Since the decision-making process is monitored whenever the decision-maker is an artificial intelligence technology, there is no evidential problem in proving the artificial intelligence technology's rashness towards the occurrence of the relevant factual event.
The artificial intelligence technology's awareness of the possibility of the event's occurrence is monitored, as are the factors taken into consideration in the decision-making process and their actual weight within that decision. This data enables direct proof of the artificial intelligence technology's rashness. Rashness may also be proven through the foreseeability rule presumption, as aforesaid. Thus, all components of the volitive aspect of general intent are relevant to artificial intelligence technology, and their proof in court is possible. Accordingly, the question is who is to be criminally liable for the commission of this kind of offense.
In general, the imposition of criminal liability for general intent offenses ("intentional offenses") requires the fulfillment of both the factual and mental elements of these offenses. Humans are involved in the creation of artificial intelligence technology, its design, programming and operation. Consequently, when the factual and mental elements of an offense are fulfilled by artificial intelligence technology, the question is who is to be criminally liable for the offense committed.

57
K., [2001] U.K.H.L. 41, [2002] 1 A.C. 462; B. v. Director of Public Prosecutions, [2000]
2 A.C. 428, [2000] 1 All E.R. 833, [2000] 2 W.L.R. 452, [2000] 2 Cr. App. Rep. 65, [2000]
Crim. L.R. 403.

4.2.4 Direct Liability

In general, when an offender fulfills both the factual and mental element requirements of a specific offense, criminal liability for that offense is imposed. In doing so, the court has no need to investigate whether the offender was "evil" or whether some other attribute characterized the commission of the offense. The fulfillment of these requirements is the only condition for the imposition of criminal liability; other information may affect the punishment, but not the criminal liability.58
The factual and mental elements are neutral in this context; they do not necessarily contain "evil" or "good".59 Their fulfillment is much more "technical" than the detection of "evil". For example, society prohibits murder. Murder is causing the death of a human with awareness and with intent to cause the death. If an individual factually caused another person's death, the factual element requirement is fulfilled. If the conduct was committed with awareness and intent, the mental element is fulfilled. At this point that individual is criminally liable for murder, unless a general defense is applicable (e.g., self-defense, insanity, etc.).
The reason for the murder is immaterial to the imposition of criminal liability. It is insignificant whether the murder was committed out of mercy (euthanasia) or out of evil. This is the way criminal liability is imposed on human offenders. If this standard is embraced in relation to artificial intelligence technology, criminal law would be able to impose criminal liability upon artificial intelligence technology as well. This is the basic idea behind the criminal liability of artificial intelligence technology. This idea is distinct from its moral accountability, social responsibility or even civil legal personhood.60
The narrow definitions of criminal liability enable artificial intelligence technology to become subject to criminal law. Nevertheless, some may feel that something is missing in this analysis, and perhaps that the analysis falls short. These feelings may be refuted by rational arguments. One feeling is that the capacity of an artificial intelligence technology to follow a program is not sufficient to enable the system to make moral judgments and exercise discretion, even though the program may contain a tremendously elaborate and complex system of rules.61 This feeling relates, eventually, to the moral choice of the offender.
The deeper argument is that no formal system could adequately make the moral
choices with which an offender may be confronted. Two answers may be relevant to

58
Robert N. Shapiro, Of Robots, Persons, and the Protection of Religious Beliefs, 56 S. CAL.
L. REV. 1277, 1286–1290 (1983); Nancy Sherman, The Place of the Emotions in Kantian Morality,
Identity, Character, and Morality 149, 145–162 (Owen Flanagan & Amelie O. Rotry eds., 1990);
Aaron Sloman, Motives, Mechanisms, and Emotions, The Philosophy of Artificial Intelligence
231, 231–232 (Margaret A. Boden ed., 1990).
59
JOHN FINNIS, NATURAL LAW AND NATURAL RIGHTS 85–90 (1980).
60
Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231, 1262
(1992).
61
OWEN J. FLANAGAN, JR., THE SCIENCE OF THE MIND 224–241 (2nd ed., 1991).

this argument. First, it is not at all certain that formal systems are morally blind. There are many types of morality and moral values. Teleological morality, such as utilitarianism, for example, deals with the utility values of the conduct.62 These values may be measured, compared and decided upon according to their quantitative comparison. For instance, an artificial intelligence technology controls a heavy wagon. The wagon malfunctions, and there are two possible paths along which to drive it. The software calculates the probabilities: one path involves the death of one person, the other the death of 50.
Teleological morality would direct any human to choose the first path, and he would be considered a moral person; so can the artificial intelligence technology. Its morality is dictated by its program, which makes it evaluate the consequences of each possible path. In fact, this is the way the human mind acts. Second, even if the first answer is not convincing, and formal systems are incapable of morality of any kind, criminal liability is neither dependent on nor fed by any morality. Morality is not even a precondition for the imposition of criminal liability.
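The wagon example above reduces to a purely quantitative comparison of expected harm, which can be stated in a few lines; the figures and field names below are assumptions for illustration only.

```python
# Illustrative expected-harm comparison for the malfunctioning-wagon example above.
# The paths, probabilities and casualty counts are assumed figures.

paths = {
    "path A": {"probability_of_fatal_outcome": 1.0, "expected_deaths": 1},
    "path B": {"probability_of_fatal_outcome": 1.0, "expected_deaths": 50},
}

def expected_harm(path):
    return path["probability_of_fatal_outcome"] * path["expected_deaths"]

# A teleological (utility-based) rule simply selects the path with the lower expected harm.
chosen = min(paths, key=lambda name: expected_harm(paths[name]))
print(chosen)  # 'path A'
```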
Criminal courts do not assess a human offender's morality when imposing criminal liability. The offender may be very moral from the court's perspective but still be convicted (e.g., euthanasia), and the offender may be very immoral from the court's perspective but still be acquitted (e.g., adultery). Since morality of any kind is not required for the imposition of criminal liability upon human offenders, the question is why it should be a consideration when an artificial intelligence technology is involved. Another feeling is more banal: artificial intelligence technology is not human, and criminal liability is designed for humans only, since it involves constitutional human rights that only humans may have.63
In this context, it is immaterial whether the constitutional rights in question are substantive or procedural. The answer here is that although criminal law was perhaps originally designed for humans, since the seventeenth century it has not been exclusive to humans, as discussed above.64 Corporations, which are non-human creatures, are also subject to criminal law, and not only to criminal law. Some adjustments may be required, but criminal liability and punishments have been imposed upon corporations for the past four centuries. Some argue that although corporations have been recognized as subject to criminal law, the personhood of artificial intelligence technology should not be recognized, for the benefit of humans, since humans have no interest in recognizing it.65
This argument cannot be considered applicable in an analytical legal discussion of the criminal liability of artificial intelligence technology. There are very many cases in daily life in which the imposition of criminal liability has no benefit for human society, and still criminal liability is imposed. The famous example is Kant's, who claimed that even if the last human person on Earth were an offender, he should be
62
See, e.g., DAVID LYONS, FORMS AND LIMITS OF UTILITARIANISM (1965).
63
See Solum, supra note 60, at pp. 1258–1262.
64
Above at paragraph 2.2.
65
EDWARD O. WILSON, SOCIOBIOLOGY: THE NEW SYNTHESIS 120 (1975).

punished, even though that would lead to the extinction of humanity.66 Human benefit has not been recognized as a valid component of criminal liability. Another feeling is that the concept of awareness presented above is too shallow for artificial intelligence technology to be called to account, blamed and faulted for the factual harm it may cause.67
This feeling is based on confusion between the concepts of awareness and consciousness in psychology, philosophy, theology and the cognitive sciences, and the concept of awareness in criminal law. In those spheres of knowledge, outside criminal law, society lacks a clear notion of what awareness is. Lacking such a notion precludes serious answers to the question of artificial intelligence's capability of awareness. In most cases such answers, whenever given, have been based on intuition rather than on science.68 However, criminal law must be accurate, and criminal liability must be accurate when imposed.
Based on criminal law definitions, people may be jailed for all their lives, their
property may be taken, and even their very lives may be taken. Therefore, the
criminal law definitions must be accurate and proven beyond any reasonable doubt.
Imposition of criminal liability for general intent offenses is based on awareness as
the major and dominant component of the mental element requirement. The fact
that the term "awareness" in psychology, philosophy, theology, and the cognitive
sciences has not been developed adequately into an accurate definition does not
exempt the criminal law from developing such a definition of its own for the
purpose of imposing criminal liability.
The criminal law definition of awareness, like any other legal definition, might
be very different from its daily meaning or its meaning, if any, in psychology,
philosophy, theology, and the cognitive sciences. The criminal law definitions
are designed and adapted to fulfill the needs of criminal law, and nothing beyond
that. These definitions represent the necessary requirements for the imposition of
criminal liability. They also represent the minimal conditions, both structurally and
substantively. Consequently, the definitions of criminal law, including the defini-
tion of awareness, have a relative value, which is relevant only to the criminal law.
Since these definitions are formed according to the concept of the minimum
requirement, as discussed above,69 they might be regarded as shallow. However,
that is true only when they are examined through the perspectives of psychology,
philosophy, theology, and the cognitive sciences, but not of criminal law. If a
significant development concerning awareness occurs in those scientific spheres
in the future, the criminal law may embrace newer and more sophisticated
definitions of awareness inspired by these spheres. For the time being, there are none.

66
ROGER J. SULLIVAN, IMMANUEL KANT’S MORAL THEORY 68 (1989).
67
RAY JACKENDOFF, CONSCIOUSNESS AND THE COMPUTATIONAL MIND 275–327 (1987); COLIN MCGINN,
THE PROBLEM OF CONSCIOUSNESS: ESSAYS TOWARDS A RESOLUTION 202–213 (1991).
68
DANIEL C. DENNETT, BRAINSTORMS 149–150 (1978).
69
Above at Sect. 2.1.2.

The criminal law definitions, including that of awareness, were originally
designed for humans. For centuries people were indicted, convicted, and acquitted
according to these definitions. Since the seventeenth century these definitions have
been required to be adopted and adapted for the incrimination of non-human
creatures—the corporations. Consequently, corporations were indicted, convicted,
and acquitted according to these very definitions. When the criminal law definitions
were changed, they were changed for both humans and corporations in the very
same way. In the twenty-first century the same definitions are required for the
incrimination of artificial intelligence technology.
Examining these definitions reveals that they are applicable to artificial intelli-
gence technology. Artificial intelligence technology has the capability to fulfill the
requirements of criminal law without a single change to these definitions. Can they
suddenly become "shallow"? How can these very definitions, used by criminal law
systems around the world for centuries, have been adequate and almost
unquestioned, and suddenly become "shallow"? If they are good enough for
humans and corporations, why would they not be good enough for artificial
intelligence technology? If the criticism were that criminal law definitions are too
shallow, and therefore should be radically changed for humans, corporations, and
artificial intelligence technology alike, it could have been acceptable.
However, when these definitions are "shallow" only in relation to artificial
intelligence technology, but not in relation to humans and corporations, the criticism
cannot be considered serious and applicable. This answer applies not only to the
criticism concerning the awareness of artificial intelligence technology in criminal law,
but also to the criticism concerning its intentionality.70
At this point, it may be concluded that criticism of the idea of artificial
intelligence technology’s criminal liability may generally relate to two points:

(a) lack of attributes which are not required for the imposition of criminal liability
(e.g., soul, evil, good, etc.); and
(b) shallowness of criminal law definitions in the perspective of some spheres
of science other than criminal law (e.g., psychology, philosophy, theology,
and cognitive sciences).

Both points of criticism have methodologically simple answers.


The first point is answered through the structural aspect of the criminal law concept,
as no such attributes are required for the imposition of criminal liability on
humans, corporations, or artificial intelligence technology. The second point is
answered through the substantive aspect of the criminal law concept, as definitions of
legal terms from outside the criminal law are completely irrelevant to the question
of criminal liability.
As a result, the gate for the imposition of criminal liability upon artificial intelli-
gence technology as direct offenders is, possibly, opened. Acceptance of the idea

70
DANIEL C. DENNETT, THE INTENTIONAL STANCE 327–328 (1987).

of the criminal liability of artificial intelligence technology for general intent
offenses does not end with the commission of the particular offense as a principal
offender. General intent offenses may be committed through complicity as well. The
accomplices are required to form not less than general intent for their criminal
liability as accomplices. There is no complicity through negligence or through strict
liability.71
Thus, a joint-perpetrator may be considered as such only in relation to general
intent offenses. Other general forms of complicity (e.g., inciters and accessories)
require at least general intent as well. Consequently, since all general forms of
complicity require at least general intent, an artificial intelligence system may be
considered an accomplice only if it actually formed general intent. Opening the
gate for the imposition of criminal liability upon artificial intelligence technology as
direct offenders opens up the gate for accepting artificial intelligence technology as
accomplices, joint-perpetrators, inciters, accessories, etc., as well, as long as both
the factual and mental element requirements are met in full.

4.2.5 Indirect Liability

All human offenders, corporations, and artificial intelligence technology may be
used as mere instruments for the commission of the offense, regardless of their legal
personhood. For instance, one person threatens another's life: if he does not
commit a specific offense, he will be killed. Having no choice, the threatened
person commits the offense. The question is who is to be considered criminally
liable for the commission of that offense—the threatening person, the threatened
person, or both.
In the context of artificial intelligence technology liability, the question arises
when the artificial intelligence technology is used as a mere instrument by another
offender. For such situations the criminal law has created the general form of
criminal liability of perpetration-through-another. This form of criminal liability may
be defined as the aware execution of a criminal plan through the instrumental use of
another person, who participates in the commission of the offense as an innocent
agent or a semi-innocent agent. Perpetration-through-another is a late development
of vicarious liability into a law of complicity.
Vicarious liability has been recognized both in criminal and civil law since
ancient times, and it is based on an ancient concept of slavery.72 The master, who
was a legal entity and possessed legal personhood, was liable not only for his own

71
See, e.g., People v. Marshall, 362 Mich. 170, 106 N.W.2d 842 (1961); State v. Gartland,
304 Mo. 87, 263 S.W. 165 (1924); State v. Etzweiler, 125 N.H. 57, 480 A.2d 870 (1984); People
v. Kemp, 150 Cal.App.2d 654, 310 P.2d 680 (1957); State v. Hopkins, 147 Wash. 198, 265 P. 481
(1928); State v. Foster, 202 Conn. 520, 522 A.2d 277 (1987); State v. Garza, 259 Kan. 826, 916
P.2d 9 (1996); Mendez v. State, 575 S.W.2d 36 (Tex.Crim.App.1979).
72
Francis Bowes Sayre, Criminal Responsibility for the Acts of Another, 43 HARV. L. REV.
689, 689–690 (1930).

conduct but also for that of all his subjects (slaves, workers, family, etc.). When one
of his subjects committed an offense, it was considered as if the master himself had
committed the offense, and the master was obligated to respond to the indictment
(respondeat superior). The legal meaning of this obligation was that the master was
criminally liable for offenses physically committed by his subjects.
The rationale for this concept was that the master should enforce the criminal
law among his subjects. If the master failed to do so, he was personally liable for the
offenses committed by his subjects. As the master’s subjects were considered to be
his property, he was liable for the harms committed by them both under criminal
and civil law. A subject was considered as an organ of the master, as his long arm.
The legal maxim that governed vicarious liability stated that whoever acts through
another is considered to be acting for himself (qui facit per alium facit per se).
The physical appearance of the commission of the offense was insignificant for
the imposition of criminal liability in this context. This legal concept was accepted
in most ancient legal systems. Based on it, the Roman law developed the function of
the father of the family (paterfamilias), who was responsible for any crime or tort
committed by members of the family, its servants, guards, and slaves.73 Conse-
quently, the father of the family was responsible for the prevention of criminal
offenses and civil torts among his subjects. The incentive for doing so was the
father of the family's fear of criminal or tort liability for the actions of members of his
household. The legal concept of vicarious liability was absorbed into medieval
European law.
The concept of vicarious liability was formally and explicitly accepted in
English common law in the fourteenth century,74 based on legislation enacted in
the thirteenth century.75 Between the fourteenth and seventeenth centuries, English
common law amended the concept and ruled that the master was liable for the
servants’ offenses (under criminal law) and torts (under civil law) only if he
explicitly ordered the servants to commit the offenses, explicitly empowered them
to do so, or consented to their doing so before the commission of the offense (ex
ante), or after the commission of the tort (ex post).76 Since the end of the seven-
teenth century, this firm requirement was replaced by a much weaker one.
Criminal and civil liability could be imposed on the master for offenses and torts
committed by the servants even if the orders of the master were implicit or the

73
Digesta, 9.4.2; Ulpian, 18 ad ed.; OLIVIA F. ROBINSON, THE CRIMINAL LAW OF ANCIENT ROME 15–
16 (1995).
74
Y.BB. 32–33 Edw. I (R. S.), 318, 320 (1304); Seaman v. Browning, (1589) 4 Leonard
123, 74 Eng. Rep. 771.
75
13 EDW. I, St. I, c.2, art. 3, c.II, c.43 (1285). See also FREDERICK POLLOCK AND FREDERICK WILLIAM
MAITLAND, THE HISTORY OF ENGLISH LAW BEFORE THE TIME OF EDWARD I 533 (rev. 2nd ed., 1898);
Oliver W. Holmes, Agency, 4 HARV. L. REV. 345, 356 (1891).
76
Kingston v. Booth, (1685) Skinner 228, 90 Eng. Rep. 105.

empowerment of the servant was general.77 This was the result of an attempt by
English common law to deal with the many tort cases against workers at the dawn
of the first industrial revolution in England and of the commercial developments of
that time. The actions committed by the master’s workers were considered to be
actions of the master because he enjoyed the benefits. And if the master enjoyed the
benefits of these actions, he should be legally liable, both in criminal and civil law,
for the harm that may be caused by them.
In the nineteenth century, the requirements were further weakened, and it was
ruled that if the worker’s actions were committed through or as part of the general
course of business, the master was liable for them even if no explicit or implicit
orders had been given. Consequently, the defense argument of the worker having
exceeded his authority (ultra vires) was rejected. Thus, even if the worker acted in
contradiction to the specific order of his superior, the superior was still liable for the
worker’s actions if they were carried out in the general course of business. This
approach was developed in tort law, but the English courts did not restrict it to tort
law and applied it to criminal law as well.78
Nevertheless, vicarious liability was developed under very specific social
conditions, in which only individuals of the upper classes had the required compe-
tence to be considered legal entities. In Roman law only the father of the
family could become a prosecutor, plaintiff, or defendant. When the concept of
social classes began to fade, in the nineteenth century, vicarious liability faded
away with it. In the criminal law at the beginning of the nineteenth century, the
cases of vicarious liability were divided into three main types of criminal liability.
The first type was that of classic complicity. If the relations between the parties
were based on real cooperation, they were classified as joint-perpetration even if the
parties had an employer–employee or some other hierarchical relation. However, if
within the hierarchical relations, information gaps between the parties or the use of
power made one of the parties lose its ability to commit an aware and willed
offense, the act could not be considered joint-perpetration. The party that lost
the ability to commit an aware and willed offense was considered an "innocent
agent" who functioned as a mere instrument in the hands of the other party.
The innocent agent was not criminally liable. The offense was considered
"perpetration-through-another," and the other party had full criminal liability for

77
Boson v. Sandford, (1690) 2 Salkeld 440, 91 Eng. Rep. 382:

The owners are liable in respect of the freight, and as employing the master; for whoever
employs another is answerable for him, and undertakes for his care to all that make use of
him;
Turberwill v. Stamp, (1697) Skinner 681, 90 Eng. Rep. 303; Middleton v. Fowler, (1699)
1 Salkeld 282, 91 Eng. Rep. 247; Jones v. Hart, (1699) 2 Salkeld 441, 91 Eng. Rep. 382; Hern
v. Nichols, (1708) 1 Salkeld 289, 91 Eng. Rep. 256.
78
Sayre, supra note 72, at pp. 693–694; WILLIAM PALEY, A TREATISE ON THE LAW OF PRINCIPAL AND
AGENT (2nd ed., 1847); Huggins, (1730) 2 Strange 882, 93 Eng. Rep. 915; Holbrook, (1878)
4 Q.B.D. 42; Chisholm v. Doulton, (1889) 22 Q.B.D. 736; Hardcastle v. Bielby, [1892] 1 Q.B. 709.

the actions of the innocent agent.79 This was the basis for the emergence of
perpetration-through-another from vicarious liability, and it was also the second
type of criminal liability derived from vicarious liability. The third type was the
core of the original vicarious liability. In most modern legal systems, this type is
embodied in specific offenses and not in the general formation of criminal liability.
Since the emergence of the modern law of complicity, the original vicarious
liability is no longer considered a legitimate form of criminal liability.
Since the end of the nineteenth century and the beginning of the twentieth
century, the concept of the innocent agent has been widened to include also parties
that have no hierarchical relations between them. Whenever a party acts without
awareness of its actions or without will it is considered an innocent agent. The acts
of the innocent agent could be the results of another party’s initiative (e.g., using the
innocent agent through threats, coercion, misleading, lies, etc.) or another party’s
abuse of an existing factual situation that eliminates the awareness or will of the
innocent agent (e.g., abuse of a factual mistake, insanity, intoxication, minority, etc.).
During the twentieth century the concept of perpetration-through-another has
been applied also to “semi-innocent agents”, typically a negligent party that is not
fully aware of the factual situation while any other reasonable person could have
been aware of it under the same circumstances. Most modern legal systems accept
the semi-innocent agent as part of perpetration-through-another, so that the other
party is criminally liable for the commission of the offense, and the semi-innocent
agent is criminally liable for negligence.
If the legal system contains an appropriate offense of negligence (i.e., the same
factual element requirement, but a mental element of negligence instead of aware-
ness, knowledge, or intent), the semi-innocent agent is criminally liable for that
offense. If no such offense exists, no criminal liability is imposed, although the
other party is criminally liable for the original offense. For the criminal liability of
the perpetration-through-another, the factual element may be fulfilled through the
innocent agent, but the mental element requirement should be fulfilled actually and
subjectively by the perpetrator-through-another himself, including as to the instru-
mental use of the innocent agent.80
Accordingly, the question is: if an artificial intelligence technology is used by
another entity (a human, a corporation, or another artificial intelligence technology)
as a mere instrument for the commission of the offense, how would the criminal liability
for the commission of the offense be divided between them? Perpetration-through-
another does not consider the artificial intelligence technology which physically
committed the offense as possessing any human attributes. The artificial intelli-
gence technology is considered an innocent agent. However, one cannot ignore an

79
Glanville Williams, Innocent Agency and Causation, 3 CRIM. L. F. 289 (1992); Peter Alldridge,
The Doctrine of Innocent Agency, 2 CRIM. L. F. 45 (1990).
80
State v. Silva-Baltazar, 125 Wash.2d 472, 886 P.2d 138 (1994); GLANVILLE WILLIAMS, CRIMINAL
LAW: THE GENERAL PART 395 (2nd ed., 1961).

artificial intelligence technology’s capabilities of physical commission of the


offense. These capabilities are insufficient to deem the artificial intelligence tech-
nology a perpetrator of an offense, since they lack required awareness or will.
These capabilities of physical commission of the offense resemble the parallel
capabilities of a mentally limited person, such as a child,81 a person who is mentally
incompetent,82 or one who lacks a criminal state of mind.83 Legally, when an
offense is committed by an innocent agent (a child,84 a person who is mentally
incompetent,85 or one who lacks a criminal state of mind to commit an offense86),
no criminal liability is imposed upon the physical perpetrator. In such cases, that
person is regarded as a mere instrument, albeit a sophisticated instrument, while the
party orchestrating the offense (the perpetrator-through-another) is the actual per-
petrator as a principal in the first degree and is held accountable for the conduct of
the innocent agent.
The perpetrator’s liability is determined on the basis of the “instrument’s”
conduct87 and his mental state.88 The derivative question relative to artificial
intelligence technology is: Who is the perpetrator-through-another? The answer is
any person who makes an instrumental use of the artificial intelligence technology
for the commission of the offense. In most cases, this person is human and may be
the programmer of the artificial intelligence software and the second is the user, or
the end-user. A programmer of artificial intelligence software might design a
program in order to commit offenses through the artificial intelligence technology.
For example, a programmer designs software for an operating robot whose
software is based on artificial intelligence technology. The robot is intentionally
placed in a factory, and its software is designed to torch the factory at night when no
one is there. The robot commits the arson, but the programmer is deemed the
perpetrator. The user did not program the software, but he uses the artificial
intelligence technology, including its software, for his own benefit, which is
expressed by the very commission of the offense.
For example, a user purchases a servant-robot, which is designed to execute any
order given by its master and whose software is based on artificial intelligence

81
Maxey v. United States, 30 App. D.C. 63, 80 (App. D.C. 1907).
82
Johnson v. State, 142 Ala. 70, 71 (1904).
83
United States v. Bryan, 483 F.2d 88, 92 (3d Cir. 1973).
84
Maxey, 30 App. D.C. at 80 (App. D.C. 1907); Commonwealth v. Hill, 11 Mass. 136 (1814);
Michael, (1840) 2 Mood. 120, 169 Eng. Rep. 48.
85
Johnson v. State, 38 So. 182, 183 (Ala. 1904); People v. Monks, 24 P.2d 508, 511 (Cal. Dist.
Ct. App. 1933).
86
United States v. Bryan, 483 F.2d 88, 92 (3d Cir. 1973); Boushea v. United States, 173 F.2d
131, 134 (8th Cir. 1949); People v. Mutchler, 140 N.E. 820, 823 (Ill. 1923); State v. Runkles,
605 A.2d 111, 121 (Md. 1992); Parnell v. State, 912 S.W.2d 422, 424 (Ark. 1996); State
v. Thomas, 619 S.W.2d 513, 514 (Tenn. 1981).
87
Dusenbery v. Commonwealth, 772 263 S.E.2d 392 (Va. 1980).
88
United States v. Tobon-Builes, 706 F.2d 1092, 1101 (11th Cir. 1983); United States v. Ruffin,
613 F.2d 408, 411 (2d Cir. 1979).

technology. The robot identifies the specific user as the master, and the master
orders the robot to assault any invader of the house. The robot executes the order
exactly as ordered. This is no different from a person who orders his dog to attack
any trespasser. The robot commits the assault, but the user is deemed the
perpetrator.
In both scenarios, the actual offense was physically committed by the artificial
intelligence technology. The programmer or the user did not perform any action
conforming to the definition of a specific offense; therefore, they do not meet the
factual element requirement of the specific offense. The perpetration-through-
another liability considers the physical actions committed by the artificial intelli-
gence technology as if they had been those of the programmer, the user, or any other
person who instrumentally uses the artificial intelligence technology.
The legal basis for this criminal liability is the instrumental use of the artificial
intelligence technology as an innocent agent.89 No mental attribute, required for the
imposition of criminal liability, is attributed to the artificial intelligence technol-
ogy.90 When programmers or users use an artificial intelligence technology instru-
mentally, the commission of an offense by the artificial intelligence technology is
attributed to them. The mental element required in the specific offense already
exists in their minds. The programmer had criminal intent when he ordered the
commission of the arson, and the user had criminal intent when he ordered the
commission of the assault, even though these offenses were physically committed
through a robot, an artificial intelligence technology.
When an end-user makes instrumental use of an innocent agent to commit an
offense, the end-user is deemed the actual perpetrator of that very offense.
Perpetration-through-another does not attribute any mental capability, or any
human mental capability, to the artificial intelligence technology. Accordingly,
there is no legal difference between an artificial intelligence technology and a
screwdriver or an animal, both instrumentally used by the actual perpetrator.
When a burglar uses a screwdriver in order to open up a window, he instrumentally
uses the screwdriver, and the screwdriver is not criminally liable. The screwdriver’s
“action” is, in fact, the burglar’s. This is the same legal situation when using an
animal instrumentally. An assault committed by a dog by order of its master is, in
fact, an assault committed by the master.
This kind of criminal liability might be suitable for two types of scenarios. The
first scenario is using an artificial intelligence technology, even a strong artificial
intelligence technology, to commit an offense without using its advanced
capabilities. The second scenario is using a weak version of an artificial intelligence
technology, which lacks the advanced capabilities of modern artificial

89
See Solum, supra note 60, at p. 1237.
90
The artificial intelligence technology is used as an instrument and not as a participant, although
it uses its features of processing information. See, e.g., George R. Cross & Cary G. Debessonet, An
Artificial Intelligence Application in the Law: CCLIPS, A Computer Program that Processes Legal
Information, 1 HIGH TECH. L.J. 329 (1986).

intelligence technology. In both scenarios, the use of the artificial intelligence
technology is instrumental. Still, it is the use of an artificial intelligence technol-
ogy, due to its ability to execute an order to commit an offense. A screwdriver
cannot execute such an order; a dog can. A dog cannot execute complicated orders;
an artificial intelligence technology can.91
The perpetration-through-another liability is not suitable when an artificial
intelligence technology makes the decision to commit an offense based on its
own accumulated experience or knowledge or based on advanced calculations of
probabilities. This liability is not suitable when the software of the artificial intelli-
gence technology was not designed to commit the specific offense, but the offense
was committed by the artificial intelligence technology nonetheless.
This liability is also not suitable when the specific artificial intelligence technol-
ogy functions not as an innocent agent, but as a semi-innocent agent.92 Semi-
innocent agents are agents who lack the general intent component but do have
a lower mental element component, such as negligence or strict liability. Neverthe-
less, the perpetration-through-another liability might be suitable when a program-
mer or user makes instrumental use of an artificial intelligence technology, but
without using the artificial intelligence technology’s advanced capabilities.
The legal result of applying this liability is that the programmer and the user are
criminally liable for the specific offense committed, while the artificial intelligence
technology has no criminal liability whatsoever.93 This is not significantly different
from treating the artificial intelligence technology as mere property, even though
it has sophisticated skills and capabilities.94 If the artificial intelligence technology
is considered a semi-innocent agent, i.e., it fulfills the negligence or strict liability
requirements, it would be criminally liable for the relevant offenses of negligence or
strict liability, if such are recognized by the criminal law.
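The allocation of liability under perpetration-through-another, as described in this subsection, may be summarized as a short decision procedure. The following Python sketch is purely illustrative and carries no doctrinal authority; all function and parameter names are hypothetical.

```python
def perpetration_through_another(agent_mental_state: str) -> dict:
    """Illustrative allocation of liability when a human (programmer or user)
    instrumentally uses an AI system to commit an offense.

    agent_mental_state -- the mental element actually formed by the AI system:
    'none' (innocent agent), 'negligence' or 'strict_liability' (semi-innocent
    agent), or 'general_intent' (not an innocent agent at all).
    """
    if agent_mental_state == "none":
        # Innocent agent: the orchestrating human bears full liability.
        return {"human": "liable for the specific offense",
                "ai_system": "no criminal liability"}
    if agent_mental_state in ("negligence", "strict_liability"):
        # Semi-innocent agent: the human remains perpetrator-through-another,
        # while the AI answers for a parallel negligence or strict liability
        # offense, if the legal system defines one.
        return {"human": "liable for the specific offense",
                "ai_system": f"liable for a parallel {agent_mental_state} offense, if defined"}
    # If the AI system itself formed general intent, perpetration-through-another
    # is no longer the suitable model (see Sects. 4.2.4 and 4.2.6).
    return {"human": "examined under other liability models",
            "ai_system": "potentially liable as a direct perpetrator"}
```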

4.2.6 Combined Liabilities

The first type of criminal liability presented above regarded the artificial intelligence
technology as the perpetrator of the offense.95 The second regarded the artificial
intelligence technology as a mere instrument in the hands of the legally-considered
perpetrator.96 The second type of liability is not the only possible type to describe

91
Andrew J. Wu, From Video Games to Artificial Intelligence: Assigning Copyright Ownership to
Works Generated by Increasingly Sophisticated Computer Programs, 25 AIPLA Q.J. 131 (1997);
Timothy L. Butler, Can a Computer Be an Author – Copyright Aspects of Artificial Intelligence,
4 COMM. ENT. L.S. 707 (1982).
92
NICOLA LACEY AND CELIA WELLS, RECONSTRUCTING CRIMINAL LAW – CRITICAL PERSPECTIVES ON
CRIME AND THE CRIMINAL PROCESS 53 (2d ed. 1998).
93
People v. Monks, 133 Cal. App. 440, 446 (Cal. Dist. Ct. App. 1933).
94
See Solum, supra note 60, at pp. 1276–1279.
95
Above at Sect. 4.2.4.
96
Above at Sect. 4.2.5.

legal relations between humans and artificial intelligence technology towards the
commission of the offense. The second type dealt with artificial intelligence
technology that adhered to its instructions, but what if the artificial intelligence
technology, which was not programmed to commit the offense, calculates a decision
to act, and the act constitutes an offense?
The question here concerns the human liability rather than the criminal liability
of the artificial intelligence technology. For instance, the programmer of a sophisti-
cated artificial intelligence technology designs it not to commit certain offenses. At
the beginning of its activation, the artificial intelligence system commits no
offenses. In time, the machine learning through induction widens and new
paths of activity are opened. At some point, an offense is committed.
In another, somewhat different instance, the programmer designs the artificial intelli-
gence system to commit one certain offense. As expected, the offense is committed
through the artificial intelligence system. However, the artificial intelligence system
deviates from the original plan of the programmer and continues its delinquent
activity. The deviation might be quantitative (more offenses of the same kind),
qualitative (more offenses of different kinds), or both.
If the programmer had programmed it from the beginning to commit the
additional offenses, it would have been considered perpetration-through-another
at most. However, the programmer did not do this. If the artificial intelligence
system consolidated both the factual and mental elements of the additional
offenses, the artificial intelligence system is criminally liable through the first
type of liability. However, the question here concerns the criminal liability
of the programmer. This is the main issue of the third type of liability discussed
below. The most appropriate criminal liability in such cases is the probable
consequence liability.
By origin, probable consequence liability in criminal law relates to the criminal
liability of parties to criminal offenses that have been committed in practice, but
these offenses were not part of the original criminal plan. For example, A and B
plan to commit bank robbery. According to the plan, A’s role is to break into the
safe and B’s role is to threaten the guard with a loaded gun. During the robbery the
guard resists and B shoots him to death. The killing of the guard was not part of the
original criminal plan. When the guard was shot A was not there, did not know
about it, did not agree to it, and did not commit it.
The legal question in the above example concerns A’s criminal liability for
homicide, in addition to his certain criminal liability for robbery. A does not satisfy
either the factual or the mental element of homicide, since he neither physically
committed it nor was aware of it. The homicide was not part of their criminal plan.
The question may be expanded also to inciters and accessories of the robbery, if
any. In general, the question of the probable consequence liability refers to the
criminal liability of one person for unplanned offenses that were committed by
another person. Before applying the probable consequence liability to human–
artificial intelligence offenses, its features should be explored.
There are two opposite extreme approaches to this general question. The first
calls for imposition of full criminal liability upon all parties. The other calls for

broad exemption from criminal liability for any party that does not meet the factual
and mental element requirements of the unplanned offense. The first is considered
problematic for over-criminalization, whereas the second is considered problematic
for under-criminalization. Consequently, moderate approaches were developed and
embraced.
The first extreme approach does not consider at all the factual and mental
elements of the unplanned offense. This approach originates in Roman civil law,
which has been adapted to criminal cases by several legal systems. According to
this approach, any involvement in the delinquent event is considered to include
criminal liability for any further delinquent event derived from it (versanti in re
illicita imputantur omnia quae sequuntur ex delicto).97 This extreme approach
requires neither factual nor mental elements for the unplanned offense from the
other parties, besides the party who actually committed the offense and possessed
both factual and mental elements.
According to this extreme approach, the criminal liability for the unplanned
offense is an automatic derivative. The basic rationale of this approach is deterrence
of potential offenders from participating in future criminal enterprises by widening
the criminal liability not only to include the planned offenses but the unplanned
ones as well. The potential party must realize that his personal criminal liability
may not be restricted to specific types of offenses, and that he may be criminally
liable for all expected and unexpected developments that are derived directly or
indirectly from his conduct. Potential parties are expected to be deterred and avoid
involvement in delinquent acts.
This approach does not distinguish between various forms of involvement in the
delinquent event. The criminal liability for the unplanned offense is imposed
regardless of the role of the offender in the commission of the planned offense as
perpetrator, inciter, or accessory. The criminal liability imposed for the unplanned
offense is not dependent on the fulfillment of factual and mental element
requirements by the parties. If the criminal liability for the unplanned offense is
imposed on all parties of the original enterprise, including those who could have no
control over the commission of the unplanned offense, the deterrent value of this
approach is extreme.
Prospectively, this approach educates people to keep away from involvement in
delinquent events, regardless of the specific role they may potentially play in the
commission of the offense. Any deviation from the criminal plan, even if not under
the direct control of the party, is a basis for criminal liability for all persons involved,
as if it had been fully perpetrated by all parties. The effect of this extreme approach
can be broad and encompassing. Parties to another (third) offense, different from
the unplanned offense, who were not direct parties of the unplanned offense, may
be criminally liable for the unplanned offense as well if there is the slightest
connection between the offenses.

97
Digesta, 48.19.38.5; Codex Justinianus, 9.12.6; REINHARD ZIMMERMANN, THE LAW OF OBLIGATIONS
– ROMAN FOUNDATIONS OF THE CIVILIAN TRADITION 197 (1996).

The criminal liability for the unplanned offense is uniform for all parties and
requires no factual and mental elements. Most western legal systems consider such
a deterrent approach too extreme and have therefore rejected it.98 The other extreme
approach is the exact opposite of the former and focuses on the factual and mental
elements of the unplanned offense. Accordingly, to impose criminal liability for the
unplanned offense, it is necessary to examine whether both the factual and mental element
requirements are met by each party. Only if both requirements of the unplanned
offense are met by the specific party is it legitimate to impose criminal liability
upon him. Naturally, as the unplanned offense was not planned, it is most likely that
none of the parties would be criminally liable for that offense, besides the party who
actually committed it.
This extreme approach ignores the social endangerment inherent in the criminal
enterprise. This social endangerment includes not only planned offenses but the
unplanned ones as well. Because of this extreme approach, offenders have no
incentive to restrict their involvement in the delinquent event. Prospectively, any
party, who wishes to escape from criminal liability for the probable consequences
of the criminal plan needs only to avoid participation in the factual aspect of any
further offense.
Such offenders would tend to share and involve more parties in the commission
of the offense in order to increase the chance of the commission of further
offenses, and therefore most modern legal systems prefer not to adopt this extreme
approach either. Several moderate approaches have been developed to meet the
difficulties raised by these extreme approaches. The core of these moderate
approaches lies in the creation of probable consequence liability, i.e., criminal
liability for the unplanned offense, whose commission is the probable consequence
of the planned original offense. “Probable consequence” means both mentally
probable from the point of view of the party and factual consequence derived
from the planned offense.
Thus, probable consequence liability generally requires two major conditions to
impose criminal liability for the unplanned offense (see the sketch following the list):

(a) a factual condition—the unplanned offense should be the consequence of
the planned offense;
(b) a mental condition—the unplanned offense should be probable (foreseeable
by the relevant party) as a consequence of the commission of the planned
offense.
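These two conditions operate cumulatively. Purely as an illustration, and with all names hypothetical, the two-condition test may be sketched in Python as follows:

```python
def probable_consequence_liability(is_consequence_of_planned_offense: bool,
                                   was_foreseeable_by_party: bool) -> bool:
    """Illustrative two-condition test for probable consequence liability.

    is_consequence_of_planned_offense -- factual condition: the unplanned
        offense incidentally derived from the planned offense.
    was_foreseeable_by_party -- mental condition: the unplanned offense was
        probable (foreseeable) from the relevant party's point of view,
        whether examined subjectively or through a reasonable-person
        standard, depending on the legal system.
    """
    return is_consequence_of_planned_offense and was_foreseeable_by_party


# The bank robbery example: the homicide derived from the robbery and was
# foreseeable, so the non-shooting accomplice may be liable for it as well.
assert probable_consequence_liability(True, True)
```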

The factual condition (“consequence”) requires the incidental occurrence of the
unplanned offense in relation to the planned offense. There should be a factual
causal connection between the planned offense and the unplanned offense. For
example, A and B conspire to rob a bank and execute their plan. During the robbery

98
See, e.g., United States v. Greer, 467 F.2d 1064 (7th Cir.1972); People v. Cooper, 194 Ill.2d
419, 252 Ill.Dec. 458, 743 N.E.2d 32 (2000).

B shoots the guard to death, an act that is incidental to the robbery and to his role in
it. Had it not been for the committed robbery, no homicide would have been
committed. Therefore, the homicide is the factual consequence of the robbery and
it was committed incidentally to the robbery.
An incidental offense is one that has not been part of the criminal plan and that the
parties did not conspire to commit. If the offense is part of the criminal plan, the
probable consequence liability is irrelevant and the general rules of complicity
apply to the parties to the offense. For unplanned offenses these rules fall short and
create an under-criminalization problem. The probable consequence liability is an
attempt to address this difficulty by expanding the criminal liability to unplanned
offenses despite the fact that they are unplanned.
The unplanned offense may be a different offense from the planned one, but not
necessarily. The offense may also be an additional identical offense. For example,
A and B conspire to rob a bank by breaking into one of its safes. A is to
break into the safe and B to watch the guard. They execute their plan but in addition
B shoots and kills the guard, and A breaks into yet another safe. The unplanned
homicide is a different offense from the planned robbery. The unplanned robbery is
identical to the planned robbery.
Both unplanned offenses are incidental consequences of the planned offense,
although one is different from the planned offense and the other is identical with
it. The planned offense serves as the causal background for both unplanned
offenses, as they incidentally derive from it.99 The mental condition (“probable”)
requires that the occurrence of the unplanned offense be probable in the eyes of the
relevant party, meaning that it could have been foreseen and reasonably predicted.
Some legal systems prefer to examine the actual and subjective foreseeability
(the party has actually and subjectively foreseen the occurrence of the unplanned
offense), whereas others prefer to evaluate the ability to foresee through an objec-
tive standard of reasonability (the party has not actually foreseen the occurrence of
the unplanned offense, but any reasonable person in his state could have). Actual
foreseeability parallels the subjective general intent, whereas objective foreseeabil-
ity parallels the objective negligence.
For example, A and B conspire to rob a bank. A is to break into the safe
and B to watch the guard. They execute the plan and B shoots and kills the guard
while A breaks into the safe. In some legal systems A is criminally liable for the
killing only if he had actually foreseen the homicide, and in others, if a reasonable
person could have foreseen the forthcoming homicide in these circumstances.
Consequently, if the relevant accomplice did not actually foresee the unplanned
offense, or any reasonable person in the same condition could not have foreseen it,
he is not criminally liable for the unplanned offense.

99
State v. Lucas, 55 Iowa 321, 7 N.W. 583 (1880); Roy v. United States, 652 A.2d 1098 (D.C.
App.1995); People v. Weiss, 256 App.Div. 162, 9 N.Y.S.2d 1 (1939); People v. Little, 41 Cal.
App.2d 797, 107 P.2d 634 (1941).

This type of approach is considered moderate because it combines answers to the
social endangerment problem with a positive relation to the factual and mental
elements of criminal liability. The factual and mental conditions are the entry terms
and minimal requirements for the imposition of criminal liability for the unplanned
offense. Legal systems differ on the legal consequences of probable consequence
liability. The main factor in these differences is the mental condition.
Some legal systems require negligence whereas others require general intent,
and the consequences may be both legally and socially different. Moderate
approaches that are close to the extreme approach, which holds that all accomplices
are criminally liable for the unplanned offense (versanti in re illicita imputantur
omnia quae sequuntur ex delicto), impose full criminal liability for the unplanned
offense if both factual and mental conditions are met. According to these
approaches the party is criminally liable for unplanned general intent offenses
even if he may have been merely negligent.
More lenient moderate approaches do not impose full criminal liability on all the
parties for the unplanned offense. These approaches can show more leniency in
respect of the mental element. The approaches match the actual mental element of
the party to the type of offense. Thus, the negligent party in the unplanned offense is
criminally liable for a negligence offense, whereas the party who is aware is
criminally liable for a general intent offense.100
For example, A, B, and C planned to commit robbery. The robbery is executed,
and C shoots and kills the guard. A foresaw this but B did not, although a reasonable
person would have foreseen this outcome under the circumstances. All three are
criminally liable for robbery as joint-parties. C is criminally liable for murder,
which is a general intent offense. A, who acted under general intent, is criminally
liable for manslaughter or murder, both general intent offenses. But B was negligent
with regard to the homicide, therefore he is criminally liable for negligent homi-
cide. Negligent offenders are criminally liable for no more than negligence
offenses, whereas other offenders, who meet general intent requirements, are
criminally liable for general intent offenses.
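Under such a lenient moderate approach, the offense for which each party is liable tracks that party's own mental element. A minimal illustrative sketch, with hypothetical names only:

```python
def liability_for_unplanned_offense(party_mental_state: str) -> str:
    """Illustrative matching of a party's actual mental element to the
    offense for which he is liable under the lenient moderate approach."""
    if party_mental_state == "general_intent":
        # e.g., A, who actually foresaw the killing
        return "general intent offense (e.g., manslaughter or murder)"
    if party_mental_state == "negligence":
        # e.g., B, who did not foresee it, though a reasonable person would have
        return "negligence offense (e.g., negligent homicide)"
    # Neither foreseen nor foreseeable: no liability for the unplanned offense.
    return "no criminal liability for the unplanned offense"
```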
American criminal law imposes full criminal liability for the unplanned offense
equally upon all parties of the planned offense101 as long as the unplanned offense is
the probable consequence of the planned one.102 Appropriate legislation has been
enacted to accept the probable consequence liability, and has been considered

100
State v. Linscott, 520 A.2d 1067 (Me.1987): “a rule allowing for a murder conviction under a
theory of accomplice liability based upon an objective standard, despite the absence of evidence
that the defendant possessed the culpable subjective mental state that constitutes an element of the
crime of murder, does not represent a departure from prior Maine law” (emphasis in original).
101
People v. Prettyman, 14 Cal.4th 248, 58 Cal.Rptr.2d 827, 926 P.2d 1013 (1996); Chance
v. State, 685 A.2d 351 (Del.1996); Ingram v. United States, 592 A.2d 992 (D.C.App.1991);
Richardson v. State, 697 N.E.2d 462 (Ind.1998); Mitchell v. State, 114 Nev. 1417, 971 P.2d
813 (1998); State v. Carrasco, 122 N.M. 554, 928 P.2d 939 (1996); State v. Jackson, 137 Wash.2d
712, 976 P.2d 1229 (1999).
102
United States v. Powell, 929 F.2d 724 (D.C.Cir.1991).

constitutionally valid.103 Moreover, in the specific context of homicide, the Ameri-
can law incriminates incidental unplanned homicide committed in the course of the
commission of another planned offense as murder, even if the mental element of the
parties was not adequate for murder.104
English common law imposes criminal liability for the unplanned offense
equally upon all parties of the planned offense—full criminal liability for the
specific offense.105 European-continental legal systems impose criminal liability
for the unplanned offense equally upon all parties of the planned offense—criminal
liability for the specific offense. The English106 and European-continental moderate
approaches are closer to the first extreme approach.107
For the applicability of the probable consequence liability to human–artificial
intelligence offenses, two types of cases should be distinguished. The first type
deals with cases where the programmer designed the artificial intelligence system to
commit certain offense, but the system exceeded the programmer’s plan quantita-
tively (more offenses of the same kind), qualitatively (more offenses of different
kinds) or in both ways. The second type deals with cases where the programmer did
not design the artificial intelligence system to commit any offense, but the system
committed an offense.
The criminal liability in the first type of cases is divided into the liability for the
planned offense and the liability for the unplanned offense. If the programmer
programmed the system to commit a certain offense, it is perpetration-through-another
of that offense, at most. The programmer dictated to the system what it should do;
therefore, he instrumentally used the system for the commission of the offense. The
programmer in this case is solely responsible for that offense, as discussed above.108
For this particular criminal liability, there is no difference between an artificial
intelligence system, another computing system, a screwdriver, or any human innocent agent.

103
State v. Kaiser, 260 Kan. 235, 918 P.2d 629 (1996); United States v. Andrews, 75 F.3d 552 (9th
Cir.1996); State v. Goodall, 407 A.2d 268 (Me.1979). Compare: People v. Kessler, 57 Ill.2d
493, 315 N.E.2d 29 (1974).
104
People v. Cabaltero, 31 Cal.App.2d 52, 87 P.2d 364 (1939); People v. Michalow, 229 N.Y. 325,
128 N.E. 228 (1920).
105
Anderson, [1966] 2 Q.B. 110, [1966] 2 All E.R. 644, [1966] 2 W.L.R. 1195, 50 Cr. App. Rep.
216, 130 J.P. 318:

Put the principle of law to be invoked in this form: that where two persons embark on a joint
enterprise, each is liable for the acts done in pursuance of that joint enterprise, that that
includes liability for unusual consequences if they arise from the execution of the agreed
joint enterprise.
106
English, [1999] A.C. 1, [1997] 4 All E.R. 545, [1997] 3 W.L.R. 959, [1998] 1 Cr. App. Rep.
261, [1998] Crim. L.R. 48, 162 J.P. 1; Webb, [2006] E.W.C.A. Crim. 2496, [2007] All
E.R. (D) 406; O’Flaherty, [2004] E.W.C.A. Crim. 526, [2004] 2 Cr. App. Rep. 315.
107
BGH 24, 213; BGH 26, 176; BGH 26, 244.
108
Above at Sect. 4.2.5.

The exceeding offenses require a different approach. If the artificial intelligence
system is a strong one that has the capability of computing the commission of the
additional offense, the artificial intelligence system would be considered criminally
liable for that offense according to the regular rules of criminal liability, as
described above.109 That completes the artificial intelligence system's criminal
liability. On that basis, the criminal liability of the programmer would be deter-
mined according to the probable consequence liability described above.
Accordingly, if the additional offense was a probable consequence of the
planned offense from the programmer's point of view, criminal liability is imposed
on the programmer for the unplanned offense in addition to the criminal liability for
the planned offense. If the artificial intelligence system is not considered strong and
has no capability of computing the commission of the additional offense, the
artificial intelligence system cannot be considered criminally liable for the
additional offense. The artificial intelligence system in such conditions would be
considered an innocent agent.
The criminal liability for the additional offense would be the programmer's
alone, on the same basis of probable consequence liability described above. Accord-
ingly, if the additional offense was a probable consequence of the planned
offense from the programmer's point of view, criminal liability is imposed on the
programmer for the unplanned offense in addition to the criminal liability for the
planned offense.
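The division of liability for the exceeding offenses in this first type of cases may be sketched schematically as follows; the sketch is illustrative only, and all names are hypothetical:

```python
def exceeding_offense_liability(ai_is_strong: bool,
                                foreseeable_by_programmer: bool) -> dict:
    """Illustrative division of liability for an offense that exceeds the
    offense the programmer originally planned (first type of cases)."""
    liability = {}
    # The AI system answers for the exceeding offense only if it is a strong
    # system capable of consolidating that offense's factual and mental elements.
    liability["ai_system"] = ("direct perpetrator of the exceeding offense"
                              if ai_is_strong
                              else "innocent agent, no criminal liability")
    # The programmer answers under probable consequence liability, in addition
    # to his liability (as perpetrator-through-another) for the planned offense.
    liability["programmer"] = ("liable for the planned offense and for the exceeding offense"
                               if foreseeable_by_programmer
                               else "liable for the planned offense only")
    return liability
```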
In the second type of cases, no intention to commit any offense is attributed to the
programmer. From the programmer's point of view, the occurrence of the offense is no
more than an unwilled accident. Since the initiative was not criminal, the probable
consequence liability would be inappropriate. The centrality of a planned offense for
the imposition of probable consequence liability is crucial, as discussed above. The
probable consequence liability is meant to deal with unplanned developments of a
planned delinquent event. The involved persons' starting point must be delinquent for
the deterring mechanism of probable consequence liability to be used.
When the programmer’s start point is not delinquent, and from his point of view,
the occurrence of the offense is accidental, the deterring mechanism is inappropri-
ate and irrelevant. Applying such mechanism to deal with mistakes and accidents of
no criminal intent would be disproportional. Thus, if the artificial intelligence
system, which actually committed the particular offense, is considered strong and
has the capabilities of consolidating the requirements of the particular offense, it
may be criminally liable for that offense as direct perpetrator of the offense. If not,
the artificial intelligence system shall have no criminal liability for that offense.
However, the programmer is, at most, negligent. The programmer’s criminal
liability in this type of cases is not dependent on the artificial intelligence system's
criminal liability. Whether the artificial intelligence system is criminally liable or
not, the criminal liability of the programmer for the particular unplanned offense is
examined separately. Since the programmer had no intention that any offense

109
Above at Sect. 4.2.4.

would occur, the mental element of general intent is irrelevant for him. The
programmer’s criminal liability in this type of cases is to be examined by standards
of negligence, and his criminal liability would be for negligence offenses, at most.
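This second type of cases may be summarized in the same schematic fashion; again, the sketch is illustrative only and all names are hypothetical:

```python
def unprogrammed_offense_liability(ai_is_strong: bool,
                                   programmer_was_negligent: bool) -> dict:
    """Illustrative allocation when the programmer designed no offense at all,
    yet the AI system committed one (second type of cases)."""
    return {
        # A strong AI system that consolidated the factual and mental elements
        # of the offense may be liable as a direct perpetrator; otherwise it
        # bears no criminal liability.
        "ai_system": ("direct perpetrator" if ai_is_strong
                      else "no criminal liability"),
        # The programmer's starting point was not delinquent, so probable
        # consequence liability does not apply; at most he answers for a
        # negligence offense, examined independently of the AI's liability.
        "programmer": ("liable for a negligence offense, if one exists"
                       if programmer_was_negligent
                       else "no criminal liability"),
    }
```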

4.3 Negligence and Artificial Intelligence Systems

Imposition of criminal liability for negligence offenses requires fulfillment of both
factual and mental elements. The mental element requirement of negligence
offenses is negligence. If artificial intelligence technology is capable of fulfilling
the negligence requirement, the imposition of criminal liability upon it for negli-
gence offenses is possible, feasible and achievable.

4.3.1 Structure of Negligence

Negligence is sometimes used as a requirement of the mental element and sometimes
as a behavioral standard. It has been recognized as a behavioral standard since ancient
times. It has already been mentioned in the Eshnunna laws of the twentieth century BC,110 in the
Roman law,111 in the Canonic law, and in the early English common law.112
However, negligence was then related to as a behavioral standard rather than
as a type of mental element in criminal law. That behavioral standard consisted of
dangerous behavior, which failed to take into account all relevant considerations for acting
the way the individual did.
Only since the seventeenth century has negligence been related to as a type of
mental element in criminal law. In 1664 the English court ruled that negligence is
not adequate for a manslaughter conviction, which requires at least recklessness.113
This ruling gave birth to negligence as a type of mental element in criminal law.
During the nineteenth century negligence was related to as an exception to
the general requirement of general intent.114 Accordingly, it had to be
required explicitly and construed strictly.
The particular offense had to require negligence explicitly, by its definition, for
negligence to be adequate for imposing criminal liability. In the nineteenth century
negligence offenses were still quite rare. Negligence came into more common use in
criminal law as transportation, by horses and automobiles, developed.

110
See REUVEN YARON, THE LAWS OF ESHNUNNA 264 (2nd ed., 1988).
111
Collatio Mosaicarum et Romanarum Legum, 1.6.1-4, 1.11.3-4; Digesta, 48.8.1.3, 48.19.5.2;
Ulpian, 7 de off. Proconsulis. Pauli Sententiae, 1 manual: “magna neglegentia culpa est; magna
culpa dolus est”.
112
HENRY DE BRACTON, DE LEGIBUS ET CONSUETUDINIBUS ANGLIAE 278 (1260; G. E. Woodbine ed.,
S. E. Thorne trans., 1968–1977).
113
Hull, (1664) Kel. 40, 84 Eng. Rep. 1072, 1073.
114
Williamson, (1807) 3 Car. & P. 635, 172 Eng. Rep. 579.

Death cases on the roads became more common, and manslaughter was not appropriate
for these cases. A lower level of homicide was required, and negligent homicide
was considered appropriate.115
When negligence came into common use, the confusion began. Negligence
was interpreted as requiring unreasonable conduct, and that caused confusion
with recklessness of the lower level (rashness), which required the taking of an unreasonable
risk. That confusion caused the creation of the unnecessary terms "gross negligence"
and "wicked negligence".116 Many misleading rulings were given on that basis in
English law,117 until the House of Lords made the distinction clear, not before
2003.118 American law developed negligence as a mental element in criminal
law in parallel to and inspired by the English common law.119
Negligence was accepted as an exception to general intent during the
nineteenth century, but more accurately than in English law.120 The main distinc-
tion between recklessness and negligence was developed around the cognitive
aspect of recklessness. Whereas recklessness requires the cognitive aspect of aware-
ness, as part of the general intent requirement, negligence requires none.121 Both
recklessness and negligence require the taking of an unreasonable risk. However,
the reckless offender is required to be aware of the factual element components,
whereas the negligent offender is not.122 Negligence functions as an
omission of awareness, and it creates a social standard of conduct.
The individual is required to take only reasonable risks.123 Reasonable risks are
measured objectively through the perspective of an abstract reasonable person. The
reasonable person is aware of his factual behavior and takes only reasonable
risks.124 Of course, the reasonableness is determined by the court, and this is done

115
Knight, (1828) 1 L.C.C. 168, 168 Eng. Rep. 1000; Grout, (1834) 6 Car. & P. 629, 172 Eng. Rep.
1394; Dalloway, (1847) 2 Cox C.C. 273.
116
Finney, (1874) 12 Cox C.C. 625.
117
Bateman, [1925] All E.R. Rep. 45, 94 L.J.K.B. 791, 133 L.T. 730, 89 J.P. 162, 41 T.L.R. 557,
69 Sol. Jo. 622, 28 Cox. C.C. 33, 19 Cr. App. Rep. 8; Leach, [1937] 1 All E.R. 319; Caldwell,
[1982] A.C. 341, [1981] 1 All E.R. 961, [1981] 2 W.L.R. 509, 73 Cr. App. Rep. 13, 145 J.P. 211.
118
G., [2003] U.K.H.L. 50, [2003] 4 All E.R. 765, [2004] 1 Cr. App. Rep. 237, 167 J.P. 621, [2004]
Crim. L.R. 369, [2004] 1 A.C. 1034.
119
JEROME HALL, GENERAL PRINCIPLES OF CRIMINAL LAW 126 (2nd ed., 1960, 2005).
120
Commonwealth v. Thompson, 6 Mass. 134, 6 Tyng 134 (1809); United States v. Freeman,
25 Fed. Cas. 1208 (1827); Rice v. State, 8 Mo. 403 (1844); United States v. Warner, 28 Fed. Cas.
404, 6 W.L.J. 255, 4 McLean 463 (1848); Ann v. State, 30 Tenn. 159, 11 Hum. 159 (1850); State
v. Schulz, 55 Ia. 628 (1881).
121
Lee v. State, 41 Tenn. 62, 1 Cold. 62 (1860); Chrystal v. Commonwealth, 72 Ky. 669, 9 Bush.
669 (1873).
122
Commonwealth v. Pierce, 138 Mass. 165 (1884); Abrams v. United States, 250 U.S. 616, 63 L.
Ed. 1173, 40 S.Ct. 17 (1919).
123
Commonwealth v. Walensky, 316 Mass. 383, 55 N.E.2d 902 (1944).
124
See, e.g., People v. Haney, 30 N.Y.2d 328, 333 N.Y.S.2d 403, 284 N.E.2d 564 (1972); Leet
v. State, 595 So.2d 959 (1991); Minor v. State, 326 Md. 436, 605 A.2d 138 (1992); United States
v. Hanousek, 176 F.3d 1116 (9th Cir.1999).

retrospectively in relation to the particular case. The modern development of negligence in American law is expressed in the American Law Institute’s Model
Penal Code as inspired by the European-Continental modern understandings of
negligence.
Accordingly, negligence is a type of mental element in criminal law. It relates to the factual element components, as does any other type of mental element. It requires unawareness of the factual element components, where the reasonable person could and should have been aware of them, and the taking of an unreasonable risk in regard to the results of the offense. This development has been embraced by American courts.125 On that basis, the question is how negligence can function as a mental element in criminal law if the offender is not even required to be aware of his conduct.
Some scholars have indeed called to exclude it from criminal law and leave it to tort law or other civil proceedings.126 However, the justification for negligence as a mental element is concentrated on its function as an omission of awareness. In the very same way that act and omission are considered identical for the imposition of criminal liability, as discussed above in relation to the factual element,127 so may both awareness and omission of awareness be considered a basis for the mental element requirement.
In this analogy to the factual element, negligence is parallel not to inaction but to omission. If a person was simply not aware of the factual element components, and nothing besides that, he is not considered negligent, but innocent. Omission to be aware means that the person was not aware although a reasonable person could and should have been. The individual is considered not to have used his existing capabilities of forming awareness. The individual was not aware, although he had the capability to be, and therefore had the duty to be as well (non scire quod scire debemus et possumus culpa est).
Negligence does not incriminate persons who are incapable of forming awareness, but only those who failed to use their existing capabilities to form awareness. Negligence does not incriminate the blind person for not seeing, but only persons who have the capability of seeing yet failed to use it. Wrong decisions are part of daily human life and are quite common; therefore negligence does not struggle against such situations. Taking risks is also part of human life, and not only does negligence not struggle against it, society encourages risk taking in very many situations.
Negligence struggles only against the taking of unreasonable risks. If people do not take any risks at all, human development is utterly stopped. If no risk had ever been taken,

125
See, e.g., State v. Foster, 91 Wash.2d 466, 589 P.2d 789 (1979); State v. Wilchinski, 242 Conn.
211, 700 A.2d 1 (1997); United States v. Dominguez-Ochoa, 386 F.3d 639 (2004).
126
See, e.g., Jerome Hall, Negligent Behaviour Should Be Excluded from Penal Liability,
63 COLUM. L. REV. 632 (1963); Robert P. Fine and Gary M. Cohen, Is Criminal Negligence a
Defensible Basis for Criminal Liability?, 16 BUFF. L. REV. 749 (1966).
127
Above at Sect. 2.1.1.

our ancestors would still be staring at the burning branch struck by lightning, afraid of taking the risk of getting closer to it, grabbing it, and using it for their needs. People are constantly pushed by society to take risks, but reasonable risks. The question is how modern society can identify the unreasonable risk and distinguish it from the reasonable risks, which are legitimate.
For instance, suppose scientists propose an advanced device that would significantly ease our daily lives. It is comfortable, fast, elegant and accessible. However, using it may cause the death of about 30,000 people per year in the US alone. Would using this device be considered a reasonable risk or an unreasonable one? It may be thought that 30,000 victims each year is an enormous number, and that this makes the use of the device completely unreasonable. However, that device is commonly called a “car”.128 Driving a car is not considered unreasonable in most countries of the world today. However, in the late nineteenth century it was. So it is with trains, planes, ships, and many other of our daily instruments. The reasonability of the risk is relative by nature, and it is determined relative to time, place, society, culture and other circumstances. Different courts in different countries determine different reasonable persons in this context of negligence. The reasonable person must be measured not only as a general abstract person, but should be adapted to the relevant circumstances of the specific offender.
For example, it is not enough to compare the medical malpractice of a physician to the behavior of an abstract reasonable person. This behavior should be compared to that of a reasonable physician of the same expertise, the same experience, the same circumstances of treatment (emergency treatment or other), the same resources, etc. This may focus the standard of reasonability and make it a more subjective standard rather than a purely objective one. This process in relation to artificial intelligence systems is discussed in detail below. Most negligence offenses are result-offenses, since society prefers to use negligence to protect against the factual harms involved in unreasonable risk taking. However, negligence may be required for conduct-offenses as well.
The general structure of negligence includes no volitive aspect, but only a cognitive one. Since volition is supported by cognition, and since negligence does not require awareness, it cannot require components of volition. The cognitive aspect of negligence consists of an omission of awareness in relation to all factual element components. The negligence requirements in relation to conduct and circumstances are identical. Both require unawareness of the component (conduct/circumstances) in spite of the capability to form awareness, when a reasonable person could and should have been aware of that component.
The reasonability in these components of negligence is examined in relation to the capability and duty to form awareness, although no awareness has actually been formed by the offender. The negligence requirement in relation to the results requires unawareness of the possibility of the results’ occurrence in spite of the capability to form awareness, when a reasonable person could and should have

128
For car accidents statistics in US see, e.g., http://www.cdc.gov/motorvehiclesafety/.

been aware of that possibility as an unreasonable risk. The reasonability in this component is focused on identifying the occurrence of the results as a possibility of unreasonable risk. It means that the risk taking of the offender in the specific event, under the relevant circumstances, is considered unreasonable.
The modern structure of negligence continues the minimal concept of criminal
law. It contains both inner and external aspects. Inward, negligence is the minimal
requirement of mental element for each of the factual element components. Conse-
quently, if negligence is proven in relation to the circumstances and results, but in
relation to the conduct awareness is proven, that satisfies the requirement of
negligence. It means that for each of the factual element components at least
negligence is required but not exclusively negligence. Outwards, negligence
offenses’ mental element requirement is satisfied through at least negligence, but
not exclusively. It means that criminal liability for negligence offenses may be
imposed through proving general intent as well as negligence.
Since negligence is still considered an exception to the general requirement of general intent, it has been required as an adequate mental element only for relatively lenient offenses. In some legal systems around the world, negligence has even been restricted ex ante to lenient offenses.129 This general structure of negligence is a template which contains terms from the mental terminology (e.g., reasonability). In order to explore whether artificial intelligence technology is capable of fulfilling the negligence requirement in particular offenses, these terms must be examined.

4.3.2 Negligence and Artificial Intelligence Technology

In general, the core of the negligence template in relation to the factual element components is expressed by unawareness of the factual component in spite of the capability to form awareness, when a reasonable person could and should have been aware of that component. Unawareness is naturally the opposite of awareness, which is required by general intent. Consequently, for a human to be considered aware of certain factual data, two cumulative conditions are required:

(a) absorbing the factual data by the senses; and-
(b) creating a relevant general image of this data in the brain.

If one of these conditions is missing, the person is not considered to be aware. Unawareness may arise through the absence of at least one of the above conditions. When the factual data has not been absorbed by the senses, or when it has been absorbed but no relevant general image has been created, this is considered unawareness of that factual data. Since awareness is a binary situation in the context of criminal law, no partial awareness is recognized. Therefore, if parts of the

129
E.g., in France. See article 121-3 of the French penal code.

awareness process existed, but the process was not accomplished in full, the person is regarded as unaware of the relevant factual data. This is true for
both human and artificial intelligence offenders.
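The binary character of this two-condition model can be sketched in a few lines of code. The sketch below is purely illustrative and hypothetical; the function name and its inputs are not drawn from any actual artificial intelligence system, and they merely restate the two cumulative conditions described above.

# Purely illustrative sketch of the two-condition model of awareness.
# The inputs are hypothetical findings, not features of a real system.

def is_aware(data_absorbed_by_senses: bool, general_image_formed: bool) -> bool:
    # Awareness is binary: both conditions must hold. A partially
    # completed awareness process still counts as unawareness.
    return data_absorbed_by_senses and general_image_formed

print(is_aware(True, False))  # False - unawareness (no general image formed)
print(is_aware(True, True))   # True  - awareness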
However, for the unawareness to be considered an omission of awareness, and not mere unawareness, it should exist in spite of the capability to form awareness, when a reasonable person could and should have been aware. These are, in fact, two conditions:

(a) possessing the cognitive capabilities of consolidating awareness; and-
(b) a reasonable person could and should have been aware of the factual data.

The first condition deals with physical capability. If the offender lacks these capabilities, regardless of the offense, no criminal liability for negligence may be imposed. It would be no different from punishing the blind for not seeing. Consequently, negligent offenders are only those who possess the capabilities of forming awareness. That is true for both human and artificial intelligence offenders. Thus, for the imposition of criminal liability for a negligence offense upon artificial intelligence technology, the artificial intelligence system must possess the capabilities of forming awareness. An artificial intelligence system which lacks these capabilities cannot be considered an offender of negligence offenses in this manner. Of course, it cannot be considered an offender of general intent offenses either.
Proving these capabilities is done through the general features of the artificial intelligence system, regardless of the specific case. At this point, it is known that the offender was not aware of the relevant factual data, and it is also known that he has the capability of being aware. For that to become negligence, it should be proven that a reasonable person could and should have been aware of the factual data. The “reasonable person” is a mental standard to be compared to. Although in some other legal spheres the reasonable person refers to a standard higher than the average person, in criminal law it refers to the average person.130
The reasonable person is filled with different content by different societies and cultures in different times and places. The reasonable person is supposed to reflect the existing relevant situation in the specific society and not to be used by courts to change the current situation. This standard relates to the cognitive processes that should have occurred.
The reasonable person is measured through two cumulative paths of cognitive activity:

130
In Hall v. Brooklands Auto Racing Club, [1932] All E.R. 208, [1933] 1 K.B. 205,
101 L.J.K.B. 679, 147 L.T. 404, 48 T.L.R. 546 the “reasonable person” has been described as:

The person concerned is sometimes described as ‘the man in the street’, or ‘the man in the
Clapham omnibus’, or, as I recently read in an American author, ‘the man who takes the
magazines at home, and in the evening pushes the lawn mower in his shirt sleeves’.

(a) he should take into consideration all relevant considerations; and in addition-
(b) he should give these considerations their proper weight.

Thus, ignorance of relevant considerations is considered to be unreasonable. Moreover, even if all relevant considerations have been taken into account but have not been weighed properly, it is considered unreasonable as well.
In general, the relevancy of the considerations and their proper weight are to be determined by the court ex post. The complexity and variety of common situations in life make it impossible to characterize one general type of reasonable person. Such a type would be purely objective, but also unrealistic and irrelevant to too many situations in life. As a result, the objective standard of the reasonable person should be adjusted with some subjectivity for it to match the particular case. The subjective adjustment is the connecting link between the specific offender and the objective and general standard of the reasonable person.
Thus, the reasonable person must be measured not only as a general abstract person, but should be adapted to the relevant circumstances of the specific offender. Society expects different persons to behave differently. An experienced attorney acts differently than an inexperienced one, even in the very same situations. So do pilots, drivers, physicians, surgeons, policemen, and, in fact, all of us. Moreover, the very same person acts differently under different circumstances in relation to the considerations taken and their related weight. A soldier under a real enemy attack, in an emergency situation, and under real danger to life acts differently than in routine training.
Consequently, the relevant reasonable person is not a general standard of objectivism, but includes relevant subjective reflections of the particular offender.131 An offender who is a surgeon with 10 years of experience, who acted in emergency conditions and used the limited resources available for the surgery, would be compared to the reasonable surgeon of the same attributes. The relevant attributes for that comparison, and their content and effect, are determined by the court, which may rely for that purpose on the expert opinions of professionals.
The reasonable person forms a sphere of reasonability. This sphere contains all types of reasonable behaviors in the particular situation. It is assumed that there are several reasonable ways to behave in these situations. Only deviation from the sphere of reasonability by the individual may form negligence. When the relevant factual data relates to the results of the conduct, the possibility of the results’ occurrence should be considered an unreasonable risk. An unreasonable risk in this context means acting outside the sphere of reasonability towards the results. The

131
State v. Bunkley, 202 Conn. 629, 522 A.2d 795 (1987); State v. Evans, 134 N.H. 378, 594 A.2d
154 (1991).

possibility of the results occurring should be an unreasonable risk, i.e., the occurrence of the results is a risk such that taking it is unreasonable in this situation.132
Reasonable and unreasonable risks are measured in the same way as reasonable and unreasonable persons, as described above. For the risk to be considered reasonable, the individual should take into consideration all relevant considerations and give these considerations their proper weight. If one of the resulting courses of action is taking that risk, the risk is considered reasonable. If not, it is unreasonable. For the fulfillment of negligence by an artificial intelligence system, the system must make unreasonable decisions. The ultimate question here is whether a machine can be reasonable, or perhaps whether a machine can be unreasonable.
Analytically speaking, the reasonability of a machine is no different from that of a human. Both should take into consideration the relevant considerations and give them their proper weight. This can easily be a matter of calculation. The relevant considerations are no more than factors in an equation, and their proper weight is the way these factors are combined. The equation may be constant if programmed to be so. However, the machine learning feature changes that. Machine learning is a process of generalization through induction from many specific cases. The machine learning feature enables the artificial intelligence system to change the equation
from time to time.
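A minimal sketch can make this concrete. In the hypothetical code below, the “equation” is a weighted sum of considerations, a risk is scored as reasonable only if the sum clears a threshold, and a crude induction step adjusts the weights after each decided case. The factor names, weights, threshold and learning rule are invented for illustration only and are not drawn from any actual system.

# Hypothetical sketch: reasonability as a weighted "equation" of
# considerations, with weights rephrased as cases are analyzed.

def risk_is_reasonable(considerations, weights, threshold=0.5):
    # Take all relevant considerations into account and give each its weight.
    score = sum(weights.get(name, 0.0) * value
                for name, value in considerations.items())
    return score >= threshold

def learn_from_case(weights, considerations, decision_was_right, rate=0.1):
    # Crude induction step: strengthen or weaken the weights of the factors
    # that drove the last decision, according to its observed outcome.
    direction = 1.0 if decision_was_right else -1.0
    for name, value in considerations.items():
        weights[name] = weights.get(name, 0.0) + direction * rate * value

# Initial "equation" as determined by the human programmer.
weights = {"benefit": 0.6, "probability_of_harm": -0.8, "severity_of_harm": -0.4}

case = {"benefit": 0.9, "probability_of_harm": 0.3, "severity_of_harm": 0.5}
decision = risk_is_reasonable(case, weights)

# After the consequences are known, the equation is rephrased, so the
# same facts may be decided differently the next time.
learn_from_case(weights, case, decision_was_right=False)
print(decision, weights)

The constant threshold and the additive update are arbitrary choices made only to keep the sketch short; any learning mechanism that rephrases the relevant algorithm over time would illustrate the same point.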
In fact, effective machine learning should cause changes in the equation almost every time a specific case is analyzed. This is what happens to our image of the world as our life experience becomes richer and wider. If the equation remains constant, the machine learning is considered absolutely ineffective. Expert systems which are not equipped with machine learning are no different than a human expert who insists on not being updated and on not practicing his expertise. Machine learning is essential for the artificial intelligence system to keep developing and not be blocked in stagnation.
When the artificial intelligence system is activated for the first time, the equation and its factors are programmed by human programmers. The human programmer determines what the reasonable course of conduct is in the relevant cases. Afterwards, after analyzing a few cases, the system identifies exceptions, wider or narrower definitions, new connections between existing factors, new factors, etc. The artificial intelligence system’s way of generalizing the knowledge absorbed from the particular cases is to rephrase the relevant equation. The term “equation” is used here to describe the relevant algorithm, but, of course, it is not necessarily an equation in its mathematical sense.
Changing the relevant equation by rephrasing it creates the possibility of making different decisions than those made in the past. This process of induction is at the core of machine learning, and the changes in the equation form, in fact, a

132
People v. Decina, 2 N.Y.2d 133, 157 N.Y.S.2d 558, 138 N.E.2d 799 (1956); Government of the
Virgin Islands v. Smith, 278 F.2d 169 (3rd Cir.1960); People v. Howk, 56 Cal.2d 687, 16 Cal.Rptr.
370, 365 P.2d 426 (1961); State v. Torres, 495 N.W.2d 678 (1993).

different sphere of right decisions. For instance, a medical expert artificial intelligence system is given lists of symptoms of the common cold and influenza. When activated for the first time, it diagnoses them according to the given symptoms. However, after some more cases the system learns to treat additional symptoms as crucial for distinguishing between the common cold and influenza, such as the exact temperature of the patient.
If the system is required to recommend medical treatments, different treatments are recommended according to the different diagnoses. Sometimes the expert system is not “sure”. The symptoms may match two different diseases. The system can assess the probabilities according to the factors measured and analyzed.
For instance, the expert system may determine that there is a probability of 38 % that the patient has a common cold, and 62 % that it is influenza. Processing these probabilities may be the cause of the particular negligence in artificial intelligence systems. Mistaken conclusions may occur in both sure and unsure conclusions. The system may be sure of a mistaken conclusion, and it can also assess probabilities mistakenly. The mistakes may be caused by wrong changes to the equation, the wrong factors being considered, wrong disregard of certain factors, or wrong weight related
to certain factors.
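A hypothetical sketch can illustrate how such probabilities might be produced and how a wrongly learned weight can shift them. The naive Bayes-style calculation and all the likelihood and prior values below are invented for illustration; they are not the method or data of any actual medical expert system.

# Hypothetical sketch: producing diagnostic probabilities from symptoms.
# All likelihoods and priors are invented; a real expert system would
# derive them from its database and its machine learning process.

def diagnose(symptoms, likelihoods, priors):
    # Naive Bayes-style scoring, normalized into probabilities.
    scores = {}
    for disease, prior in priors.items():
        score = prior
        for symptom, present in symptoms.items():
            p = likelihoods[disease][symptom]
            score *= p if present else (1.0 - p)
        scores[disease] = score
    total = sum(scores.values())
    return {disease: score / total for disease, score in scores.items()}

likelihoods = {
    "common cold": {"fever_above_38": 0.2, "muscle_aches": 0.3, "runny_nose": 0.9},
    "influenza":   {"fever_above_38": 0.8, "muscle_aches": 0.8, "runny_nose": 0.4},
}
priors = {"common cold": 0.6, "influenza": 0.4}

patient = {"fever_above_38": True, "muscle_aches": True, "runny_nose": True}
print(diagnose(patient, likelihoods, priors))
# A wrongly learned likelihood (a "wrong weight") shifts these probabilities
# and may flip the diagnosis and the recommended treatment.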
These mistakes are a byproduct of errors in the machine learning process. More precisely, they are ex post errors, i.e., errors which are considered errors only after the decision has been made and according to the consequences of the decision. Humans tend to learn empirically through the trial and error method. Analytically, machine and human errors, in this context, are of the same type. Understanding the error, its causes and the ways to avoid it is part of the learning process, both human and machine. On this ground, it should be asked what is to be considered a reasonable decision in this context.
The major question is whether, given the starting point of the system in relation to its basic factual data and given its specific experience through machine learning, a reasonable person could have been aware of the relevant factual data. The derivative question would be, of course, who this reasonable person is: human or machine. If the general concept of softening objectivity by adding some subjective characteristics is accepted, the reasonable person should have attributes similar to those of the offender. Only then may the reasonability of the offender’s decision be measured, and no injustice done.
Therefore, if the offender has the capability of machine learning, so should the reasonable person under that concept. Thus, the reasonable person in the context of measuring the reasonability of an artificial intelligence system’s decisions would be the reasonable artificial intelligence system of the same type. That might seem a tricky way for the human programmers, operators and users to escape criminal liability and leave it to the mistaken machine. What would be easier for a medical staff than to use an expert artificial intelligence system and follow its recommendations, so that if the system is wrong, the system is the only one to be criminally liable? However, the legal situation is not that simple.
The very decisions of placing the specific artificial intelligence system in its position, using it, following its recommendations, etc. are subject to negligence offenses as well. The artificial intelligence system has the capability of fulfilling the

negligence requirements, but that is not an exemption from criminal liability for other persons involved in the particular situation. The very decision to use the artificial intelligence system may, by itself, be subject to criminal liability. For example, if the decision was made with awareness of the relevant mistakes, and these mistakes caused death, the human decision may lead to a charge of murder. However, if there was no awareness, but a reasonable person in this situation could and should have been aware, it may lead to a charge of negligent homicide.
Analysis of the reasonable machine relates, in fact, to the feature of machine learning. The imposition of criminal liability in negligence offenses must relate to and analyze the machine learning process which led to the mistaken decision. Access to this process is based on the records of the artificial intelligence system itself. However, assessing the reasonability of the decision-making process within the machine learning may be based on expert opinions. This is the very same way negligence is proven in courts in relation to human offenders. It is not rare to prove or refute a human offender’s negligence through expert opinions.
For instance, when the medical expert artificial intelligence system produces
probabilities of 38 % common cold and 62 % influenza, a medical expert may
explain to the court why these probabilities are reasonable/unreasonable under the
specific conditions of the case, and a computer scientist may explain to the court the
process of producing these probabilities based on the artificial intelligence system’s
particular machine learning process and existing database.
Accordingly, the court has to decide three questions:

(a) Was the artificial intelligence system unaware of the factual component?
(b) Did the artificial intelligence system have the general capability of consolidating
awareness of the factual component?
(c) Could a reasonable person have been aware of the factual component?

If the answer to all three questions is positive, and that is proven beyond any reasonable doubt, the artificial intelligence system has fulfilled the requirements of the particular negligence offense. Artificial intelligence systems which are capable of forming awareness for general intent offenses, as discussed above,133 have neither a technological nor a legal problem forming negligence for negligence offenses, for negligence is a lower level of mental element than general intent. Thus, negligence is relevant to artificial intelligence technology, and its proof in court is
possible.
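The three-question test can be restated as a simple decision procedure. In this hypothetical sketch, each field stands for a finding that the court would have to make on the evidence, beyond any reasonable doubt; the names are illustrative only.

# Hypothetical sketch of the court's three-question test described above.

from dataclasses import dataclass

@dataclass
class NegligenceFindings:
    system_was_unaware: bool                 # question (a)
    system_capable_of_awareness: bool        # question (b)
    reasonable_person_would_be_aware: bool   # question (c)

def negligence_established(f: NegligenceFindings) -> bool:
    # Only positive answers to all three questions fulfill the mental
    # element of the particular negligence offense.
    return (f.system_was_unaware
            and f.system_capable_of_awareness
            and f.reasonable_person_would_be_aware)

print(negligence_established(NegligenceFindings(True, True, True)))   # True
print(negligence_established(NegligenceFindings(True, False, True)))  # False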
Accordingly, the question is who is to be criminally liable for the commission of this kind of offense. In general, imposition of criminal liability for negligence offenses requires the fulfillment of both the factual and mental elements of these offenses. Humans are involved in the creation of artificial intelligence technology, its design, programming and operation. Consequently, when the factual and mental elements of the offense are fulfilled by artificial intelligence

133
Above at Sect. 4.2.4.

technology, the question is who is to be criminally liable for the offenses committed.

4.3.3 Direct Liability

When negligence is examined, if the offender fulfills both the factual and mental element requirements of the particular offense, criminal liability is to be imposed, in the same way as with general intent. In addition, in the very same way as in general intent offenses, the court is not supposed to check whether the offender has been “evil”, “immoral”, etc. That is true for all types of offenders: humans, corporations and artificial intelligence technology. Therefore, the same justifications for the imposition of criminal liability upon artificial intelligence technology in general intent offenses are relevant here as well.
As long as the narrow fulfillment of these requirements exists, criminal liability should be imposed. However, negligence offenses differ from general intent offenses also in their social purpose. The relevant question is whether this different social purpose is relevant not only for humans and corporations, but for artificial intelligence technology as well. In the first place, negligence offenses were not designed to deal with “evil” persons, but with individuals who made mistakes in their discretion. Therefore, the debate about evil in criminal law is not relevant for negligence offenses, as it may be for general intent offenses.
The criminal law in this context functions as an educator, designing the outlines of individual discretion. The boundaries and borderlines of that discretion are drawn up by negligence offenses. Sometimes any person may exercise discretion in a wrong way. Most of the time that does not contradict criminal law norms. For instance, people may choose the wrong husband or wife, having exercised their subjective individual discretion in a wrong way, but that is not a matter for the criminal law. So it is with exercising our inner discretion in a wrong way when choosing cars, employers, houses and even faith.
However, in some cases it does contradict a norm of criminal law; for instance, when a person’s wrong inner discretion leads to someone’s death (negligent homicide).134 As long as our wrong discretion does not contradict the criminal law, society expects us to learn our lesson on our own. The next time a person chooses a car, house, employer, etc., he will be much more careful in examining the details relevant to the purchase. This is how human life experience is gained in general. However, society takes the risk that its members will not learn the lesson, and it still does not intervene through criminal law.
However, at some point, when criminal offenses are committed, society does not take the risk of letting the individual learn the lesson as an autodidact. The

134
See, e.g., Rollins v. State, 2009 Ark. 484, 347 S.W.3d 20 (2009); People v. Larkins, 2010 Mich.
App. Lexis 1891 (2010); Driver v. State, 2011 Tex. Crim. App. Lexis 4413 (2011).

social harm in these cases is too grave to be left under such risk. In this type of case society intervenes, and that is done through negligence offenses. The purpose is to make it more certain that the specific individual will learn the relevant lesson. Prospectively, it is assumed that after the lesson is taught, the probability of re-commission of the offense will be much lower.
Thus, for instance, society educates its physicians to be much more careful in performing surgeries, its employers in protecting their employees’ lives, its construction companies in using more secure constructions, its factories in creating less pollution, etc. Human and corporate offenders are supposed to learn their lesson through the criminal law. Would this be relevant for artificial intelligence technology as well? The answer is positive. For the educative purpose of criminal negligence, it is true that there is not much use or utility in the imposition of criminal liability unless the offender has the ability to learn.
If society wants to make the offender learn the lesson from his mistakes, it must assume that the offender has the capability to learn. If such capabilities exist and are exercised, criminal liability for negligence offenses is necessary. However, if no such capabilities exist or are exercised, it is completely unnecessary, for no prospective value is expected here: using or not using criminal liability for the negligence offense would lead to the same results. For artificial intelligence systems which are equipped with the relevant capabilities of machine learning, criminal liability for negligence offenses is no less necessary.
In the very same way as for humans, negligence offenses may draw up the boundaries and borderlines of discretion for artificial intelligence systems. Humans, corporations and artificial intelligence systems alike are supposed to learn from their mistakes and improve their decisions prospectively. When the mistakes are a matter for the criminal law, the criminal law intervenes in shaping the decision-maker’s discretion. For the artificial intelligence system, criminal liability for negligence offenses is a chance to reconsider the decision-making process in light of the external limitations dictated by the criminal law.
If society has learned through the years that the human process of decision-making requires criminal liability for negligence in order to be improved, this logic is no less relevant for artificial intelligence systems using machine learning methods. Society could say that artificial intelligence systems can simply be reprogrammed, but then their precious experience, gained through machine learning, would be lost. Society may say that artificial intelligence systems are capable of learning their boundaries and fixing their discretion on their own, but it could say the very same thing of humans, and still society imposes criminal liability upon humans for negligence offenses.
Consequently, if artificial intelligence technology has the required capabilities of fulfilling both the factual and mental elements of criminal liability for negligence offenses, and if the rationale for the imposition of criminal liability for these offenses is relevant for both humans and artificial intelligence systems, there is no reason to avoid criminal liability in these cases. However, this is not the only way artificial intelligence is involved in criminal liability for negligence offenses.

4.3.4 Indirect Liability

As described above in the context of general intent, the most common way to deal with the instrumental use of individuals for the commission of offenses is the general form of perpetration-through-another.135 In order to impose criminal liability for perpetration-through-another of a particular offense, it is necessary to prove awareness of that instrumental use. Consequently, perpetration-through-another is applicable only in general intent offenses. In most cases, the other person being instrumentally used by the perpetrator is considered an “innocent agent” and no criminal liability is imposed upon him.
The analysis of perpetration-through-another in the context of general intent offenses has been discussed above. However, the person who is instrumentally used may also be considered a “semi-innocent agent”, who is criminally liable for negligence, although the perpetrator is criminally liable for a general intent offense. This is the case where negligence may be relevant for perpetration-through-another, and it completes the discussion of that construction.
For example, a nurse in a surgery room learns that a person who attacked her in the past is about to undergo surgery. She decides that he deserves to die. She infects the surgical instruments with lethal bacteria, and when the surgeon comes to make sure that the instruments are sterilized, she tells him that she sterilized them and that this is her responsibility. The surgery begins, with the surgeon unaware of the infected instruments, and the patient is infected by the bacteria. A few hours after the surgery ends, the patient dies of the infection.
Legal analysis of the case would treat the nurse as a perpetrator-through-another of murder, as she instrumentally used the surgeon to commit the patient’s murder. The surgeon’s criminal liability in this case depends on his mental state. If he were an innocent agent, he would be exempt from criminal liability. However, if the surgeon has the legal duty to make sure the instruments are sterilized, he is not completely an innocent agent, since he failed to fulfill his legal duties. On the other hand, he was not aware of the infection. This is a case of negligence. When the agent is not aware of crucial elements of the offense, but a reasonable person in his place could and should have been aware, this agent is negligent. This is the “semi-innocent agent”.136
Thus, when one person instrumentally uses another person who is negligent as to the commission of the offense, it is perpetration-through-another, but both persons are criminally liable: the perpetrator for a general intent offense (e.g., murder) and the other person for a negligence offense (e.g., negligent homicide). Since artificial intelligence systems have the capability of forming negligence as a mental element, the question is whether they may function as semi-innocent agents. The case for an artificial intelligence system as a semi-innocent agent is where the perpetrator (human, corporation or artificial intelligence system) instrumentally uses an

135
Above at Sect. 4.2.5.
136
See, e.g., Peter Alldridge, The Doctrine of Innocent Agency, 2 CRIM. L. F. 45 (1990).

artificial intelligence system for the commission of the offense, but although instrumentally used, the artificial intelligence system was negligent as to the commission of that very offense.
Only artificial intelligence systems which have the capability of fulfilling the mental element requirement of negligence offenses may be considered, and function as, semi-innocent agents. However, it is not the case that whenever the artificial intelligence system has the capability of negligence it automatically functions as a semi-innocent agent. This capability is necessary for that function, but it is certainly not adequate.
The semi-innocent agent, whether human, corporation or machine, should be examined ad hoc in the particular case. Only if the agent behaved negligently towards the commission of the offense may it be considered a semi-innocent agent. Thus, if the instrumentally used artificial intelligence system did not consolidate awareness of the relevant factual data, but it had the capability and a reasonable person could have consolidated such awareness, the artificial intelligence system is to be considered a semi-innocent agent within the perpetration-through-it.
The perpetrator’s criminal liability is not affected by the agent’s criminal liability, if any. The perpetrator-through-another’s criminal liability is for the relevant general intent offense, whether the instrumentally used artificial intelligence system has no criminal liability (i.e., it is an innocent agent or lacks the relevant capabilities) or has criminal liability for negligence (i.e., it is a semi-innocent agent). As a result, using the legal construction of perpetration-through-another for the instrumental use of artificial intelligence systems has the same consequences for the perpetrator as for a general intent offender, regardless of the artificial intelligence system’s criminal liability.
The agent’s criminal liability in these cases is not directly affected by the perpetrator’s criminal liability. If the artificial intelligence system was negligent (i.e., it fulfilled both the factual and mental element requirements of the negligence offense), criminal liability for negligence would be imposed upon it. Such a system is also to be classified as a semi-innocent agent within the context of the particular perpetration-through-another. If the artificial intelligence system was not negligent, due to its incapability or for any other reason, no criminal liability is imposed upon it. Such a system is to be classified as an innocent agent within the context of the particular perpetration-through-another. To make the picture clearer, if the artificial intelligence system is neither an innocent agent nor a semi-innocent agent, then it has fulfilled the requirements of the general intent offense in full.
This is not a case of perpetration-through-another, but of principal perpetration by the artificial intelligence system. If the artificial intelligence system has the capability to be criminally liable for general intent offenses as a sole offender, there is nothing to prevent it from committing the offense jointly with other entities: humans, corporations or other artificial intelligence systems. Complicity in which artificial intelligence systems participate requires at least general intent, not negligence, since it requires awareness of the very complicity and of the delinquent association. This situation is not substantively different than the fulfillment of any other general intent offense.

4.3.5 Combined Liabilities

Probable consequence liability deals with cases of the commission of an unplanned offense (different or additional). The question in these cases concerns the criminal liability of the other parties to the unplanned offense committed by one party. Probable consequence liability as to unplanned general intent offenses was discussed above.137 The relevant question here concerns probable consequence liability for an unplanned negligence offense committed by an artificial intelligence system. Are the programmers, users and other related persons criminally liable for an unplanned negligence offense committed by an artificial intelligence system? This question completes the above two discussions of human criminal liability for an artificial intelligence system’s negligence, in addition to or instead of the artificial intelligence system’s criminal liability for that offense.
For example, a medical expert artificial intelligence system is used for the diagnosis of certain types of diseases through analyzing the patient’s symptoms. The artificial intelligence system’s analysis is based on machine learning, which inductively analyzes and generalizes specific cases. The system fails to diagnose one case correctly, and that leads to the wrong treatment, which worsens the patient’s condition and finally causes the patient’s death. The analysis of the artificial intelligence system’s activity reveals its negligence, and it fulfills both the factual and mental element requirements of the relevant negligence offense (negligent homicide).
At this point the question of the programmer’s criminal liability for that offense arises. His criminal liability is not related to the decision to use the artificial intelligence system, to follow its diagnosis, etc., but to the very initial programming of the system. If the programmer had programmed the system to kill patients and instrumentally used it for this purpose, it would have been perpetration-through-another of murder, but this is not the case here. For the programmer’s criminal liability in this case, probable consequence liability may be relevant.
The mental condition for probable consequence liability requires the unplanned offense to be “probable” for the party who did not actually commit it. It is necessary that this party could have foreseen and reasonably predicted the commission of the offense. Some legal systems prefer to examine actual and subjective foreseeability (the party actually and subjectively foresaw the occurrence of the unplanned offense), whereas others prefer to evaluate the ability to foresee through an objective standard of reasonability (the party did not actually foresee the occurrence of the unplanned offense, but any reasonable person in his place could have).
Actual foreseeability parallels subjective general intent, whereas objective foreseeability parallels objective negligence. Consequently, in legal systems that require objective foreseeability, the programmer should be at least negligent as to the commission of the negligence offense by the artificial intelligence system

137
Above at Sect. 4.2.6.

for imposition of criminal liability for that offense. However, in legal systems that
require subjective foreseeability, the programmer should be at least aware of the
possibility of the commission of the negligence offense by the artificial intelligence
system for imposition of criminal liability for that offense.
However, if the programmer had neither subjective nor objective foreseeability of the commission of the offense, probable consequence liability would be irrelevant. In this type of case no criminal liability would be imposed upon the programmer, and the artificial intelligence system’s criminal liability for the negli-
gence offense would not affect the programmer’s liability.
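The allocation rule discussed in this section can be sketched as follows; the flags are hypothetical shorthand for the two foreseeability standards and for the findings made in the particular case, not a statement of any specific legal system’s test.

# Hypothetical sketch of probable consequence liability for the programmer
# of an artificial intelligence system that committed an unplanned offense.

def programmer_liable(subjective_standard: bool,
                      actually_foresaw: bool,
                      reasonable_person_could_foresee: bool) -> bool:
    if subjective_standard:
        # Legal systems requiring actual, subjective foreseeability.
        return actually_foresaw
    # Legal systems evaluating foreseeability by the objective standard
    # of the reasonable person.
    return reasonable_person_could_foresee

# Where the programmer had neither subjective nor objective foreseeability,
# no liability attaches under either standard.
print(programmer_liable(True, False, False))   # False
print(programmer_liable(False, False, False))  # False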

4.4 Strict Liability and Artificial Intelligence Systems

Imposition of criminal liability for strict liability offenses requires fulfillment of both factual and mental elements. The mental element requirement of strict liability
offenses is strict liability or presumed negligence. If artificial intelligence technol-
ogy is capable of fulfilling the strict liability requirement, the imposition of criminal
liability upon it for strict liability offenses is possible, feasible and achievable.

4.4.1 Structure of Strict Liability

In general, strict liability has been accepted as a form of mental element requirement in criminal law as a development from absolute liability. Since the eighteenth century, English common law has determined for some particular offenses that they require neither general intent nor negligence. These particular offenses were referred to as public welfare offenses.138 These offenses were inspired by tort law, which accepted absolute liability as legitimate.
Consequently, these particular offenses were criminal offenses of absolute liability, and imposition of criminal liability for them required proof of the factual element alone.139 These absolute liability offenses were considered exceptional, since no mental element was required. In some cases Parliament intervened and required a mental element,140 and in other cases court rulings added mental element requirements.141 By the mid-nineteenth century English courts had begun to consider efficiency as part of criminal law in various contexts. That gave rise to the development of convictions on the basis of public inconvenience.

138
Francis Bowes Sayre, Public Welfare Offenses, 33 COLUM. L. REV. 55, 56 (1933).
139
See, e.g., Nutt, (1728) 1 Barn. K.B. 306, 94 Eng. Rep. 208; Dodd, (1736) Sess. Cas.
135, 93 Eng. Rep. 136; Almon, (1770) 5 Burr. 2686, 98 Eng. Rep. 411; Walter, (1799) 3 Esp.
21, 170 Eng. Rep. 524.
140
See, e.g., 6 & 7 Vict. c.96.
141
Dixon, (1814) 3 M. & S. 11, 105 Eng. Rep. 516; Vantandillo, (1815) 4 M. & S. 73, 105 Eng.
Rep. 762; Burnett, (1815) 4 M. & S. 272, 105 Eng. Rep. 835.

Offenders were indicted for particular offenses, and they were convicted although no mental element was proven, due to the public inconvenience caused by the commission of the offense.142 These convictions created, in fact, an upper threshold of negligence, a kind of increased negligence. Accordingly, the individual must be strict and make sure that no offense is committed. This standard of behavior is higher than in negligence, which requires just behaving reasonably.
In these offenses, more than reasonability is required: the individual must make sure that no offense whatsoever is committed. Such offenses show a clear preference for the public welfare over strict justice towards the potential offender. Since these offenses were not considered grave and severe, they were widened “for the good of all”.143 This development was considered necessary due to the legal and social developments of the first industrial revolution. For instance, the increasing number of workers in the cities led employers to worsen the workers’ social conditions.
Parliament intervened through social welfare legislation, and the efficient enforcement of this legislation was through absolute liability offenses.144 It was immaterial whether or not the employer knew what the proper social conditions for the workers were; he had to make sure that no violation of these conditions occurred.145 In the twentieth century this type of criminal liability spread to other spheres of law, including traffic law.146 American criminal law accepted absolute liability as a basis for criminal liability in the mid-nineteenth century,147 while ignoring previous rulings that did not accept it.148
This acceptance was restricted only to petty offenses, whose violation was punished through fines, and not very severe fines at that. Similar acceptance occurred at

142
Woodrow, (1846) 15 M. & W. 404, 153 Eng. Rep. 907.
143
Stephens, [1866] 1 Q.B. 702; Fitzpatrick v. Kelly, [1873] 8 Q.B. 337; Dyke v. Gower, [1892]
1 Q.B. 220; Blaker v. Tillstone, [1894] 1 Q.B. 345; Spiers & Pond v. Bennett, [1896] 2 Q.B. 65;
Hobbs v. Winchester Corporation, [1910] 2 K.B. 471; Provincial Motor Cab Company Ltd.
v. Dunning, [1909] 2 K.B. 599, 602.
144
W. G. Carson, Some Sociological Aspects of Strict Liability and the Enforcement of Factory
Legislation, 33 MOD. L. REV. 396 (1970); W. G. Carson, The Conventionalisation of Early Factory
Crime, 7 INT’L J. OF SOCIOLOGY OF LAW 37 (1979).
145
AUSTIN TURK, CRIMINALITY AND LEGAL ORDER (1969).
146
NICOLA LACEY, CELIA WELLS AND OLIVER QUICK, RECONSTRUCTING CRIMINAL LAW 638–639 (3rd
ed., 2003, 2006).
147
Barnes v. State, 19 Conn. 398 (1849); Commonwealth v. Boynton, 84 Mass. 160, 2 Allen
160 (1861); Commonwealth v. Goodman, 97 Mass. 117 (1867); Farmer v. People, 77 Ill.
322 (1875); State v. Sasse, 6 S.D. 212, 60 N.W. 853 (1894); State v. Cain, 9 W. Va. 559 (1874);
Redmond v. State, 36 Ark. 58 (1880); State v. Clottu, 33 Ind. 409 (1870); State v. Lawrence,
97 N.C. 492, 2 S.E. 367 (1887).
148
Myers v. State, 1 Conn. 502 (1816); Birney v. State, 8 Ohio Rep. 230 (1837); Miller v. State,
3 Ohio St. Rep. 475 (1854); Hunter v. State, 30 Tenn. 160, 1 Head 160 (1858); Stein v. State,
37 Ala. 123 (1861).

the same time in the European-Continental legal systems.149 Consequently, absolute liability in criminal law became a global phenomenon. However, in the meantime the fault element in criminal law became much more important due to internal developments in criminal law, and general intent became the major and dominant requirement of mental element in criminal law.
Thus, the criminal law had to make changes in absolute liability for it to meet the modern understandings of fault. That was the trigger for moving from absolute liability to strict liability. The core of the change lies in the move from an absolute legal presumption (praesumptio juris et de jure) to a relative legal presumption (praesumptio juris tantum), so that the offender has the opportunity to refute the criminal liability. The presumption was a presumption of negligence, either refutable or not.150 The move from absolute liability towards strict liability eased the acceptance of presumed negligence as another, third, form of mental element in criminal law.
Since the wide acceptance of strict liability around the world, legal systems have justified it both from the perspective of fault in criminal law151 and constitutionally. The European Court of Human Rights justified using strict liability in criminal law in 1998.152 Accordingly, strict liability was considered as not contradicting the presumption of innocence, protected by the 1950 European Human Rights Covenant,153 and that ruling has been embraced in Europe and Britain.154 The federal Supreme Court of the United States has ruled consistently that strict liability does not contradict the US Constitution.155 So have the supreme courts of the states.156

149
John R. Spencer and Antje Pedain, Approaches to Strict and Constructive Liability in Conti-
nental Criminal Law, APPRAISING STRICT LIABILITY 237 (A. P. Simester ed., 2005).
150
Gammon (Hong Kong) Ltd. v. Attorney-General of Hong Kong, [1985] 1 A.C. 1, [1984] 2 All
E.R. 503, [1984] 3 W.L.R. 437, 80 Cr. App. Rep. 194, 26 Build L.R. 159.
151
G., [2003] U.K.H.L. 50, [2003] 4 All E.R. 765, [2004] 1 Cr. App. Rep. 237, 167 J.P. 621, [2004]
Crim. L.R. 369; Kumar, [2004] E.W.C.A. Crim. 3207, [2005] 1 Cr. App. Rep. 566, [2005] Crim.
L.R. 470; Matudi, [2004] E.W.C.A. Crim. 697.
152
Salibaku v. France, (1998) E.H.R.R. 379.
153
1950 European Human Rights Covenant, sec. 6(2) provides: “Everyone charged with a
criminal offence shall be presumed innocent until proved guilty according to law”.
154
G., [2008] U.K.H.L. 37, [2009] A.C. 92; Barnfather v. Islington London Borough Council,
[2003] E.W.H.C. 418 (Admin), [2003] 1 W.L.R. 2318, [2003] E.L.R. 263; G. R. Sullivan, Strict
Liability for Criminal Offences in England and Wales Following Incorporation into English Law
of the European Convention on Human Rights, APPRAISING STRICT LIABILITY 195 (A. P. Simester
ed., 2005).
155
Smith v. California, 361 U.S. 147, 80 S.Ct. 215, 4 L.Ed.2d 205 (1959); Lambert v. California,
355 U.S. 225, 78 S.Ct. 240, 2 L.Ed.2d 228 (1957); Texaco Inc. v. Short, 454 U.S. 516, 102 S.Ct.
781, 70 L.Ed.2d 738 (1982); Carter v. United States, 530 U.S. 255, 120 S.Ct. 2159, 147 L.Ed.2d
203 (2000); Alan C. Michaels, Imposing Constitutional Limits on Strict Liability: Lessons from the
American Experience, APPRAISING STRICT LIABILITY 218, 222–223 (A. P. Simester ed., 2005).
156
State v. Stepniewski, 105 Wis.2d 261, 314 N.W.2d 98 (1982); State v. McDowell, 312 N.W.2d
301 (N.D. 1981); State v. Campbell, 536 P.2d 105 (Alaska 1975); Kimoktoak v. State, 584 P.2d
25 (Alaska 1978); Hentzner v. State, 613 P.2d 821 (Alaska 1980); State v. Brown, 389 So.2d
48 (La.1980).

However, it has been recommended to restrict the use of these offenses to the necessary minimum, and to prefer using general intent or negligence offenses. The strict liability construction in criminal law is concentrated on the relative presumption of negligence and the ways to refute it. The presumption provides that if all components of the factual element requirement of the offense are proven, it is presumed that the offender was at least negligent. Consequently, for the imposition of criminal liability in strict liability offenses, the prosecution does not have to prove the mental state of the defendant, but only the fulfillment of the factual element. The mental state of the offender is inferred from the conduct.
At this point, strict liability is similar to absolute liability. However, in contrast to absolute liability, strict liability may be refuted by the defendant, since it is based upon a relative legal presumption. For the defendant to refute strict liability, two conditions must be proven cumulatively:

(a) No general intent or negligence actually existed in the offender; and-
(b) All reasonable measures to prevent the offense were taken.

The first condition deals with the actual mental state of the offender. According to the presumption, the commission of the factual element presumes that the offender was at least negligent. That means that the offender’s mental state is one of negligence or general intent. Thus, at first, the conclusion of the presumption should be refuted, so that the presumption is proven incorrect in this case. The offender should prove that he was not aware of the relevant facts, and that no reasonable person could have been aware of them under the particular circumstances of the case.
This proof resembles refuting general intent in general intent offenses and negligence in negligence offenses. However, strict liability offenses are not general intent or negligence offenses, so refuting general intent and negligence is not adequate to prevent the imposition of criminal liability. The social and behavioral purpose of these offenses is to make individuals conduct themselves strictly and make sure that the offense is not committed. That should be proven as well. Consequently, the offender should prove that he has taken all reasonable measures to prevent the offense.157
The difference between strict liability and negligence is sharp. To refute negligence it is adequate to prove that the offender has taken a reasonable measure, but to refute strict liability it is required to prove that all reasonable measures were actually taken. In order to refute the negligence presumption of strict liability, the defendant should positively prove each of these two conditions by a preponderance of the evidence, as in civil law cases. The defendant is not required to prove these conditions beyond reasonable doubt, but in general it is not sufficient to only raise a reasonable doubt.
doubt. This burden of proof is higher than the general burden of the defendant.

157
B. v. Director of Public Prosecutions, [2000] 2 A.C. 428, [2000] 1 All E.R. 833, [2000]
2 W.L.R. 452, [2000] 2 Cr. App. Rep. 65, [2000] Crim. L.R. 403; Richards, [2004]
E.W.C.A. Crim. 192.

The offender's possibility of refuting the presumption becomes part of the strict liability requirement, since it relates to the offender's mental state. The modern structure of strict liability continues the concept of a minimal requirement. It contains both an inner and an external aspect. Inwardly, strict liability is the minimal mental element required for each of the factual element components. Consequently, if strict liability is proven in relation to the circumstances and the results, but negligence is proven in relation to the conduct, the requirement of strict liability is satisfied. For each of the factual element components, at least strict liability is required, but not exclusively strict liability. Outwardly, the mental element requirement of strict liability offenses is satisfied by at least strict liability, but not exclusively: criminal liability for strict liability offenses may be imposed by proving general intent or negligence as well as strict liability.

Because strict liability is still considered an exception to the general requirement of general intent, it has been accepted as an adequate mental element only for relatively lenient offenses. In some legal systems around the world, strict liability has been restricted ex ante or ex post to lenient offenses.158 This general structure of strict liability is a template containing terms from the mental element terminology.

4.4.2 Strict Liability and Artificial Intelligence Technology

Generally, in order to prove the fulfillment of the strict liability requirement by the defendant, the prosecution may choose to prove only the factual element of the particular offense. According to the negligence presumption, the existence of the factual element presumes that the offender was at least negligent. However, as noted above,159 the defendant's possibility of refuting that presumption is an integral part of the substance and structure of strict liability.
As a result, strict liability may relate only to offenders who possess the mental capability of refuting the presumption, which concerns the mental element. The mental capability of refuting the presumption does not necessarily mean that the offender has proof of his innocence or convincing arguments to that effect, but only the inner capabilities required to refute the negligence presumption. Refuting the presumption requires proof that the offender:

(a) was neither aware nor negligent towards the factual element components; and
(b) has taken all reasonable measures to prevent the commission of the offense.

158
In re Welfare of C.R.M., 611 N.W.2d 802 (Minn.2000); State v. Strong, 294 N.W.2d
319 (Minn.1980); Thompson v. State, 44 S.W.3d 171 (Tex.App.2001); State v. Anderson,
141 Wash.2d 357, 5 P.3d 1247 (2000).
159
Above at Sect. 4.4.1.

The required capabilities, therefore, are the capability to consolidate awareness and negligence and the capability to act reasonably.

The question is whether artificial intelligence systems possess such capabilities and, therefore, whether criminal liability for strict liability offenses may be imposed upon them. Let us examine these capabilities one by one. The first required capability is to consolidate awareness and negligence. This capability of artificial intelligence systems has already been examined above in relation to general intent and negligence. Artificial intelligence systems possessing the relevant features discussed above do have this capability.
Consequently, all artificial intelligence systems that are indictable for general intent and negligence offenses are indictable for strict liability offenses. By analogy, if general intent and negligence require higher mental capabilities than strict liability, then the lower capability required for strict liability is much easier to achieve than that required for general intent and negligence. As a result, the artificial intelligence offender is required to possess the same features regardless of the specific type of the particular offense. Since negligence requires the capability to consolidate awareness, and since strict liability requires the capabilities of consolidating awareness and negligence, it turns out that all artificial intelligence offenders are required to possess the capability of consolidating awareness.

Thus, in fact, an indictable offender in this context is an offender who has the capability of consolidating awareness, whether or not this capability has been realized and utilized, and regardless of the specific type of the particular offense. An artificial intelligence system that is indictable for strict liability offenses must have the same mental features and capabilities as if it were indicted for general intent or negligence offenses. This makes the character of the artificial intelligence offender, in relation to its inner capabilities, more or less uniform. Thus, the minimal inner features and capabilities of artificial intelligence technology are uniform and may be defined accordingly.
The second required capability is to act reasonably, or the capability of reasonability. Refuting the negligence presumption requires proof that all reasonable measures were taken, as noted above, and that requires a capability of acting reasonably. Without such capability no artificial intelligence system is capable of taking all reasonable measures. The capability of reasonability is required for negligence as well. It is the same capability required for strict liability, and it has been discussed above in the context of negligence. The fact that the same capability is required strengthens the above argument that all inner capabilities of artificial intelligence technology are uniform, regardless of the type of the particular offense.

Although the capability is the same in both negligence and strict liability, it operates differently in each. In negligence the artificial intelligence system is required to map the reasonable options as such and to choose between them; the only requirement is that its final choice be a reasonable option. In strict liability, however, the artificial intelligence system is required to map the reasonable options as such and to carry out all of them.

In strict liability the choice is only between the reasonable and the unreasonable, whereas in negligence the choice is also between the reasonable options themselves. For an artificial intelligence system in such a condition, acting reasonably in the context of strict liability is much easier than acting reasonably in the context of negligence, since fewer choices are required. Accordingly, in relation to the criminal liability of an artificial intelligence system for a strict liability offense, the court must decide three questions:

(a) Was the factual element of the offense fulfilled by the artificial intelligence system?
(b) Does the artificial intelligence system have the general capability of consolidating awareness or negligence?
(c) Does the artificial intelligence system have the general capability of reasonability?

If the answer to all three questions is positive, and this is proven beyond any reasonable doubt, the artificial intelligence system has fulfilled the requirements of the particular strict liability offense. Consequently, the artificial intelligence system is presumed to be at least negligent.

At this point, the defense has the opportunity to refute the negligence presumption through positive evidence. After the evidence is presented, the court must decide two questions:

(a) Did the artificial intelligence system actually form general intent or negligence towards the factual element components of the strict liability offense?
(b) Did the artificial intelligence system fail to take all reasonable measures to prevent the actual commission of the offense?

If the answer to even one of these questions is positive, the negligence presumption is not refuted, and criminal liability for the strict liability offense is imposed. Only if the answers to both questions are negative is the negligence presumption refuted, and no criminal liability for the strict liability offense is imposed upon the artificial intelligence system. In general, artificial intelligence systems that are capable of forming awareness and negligence have neither a technological nor a legal problem forming the inner requirements of strict liability offenses, since strict liability is a lower level of mental element than general intent or negligence.
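The two-stage inquiry described above may be summarized in a short sketch. The following Python fragment is purely illustrative; the field names stand for hypothetical findings of the court and do not model any existing system.

from dataclasses import dataclass

@dataclass
class Findings:
    factual_element_fulfilled: bool            # question (a), beyond reasonable doubt
    capable_of_awareness_or_negligence: bool   # question (b)
    capable_of_reasonability: bool             # question (c)
    formed_intent_or_negligence: bool          # refutation question (a)
    took_all_reasonable_measures: bool         # negation of refutation question (b)

def strict_liability_imposed(f: Findings) -> bool:
    # Stage 1: all three of the prosecution's questions must be answered positively
    # for the negligence presumption to arise.
    if not (f.factual_element_fulfilled
            and f.capable_of_awareness_or_negligence
            and f.capable_of_reasonability):
        return False
    # Stage 2: the presumption is refuted only if both refutation questions are
    # answered negatively, i.e. no general intent or negligence was actually formed
    # and all reasonable measures to prevent the offense were taken.
    refuted = (not f.formed_intent_or_negligence) and f.took_all_reasonable_measures
    return not refuted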
Thus, strict liability is relevant to artificial intelligence technology, and its proof in court is possible. Accordingly, the question is who is to be criminally liable for the commission of this kind of offense. In general, imposition of criminal liability for strict liability offenses requires the fulfillment of both the factual and the mental elements of these offenses. Humans are involved in the creation of artificial intelligence technology, in its design, programming and operation. Consequently, when the factual and mental elements of the offense are fulfilled by

artificial intelligence technology, the question is who is to be criminally liable for the offenses committed.

4.4.3 Direct Liability

In strict liability offenses, as in general intent and negligence offenses, when the offender fulfills both the factual and the mental element requirements of the particular offense, criminal liability is to be imposed. Just as in general intent and negligence offenses, the court is not supposed to check whether the offender was "evil", "immoral", etc. That is true for all types of offenders: humans, corporations and artificial intelligence technology.

Therefore, the same justifications for imposing criminal liability upon artificial intelligence technology for general intent and negligence offenses are relevant here as well. As long as the narrow fulfillment of these requirements exists, criminal liability should be imposed. However, strict liability and negligence offenses also differ from general intent offenses in their social purpose. The relevant question is whether this different social purpose is relevant not only for humans and corporations, but for artificial intelligence technology as well.
From their beginning, strict liability offenses were not designed to deal with "evil" persons, but with individuals who did not make every effort to prevent the commission of the offense. Therefore, the debate on evil in criminal law is not relevant for strict liability offenses, as it may be for general intent offenses. The criminal law in this context functions as an educator, making sure that no offense is committed. Accordingly, it is supposed to shape the outlines of individual discretion.

The boundaries of that discretion are drawn by strict liability offenses. Sometimes a person may exercise discretion wrongly and make some effort, but not every effort, to prevent the commission of the offense. Most of the time this does not contradict criminal law norms. For instance, people may take the risk of investing their money in doubtful stock, since people do not necessarily make every effort to prevent damage to their investments, but that is not a matter for the criminal law.
However, in some cases such conduct does contradict a norm of criminal law, which is designed to educate us to make sure that no offense is committed. As long as people's wrong discretion and lack of effort to prevent offenses do not contradict the criminal law, society expects people to learn their lesson on their own. The next time people invest their money, they will be much more careful in examining the details relevant to the investment. This is how human life experience is gained.

Society takes the risk that people will not learn their lesson, and it still does not intervene through criminal law. At some points, however, when criminal offenses are committed, society does not take the risk of letting the individual learn the lesson as an autodidact. The social harm in these cases is too grave to be left to such risk. In this type of case society intervenes, and it does so through

both strict liability and negligence offenses. In strict liability offenses the purpose is to educate individuals to make every possible effort to prevent the occurrence of the offense.

For instance, when driving on the road, every driver is expected (and educated) to drive so carefully that any traffic offense is prevented by that driving. The purpose is to make it more certain that the specific individual will learn how to behave carefully, even extra-carefully. Prospectively, it is assumed that after the individual is convicted of the strict liability offense, the probability of re-commission of the offense will be much lower. Thus, for instance, society educates its drivers to drive extra-carefully and its employers to adhere to the social security regulations regarding the payment of wages, and so on.
Human and corporate offenders are supposed to learn how to behave carefully through the criminal law. Is this relevant for artificial intelligence technology as well? The answer is positive. Given the educative purpose of strict liability, there is little use or utility in the imposition of criminal liability unless the offender has the ability to learn and to change behavior accordingly.

If society wants the offender to learn to behave very carefully, society must assume that the offender has the capability to learn and to implement that knowledge. If such capabilities exist and are exercised, criminal liability for strict liability offenses is necessary. However, if no such capabilities exist or are exercised, it is completely unnecessary, for no prospective value is to be expected; imposing or not imposing criminal liability for the strict liability offense would lead to the same results.
For artificial intelligence systems equipped with the relevant machine learning capabilities, criminal liability for strict liability offenses is no less than necessary, if these capabilities are applied in the relevant situations involving duties to behave extra-carefully. Just as for humans, strict liability offenses may draw the boundaries of discretion for artificial intelligence systems. Humans, corporations and artificial intelligence systems are all supposed to learn from their experience and improve their decisions prospectively, including their standards of carefulness.

When non-carefulness is addressed by the criminal law, the criminal law intervenes in shaping discretion towards careful behavior. For the artificial intelligence system, criminal liability for strict liability offenses is an occasion to reconsider the decision-making process in light of the external limitations dictated by the criminal law, which require extra-careful conduct and decision-making. If society has learned over the years that the human process of decision-making requires criminal liability for strict liability in order to be improved, this logic is no less relevant for artificial intelligence systems using machine learning methods.

It may be said that artificial intelligence systems can simply be reprogrammed, but then their precious experience, gained through machine learning, would be lost. It may also be said that artificial intelligence systems are capable of learning their boundaries and correcting their discretion on their own, but the very same can be said of

humans, and still society imposes criminal liability upon humans for strict liability offenses.
Consequently, if artificial intelligence technology has the capabilities required to fulfill both the factual and the mental elements of strict liability offenses, and if the rationale for imposing criminal liability for these offenses is relevant for both humans and artificial intelligence systems, there is no reason to avoid criminal liability in these cases. However, this is not the only way artificial intelligence may be involved in criminal liability for strict liability offenses.

4.4.4 Indirect Liability

In most legal systems, the useful way to deal with the instrumental use of individuals for the commission of offenses is the general form of perpetration-through-another. In order to impose criminal liability for perpetration-through-another of a particular offense, it is necessary to prove awareness of that instrumental use. Consequently, perpetration-through-another is applicable only to general intent offenses. In most cases, the other person being instrumentally used by the perpetrator is considered an "innocent agent", and no criminal liability is imposed upon him.

The analysis of perpetration-through-another in the context of general intent offenses has been discussed above. However, the person who is instrumentally used may also be considered a "semi-innocent agent", who is criminally liable for negligence, although the perpetrator is criminally liable for a general intent offense. Negligence is the lowest level of mental element required for the instrumentally used person to be considered a "semi-innocent agent". In this context, strict liability is too low a level of mental element for that person to be considered a "semi-innocent agent".
If the person who is instrumentally used by another is in a mental state of strict liability, that person is to be considered an innocent agent, as if that person had no criminal mental state at all. As a result, in perpetration-through-another of a particular offense, the other person (the instrumentally used person) may be in four possible mental states, which carry matching legal consequences (accomplice in general intent, semi-innocent agent in negligence, innocent agent in strict liability, and innocent agent in the absence of any mental element).

When the other person is aware of the delinquent enterprise and still continues to participate under no pressure, he becomes an accomplice to the commission of the offense. Negligence reduces that person's legal status to that of a semi-innocent agent.160 However, whether the mental state of that person is one of strict liability or of no criminal mental state, that person is considered an innocent agent, no criminal liability is imposed upon him, and full criminal liability for the relevant offense is imposed upon the perpetrator who instrumentally used that person. That is correct for both humans and artificial intelligence technology.

160
See, e.g., Glanville Williams, Innocent Agency and Causation, 3 CRIM. L. F. 289 (1992).

For this legal construction of perpetration-through-another, it is insignificant whether the instrumentally used artificial intelligence system did or did not use its strong artificial intelligence capabilities to form a strict liability mental state. Thus, instrumentally using a weak artificial intelligence system, a strong artificial intelligence system which formed strict liability, or a screwdriver leads to the same legal consequences. The instrumentally using person (human, corporation or artificial intelligence system) is fully criminally liable for the commission of the offense, and the instrumentally used entity (human, corporation or artificial intelligence system) is considered an innocent agent.
For instance, a human user of an unmanned vehicle based on an advanced artificial intelligence system instrumentally uses the artificial intelligence system to cross an approaching junction. However, the traffic light at that junction was red. The system had the capability of being aware of the traffic light's color, but was not actually aware of the red light. The system did not take all reasonable measures to check the situation, such as analyzing the color of the light coming from the junction's direction. Crossing a junction against a red traffic light is a strict liability offense. In this instance, the human is criminally liable for that offense as a perpetrator-through-another. Since the artificial intelligence system was instrumentally used by the perpetrator, it is considered an innocent agent, and therefore no criminal liability is imposed upon it.

4.4.5 Combined Liabilities

Probable consequence liability deals with cases of the commission of an unplanned offense (a different or an additional one). The question in these cases concerns the criminal liability of the other parties for the unplanned offense committed by one party. Probable consequence liability for unplanned general intent and negligence offenses was discussed above. The relevant question here concerns probable consequence liability for an unplanned strict liability offense committed by an artificial intelligence system.
Are the programmers, users and other related persons criminally liable for an unplanned strict liability offense committed by an artificial intelligence system? For instance, two humans commit a bank robbery. For their escape from the scene, they use an unmanned vehicle (a drone) based on an advanced artificial intelligence system. During their escape, the drone exceeds the legal speed limit, which is a strict liability offense. Analysis of the drone's records reveals that the drone satisfies the strict liability requirements of this offense.

The robbers did not program it to do so and did not order it to do so. The question here concerns their criminal liability for the strict liability traffic offense in addition to their criminal liability for robbery. Had the users ordered the drone to drive that fast and instrumentally used it for this purpose, it would have been perpetration-through-another of that offense, but this is not the case here. For the users' criminal liability in this case, probable consequence liability may be

relevant. The mental condition of probable consequence liability requires the unplanned offense to be "probable" for the party who did not actually commit it.

That party must have been able to foresee and reasonably predict the commission of the offense. Some legal systems prefer to examine actual, subjective foreseeability (the party actually and subjectively foresaw the occurrence of the unplanned offense), whereas others prefer to evaluate the ability to foresee through an objective standard of reasonability (the party did not actually foresee the occurrence of the unplanned offense, but any reasonable person in his position could have). Actual foreseeability parallels subjective general intent, whereas objective foreseeability parallels objective negligence.
A lower level of foreseeability is not adequate for probable consequence liability. Consequently, the question concerns the level of foreseeability of the users (the robbers). If they actually foresaw the commission of that offense by the drone, criminal liability for that offense would be imposed upon them in addition to their criminal liability for robbery. If the users formed only objective foreseeability, criminal liability for the additional offense would be imposed only in those legal systems in which probable consequence liability may be satisfied through objective foreseeability. However, if the mental state of the users towards the additional offense is one of strict liability, it is not adequate for the imposition of criminal liability through probable consequence liability.

Even though the additional offense itself requires only strict liability for the imposition of criminal liability, that is correct only for the actual perpetrator of that offense and not for the imposition of criminal liability through probable consequence liability. Thus, if the users had neither subjective nor objective foreseeability of the commission of the offense, probable consequence liability would be irrelevant. In this type of case no criminal liability would be imposed upon the users, and the artificial intelligence system's criminal liability for the strict liability offense would not affect the users' liability.
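The foreseeability analysis described in this subsection can be restated schematically. The following Python fragment is only an illustrative sketch; the enumeration of foreseeability levels and the flag indicating whether a legal system accepts objective foreseeability are hypothetical simplifications.

from enum import Enum

class Foreseeability(Enum):
    SUBJECTIVE = 1   # the party actually foresaw the unplanned offense (parallels general intent)
    OBJECTIVE = 2    # a reasonable person in his position could have foreseen it (parallels negligence)
    NONE = 3         # neither actual nor objective foreseeability (includes strict liability)

def liable_for_unplanned_offense(level, objective_standard_accepted):
    # Probable consequence liability of the other party for the unplanned offense.
    if level is Foreseeability.SUBJECTIVE:
        return True
    if level is Foreseeability.OBJECTIVE:
        return objective_standard_accepted   # only in some legal systems
    return False  # strict liability or no mental state is not adequate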
5 Negative Fault Elements and Artificial Intelligence Systems

Contents
5.1 Relevance and Structure of Negative Fault Elements ............................ 147
5.2 Negative Fault Elements by Artificial Intelligence Technology ................. 150
5.2.1 In Personam Negative Fault Elements .................................... 150
5.2.2 In Rem Negative Fault Elements ......................................... 168

5.1 Relevance and Structure of Negative Fault Elements

Negative fault elements are defenses which the court is bound to consider, if claimed, when imposing criminal liability upon the defendant. Defenses in criminal law are complementary to the mental element requirement. Both deal with the offender's fault concerning the commission of the offense. The mental element requirement is the positive aspect of fault (what should be in the offender's mind during the commission of the offense), whereas the general defenses are the negative aspect of fault (what should not be in the offender's mind during the commission of the offense).1
For instance, awareness is part of the mental element requirement (general intent), and insanity is a general defense. Therefore, in general intent offenses the offender must be aware and must not be insane. Thus, the fault requirement in criminal law consists of both the mental element requirement and the general defenses. The general defenses were developed in the ancient world in order to prevent injustice in certain types of cases. For instance, a person who killed another out of self-defense was not criminally liable for the homicide, since he lacked the fault required for causing death. An authentic factual mistake of the offender as to the commission of an intentional offense was then considered as negating the fault required for the imposition of
1
ANDREW ASHWORTH, PRINCIPLES OF CRIMINAL LAW 157–158, 202 (5th ed., 2006).


criminal liability.2 In the modern era the general defenses became wider and more comprehensive. However, the common factor of all general defenses remained the same: all general defenses in criminal law are part of the negative aspect of the fault requirement, as they are meant to negate the offender's fault.

The deep abstract question behind the general defenses is whether the commission of the offense was imposed upon the offender. Thus, when a person truly acts in self-defense, the act is considered to have been imposed on him. To save his life, which is considered a legitimate purpose, that person had no choice but to act in self-defense. Of course, such a person could have given up his life, but that is not considered a legitimate requirement, as it goes against the natural instinct of every living creature.
All general defenses may be divided into two main types: in personam and in rem defenses.3 In personam defenses are general defenses related to the personal characteristics of the offender (exemptions), whereas in rem defenses are related to the characteristics of the factual event (justifications). The personal characteristics of the offender may negate the fault towards the commission of the offense, regardless of the factual characteristics of the event or the exact identity of the particular offense. In in personam defenses the personal characteristics of the offender are adequate to prevent the imposition of criminal liability for any offense.

For instance, a child under the age of legal maturity is not criminally liable for any offense factually committed by him. The same is true for insane individuals who committed the offense during the time they were considered insane. The exact identity of the particular offense committed by the individual is completely insignificant for the question of imposing criminal liability. It may perhaps be relevant for further steps of treatment or rehabilitation triggered by the commission of the offense, but not for the imposition of criminal liability. The personal in personam defenses, as general defenses, are complemented by the impersonal in rem defenses.
In rem defenses are impersonal general defenses. As such, they do not depend on the identity of the offender, but only on the factual event that actually occurred. The personal characteristics of the individual are insignificant for in rem defenses. For instance, an individual is attacked by another person to the point of real danger to his life. The only way out of this danger was to push the attacker away. Pushing a person away is considered assault unless done with consent. In this case, the pushing person would argue self-defense, regardless of his identity, the attacker's identity or any other personal attributes of either of them, since self-defense relates only to the factual event itself.

2
REUVEN YARON, THE LAWS OF ESHNUNNA 265, 283 (2nd ed., 1988).
3
Compare Kent Greenawalt, Distinguishing Justifications from Excuses, 49 LAW & CONTEMP.
PROBS. 89 (1986); Kent Greenawalt, The Perplexing Borders of Justification and Excuse,
84 COLUM. L. REV. 1897 (1984); GEORGE P. FLETCHER, RETHINKING CRIMINAL LAW 759–817
(1978, 2000).

Since in rem defenses are impersonal, they also have a prospective value. Not only is the individual not criminally liable if he acted under an in rem defense, but he also should have acted this way. In rem defenses define not only types of general defenses, but also the proper behavior.4 Thus, individuals should defend themselves under the conditions of self-defense, even though that may apparently seem like the commission of an offense. This is not true for in personam defenses. An infant under the age of maturity is not supposed to commit offenses, although no criminal liability would be imposed; the same applies to insane individuals.

The prospective behavioral value of in rem defenses expresses the social values of the relevant society. If self-defense, for instance, is accepted as a legitimate in rem defense, it means that society prefers individuals to protect themselves when the authorities are unable to protect them. Society prefers to reduce the state's monopoly on power by legitimizing self-assistance rather than leave individuals vulnerable and helpless. Society does not force individuals to act in self-defense, but if they do so, it does not consider them criminally liable for the offense committed through that self-defense.
Both in personam defenses and in rem defenses are general defenses. The phrase "general defenses" refers to defenses that may be attributed to any offense and not to a particular group of offenses. For instance, infancy may be attributed to any offense, as long as it has been committed by an infant. By contrast, there are some specific defenses that may be attributed only to specific offenses or specific types of offenses. For instance, in some countries, in the offense of statutory rape it is a defense for the defendant if the age gap is under 3 years. This defense is unique to statutory rape and is irrelevant to any other offense. In personam defenses and in rem defenses are classified as general defenses.
As defense arguments, general defenses must be positively argued by the defense. If the defense chooses not to raise these arguments, they are not discussed in court, even though all participants in the trial understand that such an argument may be relevant. It is not enough to argue the general defense; its elements must be proven by the defendant. In some legal systems it is sufficient to raise a reasonable doubt that the elements of the defense actually existed in the case, while in other legal systems they must be proven by a preponderance of the evidence. Accordingly, the prosecution has the opportunity to refute the general defense.

General defenses of the in personam type include infancy, loss of self-control, insanity, intoxication, factual mistake, legal mistake and substantive immunity. General defenses of the in rem type include self-defense (including defense of dwelling), necessity, duress, superior orders, and the de minimis defense. All these general defenses may negate the offender's fault. The question is whether

4
Compare Paul H. Robinson, A Theory of Justification: Societal Harm as a Prerequisite for
Criminal Liability, 23 U.C.L.A. L. REV. 266 (1975); Paul H. Robinson, Testing Competing
Theories of Justification, 76 N.C. L. REV. 1095 (1998); George P. Fletcher, The Nature of
Justification, ACTION AND VALUE IN CRIMINAL LAW 175 (Stephen Shute, John Gardner and Jeremy
Horder eds., 2003).

these general defenses are applicable for artificial intelligence technology in the
context of criminal law. These general defenses and their applicability to artificial
intelligence criminal liability are discussed below, divided into in personam
defenses and in rem defenses.

5.2 Negative Fault Elements by Artificial Intelligence Technology

5.2.1 In Personam Negative Fault Elements

In personam negative fault elements are in personam defenses, namely general defenses related to the personal characteristics of the offender, as noted above.5 The applicability of in personam defenses to artificial intelligence criminal liability raises the question of the capability of artificial intelligence systems to form the personal characteristics required for these general defenses. The question whether an artificial intelligence system could be insane, for instance, is interpreted in the first instance as the question whether it has the mental capability of forming the elements of insanity in criminal law. This question applies, mutatis mutandis, to all in personam defenses, as discussed below.

5.2.1.1 Infancy
Could an artificial intelligence system be considered an infant for the question of criminal liability? Since ancient times, infants under a certain biologic age have not been considered criminally liable (doli incapax). The difference between the various legal systems lay in the exact age of maturity. For instance, Roman law set it at the age of 7 years.6 This defense is determined through legislation7 and case-law.8 It was not questioned that the relevant age is the biologic and not the mental age, mainly for evidential reasons.9 Biologic age is much easier to prove. However, it was presumed that the biologic age matches the mental age. If the infant's biologic age is above the lower age threshold but under the age of full maturity, the mental age of the infant is examined through evidence (e.g., expert testimony).10 The conclusive examination is whether the infant understands the

5
Above at Sect. 5.1.
6
RUDOLPH SOHM, THE INSTITUTES OF ROMAN LAW 219 (3rd ed., 1907).
7
See, e.g., MINN. STAT. }9913 (1927); MONT. REV. CODE }10729 (1935); N.Y. PENAL CODE }816
(1935); OKLA. STAT. }152 (1937); UTAH REV. STAT. 103-I-40 (1933).
8
State v. George, 20 Del. 57, 54 A. 745 (1902); Heilman v. Commonwealth, 84 Ky.
457, 1 S.W. 731 (1886); State v. Aaron, 4 N.J.L. 269 (1818).
9
State v. Dillon, 93 Idaho 698, 471 P.2d 553 (1970); State v. Jackson, 346 Mo. 474, 142 S.W.2d
45 (1940).
10
See Godfrey v. State, 31 Ala. 323 (1858); Martin v. State, 90 Ala. 602, 8 So. 858 (1891); State
v. J.P.S., 135 Wash.2d 34, 954 P.2d 894 (1998); Beason v. State, 96 Miss. 165, 50 So. 488 (1909);
State v. Nickelson, 45 La.Ann. 1172, 14 So. 134 (1893); Commonwealth v. Mead, 92 Mass.

way he behaves and whether he understands the wrong character of that behavior.11 If the infant understands that, he may be criminally liable for the offense as if he were mature.

However, there may be some procedural changes in the criminal process compared to the standard process (e.g., juvenile court, presence of parents, lenient punishments, etc.). The rationale behind this general defense is that infants under a certain age (biologic or mental) are presumed to be incapable of forming the relevant fault required for criminal liability.12 The mental capacity of the infant cannot contain the required fault or grasp the full social and individual meanings of criminal liability. In such a case, the imposition of criminal liability is irrelevant, unnecessary and vicious.
Consequently, infants are not held criminally liable, but rather educated, rehabilitated and treated.13 Accordingly, the question is whether this rationale is relevant only for humans, or whether it may be relevant for other legal entities as well. The general defense of infancy is not considered applicable to corporations. There are no "infant corporations"; from the moment the corporation is registered (and legally exists), criminal liability may be imposed upon it. The reason is that the rationale of this general defense is irrelevant for corporations. An infant has no mental capability to form the required fault due to the underdevelopment of consciousness at that age. As the infant grows older, the mental capacity develops gradually until it possesses the capability of understanding right and wrong.

At this point, criminal liability becomes relevant. The mental capacity of corporations does not depend on their chronological "age" (the date of registration); it is considered constant. Moreover, the mental capacity of a corporation is derived from its human organs, who are mature entities. Consequently, there is no legitimate rationale for the general defense of infancy to be applicable to corporations. At this point the question is whether artificial intelligence systems resemble humans or corporations in this context.
The answer differs for different types of artificial intelligence systems. A distinction should be drawn between constant artificial intelligence systems and dynamically developing artificial intelligence systems. Constant artificial intelligence systems begin their activity with the same capacities that will accompany them throughout their entire activity over the years. Such systems do not experience any
398 (1865); Willet v. Commonwealth, 76 Ky. 230 (1877); Scott v. State, 71 Tex.Crim.R. 41, 158
S.W. 814 (1913); Price v. State, 50 Tex.Crim.R. 71, 94 S.W. 901 (1906).
11
Adams v. State, 8 Md.App. 684, 262 A.2d 69 (1970):

the most modern definition of the test is simply that the surrounding circumstances must
demonstrate, beyond a reasonable doubt, that the individual knew what he was doing and
that it was wrong.
12
A.W.G. Kean, The History of the Criminal Liability of Children, 53 L. Q. REV. 364 (1937).
13
Andrew Walkover, The Infancy Defense in the New Juvenile Court, 31 U.C.L.A. L. REV.
503 (1984); Keith Foren, Casenote: In Re Tyvonne M. Revisited: The Criminal Infancy Defense
in Connecticut, 18 Q. L. REV. 733 (1999).

change in their capacities as time passes. Consequently, their capability of forming the mental requirements (e.g., awareness, intent, negligence, etc.) should be examined at the point at which an offense has been committed, and no infancy general defense would be relevant.

However, the starting point and the end point of dynamically developing artificial intelligence systems are different. Their capacities, including mental capacities, develop over time through machine learning or other techniques. If the system began its activity without the mental capacities required for criminal liability and at some point comes to possess such capacities, the time between the starting point and the point of possessing the relevant mental capacities parallels infancy.

During the period of infancy, the system lacks the capabilities required for the imposition of criminal liability. However, it may be asked whether, if the mental capacity of the artificial intelligence system is in any case checked at the time of commission of the offense, the infancy general defense would not be irrelevant: if the system possesses these capabilities, it is criminally liable regardless of the date of the starting point. The answer parallels the need for this general defense for humans and its rationale.14
The mental capacity of each child could, in principle, be examined at any age to determine individually whether the child possesses the required capabilities or not. That would be very inefficient: any toddler who hit another in the kindergarten would immediately be diagnosed for the mental capacities required for criminal liability.

The other way is to establish a legal presumption that children under a certain age are not criminally liable at all. So it is with artificial intelligence systems. Mass production of strong artificial intelligence systems (e.g., dozens of prison guards, military robots, etc.) with the very same capabilities and the very same learning techniques may make the individual diagnosis of the mental capacities of each artificial intelligence system unnecessary. If it is known that such a system possesses the required mental capabilities after 1,000 h of activity, commission of an offense by one of these artificial intelligence systems before 1,000 h of activity would be presumed to fall within "infancy", and no criminal liability is imposed.
If the prosecution claims that the specific system possesses the required mental capacities within the period of "infancy", or if the defense claims that the specific system does not possess the required mental capacities although the period of "infancy" is over, the actual mental capacities of the system can be examined and diagnosed accordingly. This is not substantively different from the natural gaps between the biologic and mental ages of humans. When a 17-year-old human offender is argued to be mentally underdeveloped, the court examines the relevant mental capacities and decides whether this offender has the relevant capacity to be criminally liable or not.
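The rebuttable presumption of "infancy" for dynamically developing systems can be illustrated schematically. The following Python fragment is merely a sketch; the 1,000-hour threshold is taken from the example above, while the function and its parameters are hypothetical.

def presumed_infant(hours_of_activity, maturity_threshold_hours=1000.0, diagnosed_capacity=None):
    # Either party may rebut the presumption with an actual diagnosis of the system's
    # mental capacities, just as a gap between biologic and mental age is examined in humans.
    if diagnosed_capacity is not None:
        return not diagnosed_capacity
    # Otherwise the presumption follows the activity threshold: below it, the system
    # is presumed to lack the mental capacities required for criminal liability.
    return hours_of_activity < maturity_threshold_hours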
This rationale is relevant for both humans and dynamically developing artificial
intelligence systems, but not for corporations. Therefore, it seems that the general

14
Frederick J. Ludwig, Rationale of Responsibility for Young Offenders, 29 NEB. L. REV.
521 (1950); In re Tyvonne, 211 Conn. 151, 558 A.2d 661 (1989).

defense of infancy may be relevant to that type of artificial intelligence system under the relevant circumstances.

5.2.1.2 Loss of Self-Control


Could an artificial intelligence system experience loss of self-control in the context of criminal liability? Loss of self-control is a general defense which relates to the incapability to control bodily movements. When the reason for this incapability is mental disease, it is considered insanity, and when the reason is the effect of intoxicating substances, it is considered intoxication. The general defense of loss of self-control is more general than insanity and intoxication, for it does not require any specific type of reason for the loss of self-control.

Whenever the offender's bodily movements are not under the offender's full control, they may be subject to this defense, if its conditions are met.15 The general rationale of this defense is that an uncontrollable bodily movement does not reflect the offender's will, and therefore should not be the basis for the imposition of criminal liability. Thus, an uncontrollable reflex of an individual may be an expression of loss of self-control.16 For instance, when a physician during a routine medical examination taps the patient's knee, it causes a reflex in which the lower part of the leg moves forward. If the leg hits the physician, it might have been considered an assault, but since it is the result of a reflex, the general defense of loss of self-control is applicable, and no criminal liability is imposed.
However, the loss of self-control must be total for the general defense to be relevant. Suppose, for instance, that in the above example the patient purposefully knocked on his own knee so that the reflex would be activated and his leg would accordingly kick the physician. In this case the general defense would not be relevant, as the assault reflects the offender's will. Consequently, two cumulative conditions must be met for the loss of self-control defense to be applicable:

(a) Incapability of controlling one's own behavior; and
(b) Incapability of controlling the conditions for the occurrence of that behavior.

Thus, in this example, the patient did not control his own behavior (the reflex), but he surely controlled the conditions for its occurrence (knocking the knee); therefore the general defense of loss of self-control would not be applicable to him. Many types of situations have been recognized as loss of self-control.

15
In Bratty v. Attorney-General for Northern Ireland, [1963] A.C. 386, 409, [1961] 3 All E.R. 523,
[1961] 3 W.L.R. 965, 46 Cr. App. Rep 1, Lord Denning noted:

The requirement that it should be a voluntary act is essential, not only in a murder case, but
also in every criminal case. No act is punishable if it is done involuntarily.
State v. Mishne, 427 A.2d 450 (Me.1981); State v. Case, 672 A.2d 586 (Me.1996).
16
See, e.g., People v. Newton, 8 Cal.App.3d 359, 87 Cal.Rptr. 394 (1970).

These situations include automatism (acting with no aware central control over the body),17 convulsions, post-epileptic states,18 post-stroke states,19 organic brain diseases, central nervous system diseases, hypoglycemia, hyperglycemia,20 somnambulism (sleep-walking),21 extreme sleep deprivation,22 side effects of bodily23 or mental traumas,24 blackout situations,25 side effects of amnesia26 and brainwashing,27 and many other situations.28 For the fulfillment of the first condition, the cause of the loss of self-control is insignificant. As long as the offender is actually incapable of controlling his behavior, the first condition is fulfilled.

The second condition relates to the cause of entering the state described by the first condition. If that cause was controlled by the offender, he is not considered to have lost his self-control. Controlling the conditions for losing self-control is controlling the behavior. Therefore, when the offender controls the conditions for becoming in control or out of control, he may not be considered an individual who lost his self-control. In Europe, the second condition is expressed in a doctrine (actio libera in causa), which dictates
17
Kenneth L. Campbell, Psychological Blow Automatism: A Narrow Defence, 23 CRIM. L. Q.
342 (1981); Winifred H. Holland, Automatism and Criminal Responsibility, 25 CRIM. L. Q.
95 (1982).
18
People v. Higgins, 5 N.Y.2d 607, 186 N.Y.S.2d 623, 159 N.E.2d 179 (1959); State v. Welsh,
8 Wash.App. 719, 508 P.2d 1041 (1973).
19
Reed v. State, 693 N.E.2d 988 (Ind.App.1998).
20
Quick, [1973] Q.B. 910, [1973] 3 All E.R. 347, [1973] 3 W.L.R. 26, 57 Cr. App. Rep. 722, 137
J.P. 763; C, [2007] E.W.C.A. Crim. 1862, [2007] All E.R. (D) 91.
21
Fain v. Commonwealth, 78 Ky. 183 (1879); Bradley v. State, 102 Tex.Crim.R. 41, 277 S.W. 147
(1926); Norval Morris, Somnambulistic Homicide: Ghosts, Spiders, and North Koreans, 5 RES
JUDICATAE 29 (1951).
22
McClain v. State, 678 N.E.2d 104 (Ind.1997).
23
People v. Newton, 8 Cal.App.3d 359, 87 Cal.Rptr. 394 (1970); Read v. People, 119 Colo.
506, 205 P.2d 233 (1949); Carter v. State, 376 P.2d 351 (Okl.Crim.App.1962).
24
People v. Wilson, 66 Cal.2d 749, 59 Cal.Rptr. 156, 427 P.2d 820 (1967); People v. Lisnow,
88 Cal.App.3d Supp. 21, 151 Cal.Rptr. 621 (1978); Lawrence Taylor and Katharina Dalton,
Premenstrual Syndrome: A New Criminal Defense?, 19 CAL. W. L. REV. 269 (1983); Michael
J. Davidson, Feminine Hormonal Defenses: Premenstrual Syndrome and Postpartum Psychosis,
2000 ARMY LAWYER 5 (2000).
25
Government of the Virgin Islands v. Smith, 278 F.2d 169 (3rd Cir.1960); People v. Freeman,
61 Cal.App.2d 110, 142 P.2d 435 (1943); State v. Hinkle, 200 W.Va. 280, 489 S.E.2d 257 (1996).
26
State v. Gish, 17 Idaho 341, 393 P.2d 342 (1964); Evans v. State, 322 Md. 24, 585 A.2d
204 (1991); State v. Jenner, 451 N.W.2d 710 (S.D.1990); Lester v. State, 212 Tenn. 338, 370 S.
W.2d 405 (1963); Polston v. State, 685 P.2d 1 (Wyo.1984).
27
Richard Delgado, Ascription of Criminal States of Mind: Toward a Defense Theory for the
Coercively Persuaded (“Brainwashed”) Defendant, 63 MINN. L. REV. 1 (1978); Joshua Dressler,
Professor Delgado’s “Brainwashing” Defense: Courting a Determinist Legal System, 63 MINN.
L. REV. 335 (1978).
28
FRANCIS ANTONY WHITLOCK, CRIMINAL RESPONSIBILITY AND MENTAL ILLNESS 119–120 (1963).

that if the entrance into the uncontrolled situation was itself controlled, the general defense of loss of self-control is rejected.29
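The two cumulative conditions of the defense, including the actio libera in causa limitation, may be sketched as follows; the parameter names are hypothetical and the fragment is only illustrative.

def loss_of_self_control_applies(controlled_the_behavior, controlled_the_entry_conditions):
    # The defense applies only if the offender controlled neither the bodily behavior
    # itself nor the conditions that brought it about; controlling the entry into the
    # uncontrolled state (actio libera in causa) defeats the defense.
    return (not controlled_the_behavior) and (not controlled_the_entry_conditions)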
Accordingly, artificial intelligence systems may experience loss of self-control in the context of criminal law. The loss of self-control may have external or internal causes. For example, a human pushes an artificial intelligence system onto another human. The pushed artificial intelligence system has no control over that movement. This is an example of an external cause of loss of self-control. If the pushed artificial intelligence system makes non-consensual physical contact with the other person, it may be considered assault. The mental element required for assault is awareness.

If the artificial intelligence system is aware of that physical contact, both the factual and the mental element requirements of assault are fulfilled. If the artificial intelligence system were human, it would probably have argued the general defense of loss of self-control. Consequently, although both the mental and the factual elements of assault are fulfilled, no criminal liability for assault should be imposed, since the commission of the offense was involuntary, or due to loss of self-control. This general defense would have prevented the imposition of criminal liability upon human offenders; so should it prevent the imposition of criminal liability upon artificial intelligence technology.
If the artificial intelligence system had no capability of consolidating awareness, there would be no need for this defense, since the artificial intelligence system would be functioning as nothing more than a screwdriver. However, the artificial intelligence system is aware of the assault, and a screwdriver is not. The capability of the artificial intelligence system to fulfill the mental element requirement of the offense creates the need to apply this general defense. This general defense functions in the same way for humans and artificial intelligence systems.
An example of an internal cause of loss of self-control is when an internal malfunction or a technical failure of the movement system causes uncontrolled movements of the artificial intelligence system. The artificial intelligence system may be aware of the malfunction, but still be unable to control or fix it. This too is a case for the general defense of loss of self-control. Thus, whether the cause of the loss of self-control is external or internal, it is relevant to the applicability of this general defense.

However, if the artificial intelligence system controlled these causes, the defense would be inapplicable. For instance, if the artificial intelligence system physically caused a person to push it onto another person (external cause), or if the artificial intelligence system caused the malfunction while knowing the probable consequences for its movement mechanism (internal cause), the second condition of the defense is not fulfilled, and the defense is inapplicable. This is the same

29
RG 60, 29; RG 73, 177; VRS 23, 212; VRS 46, 440; VRS 61, 339; VRS 64, 189; DAR 1985,
387; BGH 2, 14; BGH 17, 259; BGH 21, 381.

situation as with humans. Consequently, it seems that the general defense of loss of self-control may be applicable to artificial intelligence systems.

5.2.1.3 Insanity
Could an artificial intelligence system be considered insane for the question of criminal liability? Insanity has been known to humanity since the fourth millennium BC.30 However, it was then regarded as serving the sentence for religious sins.31 Insanity was considered the punishment itself, and therefore there was no need to research it or find cures for it.32 Only since the middle of the eighteenth century has insanity been explored as a mental disease together with its legal aspects.33 In the nineteenth century the terms "insanity" and "moral insanity" described situations in which the individual has no moral orientation or has a defective moral concept, although he is aware of the common moral values.34

Insanity was diagnosed as such only through major deviations from common behavior, especially sexual behavior.35 Since the end of the nineteenth century it has been understood that insanity is a mental malfunction which sometimes may not be expressed through behavioral deviations from common behavior. This approach informed the understanding of insanity in criminal law and criminology.36 Mental diseases and defects were categorized by their symptoms, and along with medical treatment for them, their effect on criminal liability was explored and recorded. However, the different needs of psychiatry and criminal law created different definitions of insanity.

30
KARL MENNINGER, MARTIN MAYMAN AND PAUL PRUYSER, THE VITAL BALANCE 420–489 (1963);
George Mora, Historical and Theoretical Trends in Psychiatry, 1 COMPREHENSIVE TEXTBOOK OF
PSYCHIATRY 1, 8–19 (Alfred M. Freedman, Harold Kaplan and Benjamin J. Sadock eds., 2nd
ed., 1975).
31
MICHAEL MOORE, LAW AND PSYCHIATRY: RETHINKING THE RELATIONSHIP 64–65 (1984); Anthony
Platt and Bernard L. Diamond, The Origins of the “Right and Wrong” Test of Criminal Responsi-
bility and Its Subsequent Development in the United States: An Historical Survey, 54 CAL. L. REV.
1227 (1966).
32
SANDER L. GILMAN, SEEING THE INSANE (1982); JOHN BIGGS, THE GUILTY MIND 26 (1955).
33
WALTER BROMBERG, FROM SHAMAN TO PSYCHOTHERAPIST: A HISTORY OF THE TREATMENT OF MENTAL
ILLNESS 63 (1975); GEORGE ROSEN, MADNESS IN SOCIETY: CHAPTERS IN THE HISTORICAL SOCIOLOGY OF
MENTAL ILLNESS 33, 82 (1969); EDWARD NORBECK, RELIGION IN PRIMITIVE SOCIETY 215 (1961).
34
JAMES COWLES PRICHARD, A TREATISE ON INSANITY AND OTHER DISORDERS AFFECTING THE MIND
(1835); ARTHUR E. FINK, CAUSES OF CRIME: BIOLOGICAL THEORIES IN THE UNITED STATES, 1800–1915
48–76 (1938); Janet A. Tighe, Francis Wharton and the Nineteenth Century Insanity Defense: The
Origins of a Reform Tradition, 27 AM. J. LEGAL HIST. 223 (1983).
35
Peter McCandless, Liberty and Lunacy: The Victorians and Wrongful Confinement, MADHOUSES,
MAD-DOCTORS, AND MADMEN: THE SOCIAL HISTORY OF PSYCHIATRY IN THE VICTORIAN ERA 339, 354
(Scull ed., 1981); VIEDA SKULTANS, ENGLISH MADNESS: IDEAS ON INSANITY, 1580–1890 69–97 (1979);
MICHEL FOUCAULT, MADNESS AND CIVILIZATION 24 (1965).
36
Seymour L. Halleck, The Historical and Ethical Antecedents of Psychiatric Criminology,
PSYCHIATRIC ASPECTS OF CRIMINOLOGY 8 (Halleck and Bromberg eds., 1968); FRANZ ALEXANDER
AND HUGO STAUB, THE CRIMINAL, THE JUDGE, AND THE PUBLIC 24–25 (1931); FRANZ ALEXANDER, OUR
AGE OF UNREASON: A STUDY OF THE IRRATIONAL FORCES IN SOCIAL LIFE (rev. ed., 1971).

For instance, the early English legal definition of an insane person ("idiot") was that he is not able to count to 20, whereas psychiatry does not consider such a person insane.37 The criminal law needed a bright, clear and conclusive definition of insanity, whereas psychiatry had no such need. The modern legal definition of insanity in most modern legal systems is inspired by two nineteenth-century English tests. One is the M'Naghten rules of 1843,38 and the second is the irresistible impulse test of 1840.39 The combination of both tests brings the general defense of insanity into line with the structure of general intent.

The legal definition of insanity has both cognitive and volitive aspects. The cognitive aspect of insanity consists of the capability to understand the criminality of the conduct, whereas the volitive aspect consists of the capability to control the will. Thus, if a mental disease or defect causes a cognitive malfunction (difficulty in understanding the factual reality and the criminality of the conduct) or a volitive malfunction (irresistible impulse), it is considered insanity in its legal sense.40 This is the conclusive common test for insanity.41 It fits the structure of general intent, which also contains both cognitive and volitive aspects, and it is complementary to the general intent requirement.42
This definition of insanity is functional rather than categorical. There is no need
to suffer from a mental illness appearing on a certain list of mental diseases in order to be considered insane.
Any mental defect, of any kind, may be the basis for insanity as long as it causes
cognitive or volitive malfunctions. The malfunctions are examined functionally and
not necessarily medically through a certain list of mental diseases.43 As a result, a
person may be considered insane for criminal law and perfectly sane for psychiatry

37
Homer D. Crotty, The History of Insanity as a Defence to Crime in English Common Law,
12 CAL. L. REV. 105, 107–108 (1924).
38
M’Naghten, (1843) 10 Cl. & Fin. 200, 8 E.R. 718.
39
Oxford, (1840) 9 Car. & P. 525, 173 E.R. 941.
40
United States v. Freeman, 357 F.2d 606 (2nd Cir.1966); United States v. Currens, 290 F.2d
751 (3rd Cir.1961); United States v. Chandler, 393 F.2d 920 (4th Cir.1968); Blake v. United States,
407 F.2d 908 (5th Cir.1969); United States v. Smith, 404 F.2d 720 (6th Cir.1968); United States
v. Shapiro, 383 F.2d 680 (7th Cir.1967); Pope v. United States, 372 F.2d 710 (8th Cir.1970).
41
Commonwealth v. Herd, 413 Mass. 834, 604 N.E.2d 1294 (1992); State v. Curry, 45 Ohio St.3d
109, 543 N.E.2d 1228 (1989); State v. Barrett, 768 A.2d 929 (R.I.2001); State v. Lockhart, 208 W.
Va. 622, 542 S.E.2d 443 (2000). See also 18 U.S.C.A. }17.
42
THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT AND EXPLANATORY NOTES 61–
62 (1962, 1985):

(1) A person is not responsible for criminal conduct if at the time of such conduct as a result
of mental disease or defect he lacks substantial capacity either to appreciate the criminality
[wrongfulness] of his conduct or to conform his conduct to the requirements of law;
(2) As used in this Article, the terms ‘mental disease or defect’ do not include an
abnormality manifested only by repeated criminal or otherwise antisocial conduct.
43
State v. Elsea, 251 S.W.2d 650 (Mo.1952); State v. Johnson, 233 Wis. 668, 290 N.W. 159
(1940); State v. Hadley, 65 Utah 109, 234 P. 940 (1925); HENRY WEIHOFEN, MENTAL DISORDER AS A
CRIMINAL DEFENSE 119 (1954); K. W. M. Fulford, Value, Action, Mental Illness, and the Law,

(e.g., cognitive malfunction that is not categorized as mental disease). The opposite
possibility (sane for criminal law, but insane for psychiatry) is feasible as well (e.g.,
mental disease that does not cause any cognitive or volitive malfunction).
The insane person is presumed to be incapable of forming the relevant fault
required for criminal liability. On that basis, the question is whether the general
defense of insanity is applicable to artificial intelligence systems. The general
defense of insanity requires a mental, or inner, defect which causes cognitive or
volitive malfunction. No particular type of mental disease is required; any mental
defect suffices. The question is how the existence of that “mental defect” can be known.
Since the mental defect is examined functionally and not through fixed
categories, the symptoms of that mental defect are crucial for its identification.
What matters is whether the inner defect causes cognitive or volitive malfunction,
regardless of whether that inner defect is classified as a “mental disease”, a chemical
imbalance in the brain, an electrical imbalance in the brain, or anything else. The inner
cause is examined through its functional effect on the human mind. This is the legal
situation for humans, and so it may be for artificial intelligence systems. The more
complicated and advanced the artificial intelligence system is, the higher the
probability of inner defects.
The defects may be mainly in software, but also in hardware. Some inner defects
cause no malfunction of the artificial intelligence system, and some do. If the inner
defect caused a cognitive or volitive malfunction of the artificial intelligence
system, it matches the criminal law definition of insanity. Since strong artificial
intelligence systems are capable of forming all general intent components, and
these components consist of cognitive and volitive elements due to the general
intent structure, it is most probable that some inner defects may cause malfunction
of these capabilities.
When an inner defect causes such a malfunction, it matches the definition of
insanity in criminal law. Partial insanity would be applicable when the cognitive
or volitive malfunctions are not complete. Temporary insanity would be applicable
when these malfunctions affect the offender (human or artificial intelligence system) for a limited period.44
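To illustrate the functional (rather than categorical) character of this test, the following sketch shows how the two questions, cognitive malfunction and volitive malfunction, could be checked against an artificial intelligence system's diagnostic records. It is a minimal, hypothetical illustration; the record fields, labels and the completeness flag are assumptions drawn from the discussion above, not part of any existing system or legal standard.

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    """Hypothetical diagnostic record of an inner defect in an AI system."""
    defect_id: str
    impairs_factual_perception: bool   # cognitive: grasp of factual reality
    impairs_criminality_grasp: bool    # cognitive: understanding of the conduct's criminality
    impairs_conduct_control: bool      # volitive: control over conduct ("irresistible impulse")
    malfunction_complete: bool         # False -> only partial impairment
    persistent: bool                   # False -> only a temporary condition

def insanity_defense_indicated(defect: DefectReport) -> str:
    """Functional test: any inner defect qualifies if it causes a cognitive
    or volitive malfunction, regardless of its medical or technical label."""
    cognitive = defect.impairs_factual_perception or defect.impairs_criminality_grasp
    volitive = defect.impairs_conduct_control
    if not (cognitive or volitive):
        return "no insanity defense"   # a defect exists, but causes no relevant malfunction
    label = "insanity" if defect.malfunction_complete else "partial insanity"
    if not defect.persistent:
        label += " (temporary)"
    return label

# Example: a software fault that partially distorts factual perception for a limited period.
print(insanity_defense_indicated(
    DefectReport("FW-114", True, False, False, malfunction_complete=False, persistent=False)))
# -> "partial insanity (temporary)"
```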
One may argue that this is not the typical character of the insane person, as it
does not match the common perception of insanity as drawn by psychiatry,
culture, folklore, literature and even movies. However, it is still insanity for
criminal law. First, the criminal law definition of insanity is different from its
definitions in psychiatry, culture, etc., and this definition is the one used for human insanity.
Why should another definition be used just for artificial intelligence systems?
Second, criminal law does not require a mental disease for human insanity; why
should one be required for artificial intelligence system insanity?

ACTION AND VALUE IN CRIMINAL LAW 279 (Stephen Shute, John Gardner and Jeremy Horder
eds., 2003).
44
People v. Sommers, 200 P.3d 1089 (2008); McNeil v. United States, 933 A.2d 354 (2007);
Rangel v. State, 2009 Tex.App. 1555 (2009); Commonwealth v. Shumway, 72 Va.Cir. 481 (2007).

Criminal law definitions may seem too technical, but, technical or not, if
fulfilled by the offender, they are applied. If both human offenders and artificial
intelligence system offenders have the capability of fulfilling the insanity requirements in criminal law, there is no legitimate reason to make the general defense of
insanity applicable to only one type of offender. Consequently, it seems that the
general defense of insanity may be applicable to artificial intelligence systems.

5.2.1.4 Intoxication
Could an artificial intelligence system be considered intoxicated for the question of
criminal liability? The effects of intoxicating materials have been known to humanity
since prehistory. In the early law of the ancient era, the term “intoxication” referred to
drunkenness as a result of exposure to alcohol. Later, when the intoxicating effects
of other materials became known to humanity, the term was expanded.45 Until
the beginning of the nineteenth century intoxication was not accepted as a general
defense. The archbishop of Canterbury wrote in the seventh century that imposing
criminal liability upon a drunk person who committed homicide is justified for two
reasons: first, the very drunkenness, and second, the homicide of a Christian
person.46
Drunkenness was conceptualized as a religious and moral sin, and it was therefore
considered immoral to grant offenders an in personam defense from criminal liability
for being drunk.47 Only in the nineteenth century did courts engage in a serious legal
discussion of intoxication. This discussion was enabled by legal and scientific
developments, which created the understanding that an intoxicated person is not
necessarily mentally competent for criminal liability (non compos mentis). From the
very beginning of the legal evaluation of intoxication in the nineteenth century, the
courts distinguished cases of voluntary and involuntary intoxication.48
Voluntary intoxication was considered part of the offender’s fault, and therefore it
could not be the basis for an in personam defense from criminal liability. However,
voluntary intoxication could be considered a relevant circumstance for the
imposition of a more lenient punishment.49 In addition, voluntary intoxication could
refute premeditation in first degree murder cases.50 Courts have

45
R. U. Singh, History of the Defence of Drunkenness in English Criminal Law, 49 LAW Q. REV.
528 (1933).
46
THEODORI LIBER POENITENTIALIS, III, 13 (668–690).
47
Francis Bowes Sayre, Mens Rea, 45 HARV. L. REV. 974, 1014–1015 (1932).
48
WILLIAM OLDNALL RUSSELL, A TREATISE ON CRIMES AND MISDEMEANORS 8 (1843, 1964).
49
Marshall, (1830) 1 Lewin 76, 168 E.R. 965.
50
Pearson, (1835) 2 Lewin 144, 168 E.R. 1108; Thomas, (1837) 7 Car. & P. 817, 173 E.R. 356:

Drunkenness may be taken into consideration in cases where what the law deems sufficient
provocation has been given, because the question is, in such cases, whether the fatal act is to
be attributed to the passion of anger excited by the previous provocation, and that passion is
more easily excitable in a person when in a state of intoxication than when he is sober.

distinguished cases by the reasons for entering into the intoxicated state.
Entering voluntary intoxication out of the will to commit an offense was considered
a different case from entering voluntary intoxication for no criminal reason.51
However, involuntary intoxication has been recognized and accepted as an in
personam defense from criminal liability.52 Involuntary intoxication is a situation
imposed upon the individual, and therefore it is neither just nor fair to impose criminal
liability in such situations. Thus, the general defense of intoxication has two main
functions. When the intoxication is involuntary, it prevents the imposition of criminal
liability. When it is voluntary, but not aimed at the commission of an offense, it is
to be considered for the imposition of a more lenient punishment.
The modern understanding of intoxication includes any mental effect which is
caused by an external material (e.g., chemicals). The required mental effect matches
the structure of general intent, discussed above. Consequently, the effect may be
cognitive or volitive.53 The intoxicating effect may relate to the offender’s perception,
understanding of factual reality or awareness (cognitive effect), or it may
relate to the offender’s will, up to an irresistible impulse (volitive effect). Intoxication
is initiated by an external material. There is no closed list of materials, and they
may be illegal (e.g., heroin, cocaine, etc.) or perfectly legal (e.g., alcohol, sugar,
pure water, etc.).
The effect of the external materials on the individual is subjective. One person
may be affected very differently than another by the same materials at the same
quantity. Sugar may cause one person hyperglycemia, whereas another is barely
affected. Pure water may imbalance the electrolytes in one person, whereas another
is barely affected, etc. Cases of addiction raised the question of whether the absence of
the external material may be considered a cause of intoxication. For instance,
when a drug-addicted person is in the process of withdrawal from drugs, he may experience
cognitive and volitive malfunctions due to the absence of the drugs.
Consequently, the cognitive and volitive effects of narcotic addiction were considered
intoxication for the purposes of criminal law.54 As to the question of voluntary and involuntary
intoxication in these cases (the addict wanted to begin the weaning procedure),

51
Meakin, (1836) 7 Car. & P. 297, 173 E.R. 131; Meade, [1909] 1 K.B. 895; Pigman v. State,
14 Ohio 555 (1846); People v. Harris, 29 Cal. 678 (1866); People v. Townsend, 214 Mich.
267, 183 N.W. 177 (1921).
52
Derrick Augustus Carter, Bifurcations of Consciousness: The Elimination of the Self-Induced
Intoxication Excuse, 64 MO. L. REV. 383 (1999); Jerome Hall, Intoxication and Criminal Respon-
sibility, 57 HARV. L. REV. 1045 (1944); Monrad G. Paulsen, Intoxication as a Defense to Crime,
1961 U. ILL. L. F. 1 (1961).
53
State v. Cameron, 104 N.J. 42, 514 A.2d 1302 (1986); State v. Smith, 260 Or. 349, 490 P.2d
1262 (1971); People v. Leonardi, 143 N.Y. 360, 38 N.E. 372 (1894); Tate v. Commonwealth,
258 Ky. 685, 80 S.W.2d 817 (1935); Roberts v. People, 19 Mich. 401 (1870); People v. Kirst,
168 N.Y. 19, 60 N.E. 1057 (1901); State v. Robinson, 20 W.Va. 713, 43 Am.Rep. 799 (1882).
54
Addison M. Bowman, Narcotic Addiction and Criminal Responsibility under Durham, 53 GEO.
L. J. 1017 (1965); Herbert Fingarette, Addiction and Criminal Responsibility, 84 YALE L. J.
413 (1975); Lionel H. Frankel, Narcotic Addiction, Criminal Responsibility and Civil Commit-
ment, 1966 UTAH L. REV. 581 (1966); Peter Barton Hutt and Richard A. Merrill, Criminal

the cause of the addiction is examined as voluntary or involuntary, and not the
cause of the weaning.55 Thus, intoxication is examined through a functional examination
of its cognitive and volitive effects upon the specific individual, regardless of the
exact identity of the external material which is the initiating cause of those effects.
On that basis, the question is whether the general defense of intoxication is applicable
to artificial intelligence systems.
As aforesaid, the general defense of intoxication requires an external material
(e.g., the presence or absence of a certain chemical material), which affects the inner
process of consciousness through cognitive or volitive effects. For example, the
manufacturer of artificial intelligence systems wanted to reduce production
expenses, and therefore cheap materials were used. After a few months, a process of
corrosion began in some of the initial components of the artificial intelligence systems.
Consequently, the transmission of information was impaired in a way that affected
the awareness process. Technically, this is a very similar process to the effects of
alcohol on human neurons.
In another example, a military artificial intelligence system is designed to function
in civilian zones after a chemical weapons attack. When such an attack occurred (during
training or as a real attack), the artificial intelligence system was activated. After
exposure to the gas, parts of its hardware were affected and consequently began to
malfunction. This malfunction affected the identification process of the artificial
intelligence system, and therefore it began to attack innocent civilians. Analysis of the
artificial intelligence system’s records afterwards showed that the exposure to the
gas was the only reason for attacking civilians. Had this artificial intelligence system
been human, any court would have accepted the general defense of
intoxication and exonerated the defendant.
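The functional logic of these two examples can be summarized in a short sketch: the defense turns on whether an external material caused a cognitive or volitive effect, and on whether the exposure was voluntary. This is a hypothetical illustration only; the field names and the classification rules are assumptions that compress the discussion above, not an existing legal or technical standard.

```python
from dataclasses import dataclass

@dataclass
class ExposureEvent:
    """Hypothetical record of an AI system's exposure to an external material."""
    material: str                 # e.g., "corrosive coating residue", "chemical agent"
    voluntary: bool               # was the exposure brought about deliberately?
    for_committing_offense: bool  # was the exposure sought in order to commit an offense?
    cognitive_effect: bool        # impaired perception / awareness of factual reality
    volitive_effect: bool         # impaired control of conduct

def intoxication_outcome(event: ExposureEvent) -> str:
    """Rough mapping of the general defense of intoxication, as described in the text."""
    if not (event.cognitive_effect or event.volitive_effect):
        return "no intoxication"  # external material with no relevant effect
    if not event.voluntary:
        return "involuntary intoxication: defense from criminal liability"
    if event.for_committing_offense:
        return "voluntary intoxication aimed at the offense: no defense"
    return "voluntary intoxication: considered only for a more lenient punishment"

# The chemical-weapon example: an unplanned exposure impairs the identification process.
print(intoxication_outcome(ExposureEvent("chemical agent", False, False, True, False)))
# -> "involuntary intoxication: defense from criminal liability"
```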
Why should it not be the same for an artificial intelligence system? Examined
functionally, when there is no difference between the effects of external materials
on humans and on artificial intelligence systems, there is no legitimate basis for applying
the general defense of intoxication to only one of them.
Strong artificial intelligence systems may possess both cognitive and volitive inner
processes. These processes may be affected by various factors. When they are
affected by external materials, as demonstrated above, this fulfills the requirements of
intoxication as a general defense.
As a result, if exposure to certain materials affects the cognitive and volitive
processes of an artificial intelligence system in a way that causes the system to
commit an offense, there is no reason to prevent the applicability of intoxication
as a general defense. It may be true that artificial intelligence systems cannot become
drunk from alcohol or experience illusions and delusions from drugs, but these effects

Responsibility and the Right to Treatment for Intoxication and Alcoholism, 57 GEO. L. J.
835 (1969).
55
Powell v. Texas, 392 U.S. 514, 88 S.Ct. 2145, 20 L.Ed.2d 1254 (1968); United States v. Moore,
486 F.2d 1139 (D.C.Cir.1973); State v. Herro, 120 Ariz. 604, 587 P.2d 1181 (1978); State v. Smith,
219 N.W.2d 655 (Iowa 1974); People v. Davis, 33 N.Y.2d 221, 351 N.Y.S.2d 663, 306 N.E.2d
787 (1973).

are not the only possible effects related to intoxication. If a human soldier attacks
his fellow soldiers due to exposure to a chemical gas attack, his argument of intoxication is
accepted. If such exposure has the same substantive and functional effects upon both
humans and artificial intelligence systems, there is no legitimate reason to make the
general defense of intoxication applicable to only one type of offender. Consequently,
it seems that the general defense of intoxication may be applicable to
artificial intelligence systems.

5.2.1.5 Factual Mistake


Could an artificial intelligence system be considered factually mistaken for the
question of criminal liability? The general defense of factual mistake provides a
revised perspective on the cognitive aspect of consciousness. When awareness was
discussed, it was assumed that there is an existing factual reality, of which the
individual may or may not be aware. That factual reality was considered
constant, objective and external to the individual. However, the only way an
individual may know about that factual reality is through the process of awareness,
i.e., the perception of factual data by the senses and its understanding.56
Only when the human brain tells its possessor that this is the factual reality does the
possessor believe it. There is no other way to evaluate the existence of factual
reality. However, sights, sounds, smells, physical pressure, etc. may be simulated,
and the human brain may be stimulated to feel them, although they do not necessarily
exist. The ultimate example is the human dream. How many people really know
that they are dreaming while they are dreaming? Most people, at least, cannot
distinguish between the dream and factual reality while they are dreaming.
During the dream people see sights, hear sounds, smell, feel, talk and run, as if it were the
factual reality.
Is the dream not the factual reality? Why? What makes the dream only a
dream in the eyes of the dreamer? For most people, the dream is a dream just
because they have woken up from it. What if, perhaps, they have woken up into a
dream? This option is rejected by most people due to the intuition that if they
open their eyes, it is morning and they are in bed, then the dream chapter of the day
has ended. Nevertheless, when they are dreaming, in the middle of a dream,
they have no reliable way to distinguish dreams from what is called “factual
reality”.
“Factual reality” (in quotation marks) and not factual reality, because people do not
have a reliable way to verify it. People “see” things with their brains, not their eyes. The
eyes are only sensors of light. The light is translated into electric currents between the
neurons in the brain, and these form the “sight” by stimulating the brain in the
right spots. However, this stimulation of the brain may be imitated. A human brain
may be connected to electrodes which stimulate it in the right spots, and the brain
would “see”, “hear”, “feel”, etc. The stimulated brain would be aware of these sensations, and
any human would swear that this is the factual reality.

56
William G. Lycan, Introduction, MIND AND COGNITION 3–13 (William G. Lycan ed., 1990).

If people already begin to doubt their awareness of the factual reality, they must
consider, in addition, the problem of perspective. Even if people assume that they
experience the factual reality (without quotation marks), they may experience it only
through their subjective perspective, which is not necessarily the only perspective
of that very factual reality. For instance, if one cuts a triangular shape from cardboard,
it may be seen as a triangle if it is looked at from its flat side. However, it may also
be seen as a narrow strip if it is rotated by 90° on the main axis crossing its flat side.
The problem of perspective may become crucial if different interpretations are added to
different perspectives.
For instance, two persons see and hear a man telling a woman that he is about to
kill her, while he holds a long knife. One person would understand this as a serious
threat to the woman’s life and call the police or attempt to save her by attacking the
man, whereas the other would understand it as part of a show (e.g., street theatre)
that requires no intervention on his part. On that basis, the deep question in criminal law
in this context is what should be the factual basis of criminality: the factual reality
as it actually occurred, or what the offender believed the factual reality to be,
although it may not have actually occurred.
For example, a defendant in a rape case admits that he and the complainant had full
sexual intercourse. The defendant proves that he genuinely believed that it was
consensual, and the prosecution proves that in fact the complainant did not
consent. The court believes both of them, and both of them are telling the truth.
Should the court acquit or convict the defendant? Since the seventeenth century,
modern criminal law has preferred the subjective perspective of the individual
(as defendant) on the factual reality over the factual reality itself.57 The general
concept is that the individual can legitimately be held criminally liable only for
“facts” he believed he knew, whether or not they actually occurred in the factual
reality.58
The limitations on this concept in most legal systems were evidentiary, to ensure that
the defendant’s argument is true and authentic. However, if the
argument is considered true and authentic, the believed facts become the basis for the imposition
of criminal liability. Thus, in the above example of rape, the defendant is to be
exonerated, since he genuinely believed the intercourse was consensual. If the defendant’s perspective
negates the mental element requirement, the factual mistake works as a general
defense which leads to the defendant’s acquittal.59

57
Edwin R. Keedy, Ignorance and Mistake in the Criminal Law, 22 HARV. L. REV. 75, 78 (1909);
Levett, (1638) Cro. Car. 538.
58
State v. Silveira, 198 Conn. 454, 503 A.2d 599 (1986); State v. Molin, 288 N.W.2d
232 (Minn.1979); State v. Sexton, 160 N.J. 93, 733 A.2d 1125 (1999).
59
State v. Sawyer, 95 Conn. 34, 110 A. 461 (1920); State v. Cude, 14 Utah 2d 287, 383 P.2d
399 (1963); Ratzlaf v. United States, 510 U.S. 135, 114 S.Ct. 655, 126 L.Ed.2d 615 (1994); Cheek
v. United States, 498 U.S. 192, 111 S.Ct. 604, 112 L.Ed.2d 617 (1991); Richard H. S. Tur,
Subjectivism and Objectivism: Towards Synthesis, ACTION AND VALUE IN CRIMINAL LAW
213 (Stephen Shute, John Gardner and Jeremy Horder eds., 2003).

The general defense of factual mistake is applicable not only in general intent
offenses, but also in negligence and strict liability offenses. The only difference is
in the required type of mistake. In general intent offenses, any authentic mistake
negates awareness of the factual reality and is considered adequate for this general
defense. In negligence offenses, the mistake must also be reasonable, for the
defendant to be considered as acting reasonably.60 In strict liability offenses, the
mistake must be inevitable, one that occurred although the defendant had taken all reasonable
measures to prevent it.61 On that basis, the question is whether the general defense
of factual mistake is applicable to artificial intelligence systems.
Both humans and artificial intelligence systems may experience difficulties,
errors and malfunctions in the process of awareness of the factual reality. These
difficulties may arise both in the process of absorbing the factual data by the senses and in
the process of creating a relevant general image of this data. In most cases, the
result of such a malfunctioning process is the creation of an inner factual image which
differs from the factual reality as the court understands it. This is a factual mistake
concerning the factual reality. Factual mistakes are part of everyday human life, and
they form a wide basis for human behavior.
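The comparison at the heart of this defense, between the offender's inner factual image and the factual reality found by the court, can be sketched roughly as follows. The record structure and the three mistake standards discussed above (authentic for general intent, also reasonable for negligence, also inevitable for strict liability) are modeled here as simple flags; this is an illustrative assumption, not a description of how any real system stores its perceptions.

```python
from dataclasses import dataclass

@dataclass
class MistakeClaim:
    """Hypothetical record of a claimed factual mistake."""
    believed_facts: dict      # the inner factual image (what the agent believed)
    actual_facts: dict        # the factual reality as the court finds it
    authentic: bool           # the belief was genuinely held
    reasonable: bool          # a reasonable agent could have made the same mistake
    inevitable: bool          # the mistake occurred despite all reasonable precautions

def factual_mistake_defense(claim: MistakeClaim, offense_type: str) -> bool:
    """Apply the mistake standard matching the offense's mental element requirement."""
    if claim.believed_facts == claim.actual_facts:
        return False                      # no mistake at all
    if offense_type == "general intent":
        return claim.authentic
    if offense_type == "negligence":
        return claim.authentic and claim.reasonable
    if offense_type == "strict liability":
        return claim.authentic and claim.reasonable and claim.inevitable
    raise ValueError(f"unknown offense type: {offense_type}")

# The friendly-fire example below: the soldier's inner image ("enemy") differs from reality ("friend").
claim = MistakeClaim({"target": "enemy"}, {"target": "friend"},
                     authentic=True, reasonable=True, inevitable=True)
print(factual_mistake_defense(claim, "general intent"))   # -> True
```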
In some cases, the factual mistake of a human or of an artificial intelligence
system may lead to the commission of an offense. This means that according to the
factual reality the conduct is considered an offense, but not according to the subjective
inner factual image of the individual, which happens to involve a
factual mistake. For instance, a human soldier mistakenly identifies his friend as an
enemy soldier and shoots him. The shot soldier, for unknown reasons, wore the
enemy’s uniform, spoke the enemy’s language, and looked as if he intended to
attack the shooting soldier. Although he was called upon to identify himself, he ignored
the demand. In this case, the mistake is authentic, reasonable and inevitable.
If the shooting soldier is human and argues factual mistake, he would
probably be exonerated (if indicted at all). Now, let us assume that the soldier is not
human, but a strong artificial intelligence system. Why should the criminal law treat
the artificial intelligence system soldier differently from the human soldier? The
error of both the human and the artificial intelligence system soldier is substantively and
functionally identical. The factual mistake of both humans and artificial intelligence
systems causes the same substantive and functional effects on cognition and on the
perception of factual reality. As a result, there is no reason to prevent the
applicability of factual mistake as a general defense to artificial intelligence systems,
in the very same way it is applicable to humans.
Although computers may have the reputation of not making mistakes, they do make them. The
probability of mistake in a computer’s mathematical calculations may be low, but
if the computer absorbs mistaken factual data, the resulting figures and calculations may
well be wrong. Its calculations regarding the required, possible and

60
United States v. Lampkins, 4 U.S.C.M.A. 31, 15 C.M.R. 31 (1954).
61
People v. Vogel, 46 Cal.2d 798, 299 P.2d 850 (1956); Long v. State, 44 Del. 262, 65 A.2d
489 (1949).

impossible courses of conduct are affected accordingly, just as with humans.62
This is the case for the general defense of factual mistake. If factual mistakes have
the same substantive and functional effects upon both humans and artificial intelligence
systems, there is no legitimate reason to make the general defense of factual
mistake applicable to only one type of offender. Consequently, it seems that the
general defense of factual mistake may be applicable to artificial intelligence
systems.

5.2.1.6 Legal Mistake


Could an artificial intelligence system be considered legally mistaken for the
question of criminal liability? Legal mistake is a situation of mistake of law, i.e.,
either misinterpretation of or ignorance of the law. The general idea behind this
defense is that a person who does not know about a certain prohibition, and consequently
commits an offense, does not consolidate the fault required for the imposition
of criminal liability. However, the mental element of particular offenses does
not include knowledge of the prohibition itself.
For instance, the particular offense of rape requires a mental element of general
intent, which includes awareness of the commission of sexual intercourse with a
woman and of the absence of consent. The offense does not require that the rapist
know that rape is prohibited as a criminal offense. The mental element
requirement of rape is satisfied through awareness of the factual element
components, regardless of whether the rapist knows that rape is prohibited.
This is the legal situation in most offenses.63 The reason for this situation lies in
prospective considerations of ordinary life in society.
If offenders had been required to know about the prohibition as a
condition for the imposition of criminal liability, they would have been encouraged
not to learn the law. As long as they were ignorant of the law, they would have
had immunity from criminal liability. If no such condition is required, the public is
encouraged to know the law, to check it and to obey it. These prospective
considerations do not fully match the fault requirement in criminal law, and therefore
criminal law had to find a balance between justice, which requires fault, and these
prospective considerations, which concern life in society.
The starting point was entirely prospective. Roman law stated that ignorance
of the law does not excuse the commission of offenses (ignorantia juris non excusat),64 and
until the nineteenth century this was the general approach of most legal systems.65

62
Fernand N. Dutile and Harold F. Moore, Mistake and Impossibility: Arranging Marriage
Between Two Difficult Partners, 74 NW. U. L. REV. 166 (1980).
63
Douglas Husak and Andrew von Hirsch, Culpability and Mistake of Law, ACTION AND VALUE IN
CRIMINAL LAW 157, 161–167 (Stephen Shute, John Gardner and Jeremy Horder eds., 2003).
64
Digesta, 22.6.9: “juris quidam ignorantiam cuique nocere, facti vero ignorantiam non nocere”.
65
See, e.g., Brett v. Rigden, (1568) 1 Plowd. 340, 75 E.R. 516; Mildmay, (1584) 1 Co. Rep. 175a,
76 E.R. 379; Manser, (1584) 2 Co. Rep. 3, 76 E.R. 392; Vaux, (1613) 1 Blustrode
197, 80 E.R. 885; Bailey, (1818) Russ. & Ry. 341, 168 E.R. 835; Esop, (1836) 7 Car. & P. 456,
173 E.R. 203; Crawshaw, (1860) Bell. 303, 169 E.R. 1271; Schuster v. State, 48 Ala. 199 (1872).

In the nineteenth century, when the culpability requirement in criminal law developed
dramatically, a balance was required. Consequently, the legal
mistake was required to be made in good faith (bona fide),66 and to meet the highest
standard of mental element, i.e., strict liability. According to this standard, the
required mistake is an inevitable legal mistake, made although all reasonable measures
have been taken to prevent it.67
This high standard of mistake is required in relation to all types of offenses,
regardless of their mental element requirement. Thus, the general standard of legal
mistake is higher than that of factual mistake. The main debate in the courts in this
context is whether the offender has indeed taken all reasonable measures to prevent
the legal mistake. That includes the questions of reasonable reliance upon statutes,
judicial decisions,68 official interpretations of the law (including pre-rulings),69 and
the advice of private counsel.70 On that basis, the question is whether the general
defense of legal mistake is applicable to artificial intelligence systems.
Technically, if the relevant entity, human or artificial intelligence system, has
the capability of fulfilling the mental element requirement of strict liability
offenses, that entity has the capability of arguing legal mistake as a general
defense. Since strong artificial intelligence systems have the capability of fulfilling
the mental element requirement of strict liability offenses, they possess the
relevant capabilities for the general defense of legal mistake to be relevant
to them. The absence of legal knowledge on a specific issue may be proven
through the artificial intelligence system’s records of knowledge, and thus the good faith
requirement is fulfilled as well.
The basic meaning of the applicability of the legal mistake defense to artificial
intelligence systems is that the system has not been restricted by any formal legal
restriction, and that it acted accordingly. If the artificial intelligence system has a
software mechanism that searches for such restrictions and, although activated, no
such legal restriction has been found, this general defense would be relevant.
However, the system’s in personam defense from criminal liability does not
function as an in personam defense from criminal liability for the programmers
66
Forbes, (1835) 7 Car. & P. 224, 173 E.R. 99; Parish, (1837) 8 Car. & P. 94, 173 E.R. 413; Allday,
(1837) 8 Car. & P. 136, 173 E.R. 431; Dotson v. State, 6 Cold. 545 (1869); Cutter v. State,
36 N.J.L. 125 (1873); Squire v. State, 46 Ind. 459 (1874).
67
State v. Goodenow, 65 Me. 30 (1876); State v. Whitoomb, 52 Iowa 85, 2 N.W. 970 (1879).
68
Lutwin v. State, 97 N.J.L. 67, 117 A. 164 (1922); State v. Whitman, 116 Fla. 196, 156
So. 705 (1934); United States v. Mancuso, 139 F.2d 90 (3rd Cir.1943); State v. Chicago, M. &
St.P.R. Co., 130 Minn. 144, 153 N.W. 320 (1915); Coal & C.R. v. Conley, 67 W.Va.
129, 67 S.E. 613 (1910); State v. Striggles, 202 Iowa 1318, 210 N.W. 137 (1926); United States
v. Albertini, 830 F.2d 985 (9th Cir.1987).
69
State v. Sheedy, 125 N.H. 108, 480 A.2d 887 (1984); People v. Ferguson, 134 Cal.App. 41, 24
P.2d 965 (1933); Andrew Ashworth, Testing Fidelity to Legal Values: Official Involvement and
Criminal Justice, 63 MOD. L. REV. 663 (2000); Glanville Williams, The Draft Code and Reliance
upon Official Statements, 9 LEGAL STUD. 177 (1989).
70
Rollin M. Perkins, Ignorance and Mistake in Criminal Law, 88 U. PA. L. REV. 35 (1940).

or users of the system. If these persons could have restricted the system to legal
activity, but have not done so, they may be criminally liable for the offense through
perpetration-through-another liability or probable consequence liability.
For instance, an artificial intelligence system absorbs factual data about certain
persons, and it is required to analyze their personalities accordingly and publish the analysis in
a certain way. In one case the publication is considered criminal libel. If the records
of the system show that the system was not restricted by any restriction on
libelous publications, and that it either had no mechanism for searching for such
restrictions or had such a mechanism but found no such restriction, the system
would not be criminally liable for the libel. However, the manufacturer,
programmers and users may be criminally liable for the libel as perpetrators-
through-another or through probable consequence liability.
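The allocation described in the last two paragraphs can be sketched as a simple decision, based on whether a restriction-search mechanism existed, was activated, and found an applicable prohibition. The sketch below is purely illustrative; the record fields and the mapping to perpetration-through-another or probable-consequence liability are assumptions that compress the discussion above, not a formal test.

```python
from dataclasses import dataclass

@dataclass
class LegalCheckRecord:
    """Hypothetical log of an AI system's search for legal restrictions before acting."""
    has_search_mechanism: bool     # was the system designed to search for legal restrictions?
    mechanism_activated: bool      # was the mechanism actually run before the conduct?
    restriction_found: bool        # did the search surface an applicable prohibition?
    humans_could_restrict: bool    # could programmers/users have restricted the system in advance?

def allocate_liability(record: LegalCheckRecord) -> dict:
    """Rough allocation of liability for the system and for the humans behind it."""
    system_knew = (record.has_search_mechanism and record.mechanism_activated
                   and record.restriction_found)
    result = {
        # The system's legal-mistake defense applies only if it had no knowledge of the prohibition.
        "system": "criminally liable" if system_knew else "legal mistake defense (in personam)",
        "programmers_and_users": "not liable on this ground",
    }
    if not system_knew and record.humans_could_restrict:
        # The system's defense does not shield the humans who failed to restrict it.
        result["programmers_and_users"] = ("possibly liable: perpetration-through-another "
                                           "or probable consequence liability")
    return result

# The libel example: no restriction mechanism at all, though the humans could have added one.
print(allocate_liability(LegalCheckRecord(False, False, False, True)))
```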
An artificial intelligence system may have very wide knowledge of many
kinds of issues, but not every system necessarily contains legal knowledge
on every legal issue. The system may search for legal restrictions, if designed
to do so, but not necessarily find them. This is the case for the general defense of
legal mistake. If legal mistakes have the same substantive and functional effects
upon both humans and artificial intelligence systems, there is no legitimate reason
to make the general defense of legal mistake applicable to only one type of
offender. Consequently, it seems that the general defense of legal mistake may be
applicable to artificial intelligence systems.

5.2.1.7 Substantive Immunity


Could an artificial intelligence system have substantive immunity in the context of
criminal law? Certain types of persons enjoy substantive immunity from criminal
liability due to their office (ex officio). This immunity is granted ex ante so that these
persons will not be troubled by criminal law issues related to their office. Society
grants these immunities since these persons’ office is regarded as much more
important than the criminal offenses that may be committed in the course of fulfilling
their duties. This immunity is not absolute; it relates only to offenses that were
committed as part of the duty and for the fulfillment of that duty.
For instance, a fireman is on a mission to save the life of a young woman who is
in a burning apartment on the 15th floor of a certain building. After being lifted by a crane,
he stands in front of her window. If the fireman breaks the window to enter the
apartment, that fulfills the factual and mental element requirements of several
offenses (e.g., trespass, breaking in, property damage, etc.). For the fireman to fulfill
the function of saving life, he is granted substantive immunity from criminal
liability for these offenses. However, had the fireman entered the apartment
and raped the young woman, the immunity would have been irrelevant to him,
since rape is not part of fulfilling his job.
This kind of immunity is substantive, as it annihilates the criminal liability of the
relevant person; it is not merely a barrier to indictment or other criminal
proceedings. For the substantive immunity to be granted, the law must determine it
specifically for the relevant types of persons (e.g., firemen, policemen, soldiers,
etc.). The general defense is applicable to these persons only if they have

committed the relevant offense during their official duty, for the fulfillment of their
official duty, and in good faith (bona fide), i.e., without exploiting the immunity for
the deliberate commission of other criminal offenses. On that basis, the question is
whether the general defense of substantive immunity is applicable to artificial
intelligence systems.
Let us assume, in the above example (a fireman saving a young woman from
her burning apartment), that the fireman is human. That fireman, if indicted for
causing property damage, would probably have argued for substantive immunity.
The court would probably have accepted this argument and acquitted him immediately. Let
us now assume that the fire was too heavy to risk human life, and therefore an
artificial intelligence system was sent to save the woman’s life. If the human
fireman is granted such immunity, why would it not be granted to the
artificial intelligence system fireman? At the point of breaking the window, the
artificial intelligence system, if equipped with strong artificial intelligence, makes
the very same decision the human fireman does. Why would their criminal
liability be different?
If all the conditions for granting this general defense are met, there is no reason to use
different standards for humans and artificial intelligence systems. Artificial intelligence
systems are already in use for official duties (e.g., as guards), and inevitably
they sometimes have to commit offenses in fulfilling their duties. For instance,
prison guards might physically assault escaping prisoners in order to prevent the
escape. If such situations have the same substantive and functional effects upon
both humans and artificial intelligence systems, there is no legitimate reason to
make the general defense of substantive immunity applicable to only one type of
offender. Consequently, it seems that the general defense of substantive immunity
may be applicable to artificial intelligence systems.

5.2.2 In Rem Negative Fault Elements

In rem negative fault elements are general defenses that are
related to the characteristics of the factual event (in rem), as noted above.71 The
applicability of in rem defenses to artificial intelligence criminal liability raises
the question of the capability of artificial intelligence systems to be part of such
situations as self-defense, necessity or duress. This raises deep questions. For
instance, would it be legitimate to enable an artificial intelligence system to defend
itself from an attack? What if the attack is carried out by humans? Would it be legitimate
to let an artificial intelligence system attack humans for the artificial intelligence
system’s sake?
In general, since in rem defenses are general defenses relating to the event itself, the personal
characteristics of the individual (human or artificial intelligence system) should
be considered insignificant. However, these general defenses were designed for

71
Above at Sect. 5.1.

humans, with awareness of human weaknesses and in response to those weaknesses. For
instance, self-defense was designed to protect the human instinct for life. Is this
instinct relevant to artificial intelligence systems, which are machines? If not, why
would self-defense be relevant for machines? The applicability of in rem
defenses as general defenses to artificial intelligence systems is explored below.

5.2.2.1 Self-Defense
Self-defense is one of the most ancient defenses in human culture. Its basic essence
is a partial retreat from the thorough applicability of the general concept of
society’s monopoly on power.72 According to this concept, only society
(i.e., the state as such) has the authority to use force upon individuals. No
individual is authorized to do so. Consequently, when one individual has a dispute
with another, he may not use force, but must apply to the state (e.g., through the courts, the police,
etc.) for the state to solve the problem and to use force. This concept keeps force
out of the individuals’ hands.
However, for this concept to be effective, the state’s representatives must be
present at all times in all places. If one individual is attacked by another in a dark
corner of the street, he may not retaliate, but must rather wait for the state’s representatives.
They may come, but they may also be unavailable at that point in time. In order to
enable individuals to protect themselves from attackers in this kind of situation,
society must retreat, in part, from that concept. One such retreat is the acceptance
of self-defense as a general defense in criminal law. Self-defense enables the
individual to protect certain values while using force outside the concept of society’s monopoly
on power.
Being in a situation that requires self-defense is considered to negate the
individual’s fault required for the imposition of criminal liability. This concept
has been accepted by legal systems around the world since ancient times.73 Over time this
defense became wider and more precise. Its modern basis is to enable the individual
to repel a forthcoming attack upon a legitimate interest. Consequently, there are
several conditions for entering the sphere of this general defense:

72
Chas E. George, Limitation of Police Powers, 12 LAW. & BANKER & S. BENCH & B. REV.
740 (1919); Kam C. Wong, Police Powers and Control in the People’s Republic of China: The
History of Shoushen, 10 COLUM. J. ASIAN L. 367 (1996); John S. Baker Jr., State Police Powers and
the Federalization of Local Crime, 72 TEMP. L. REV. 673 (1999).
73
Dolores A. Donovan and Stephanie M. Wildman, Is the Reasonable Man Obsolete? A Critical
Perspective on Self-Defense and Provocation, 14 LOY. L. A. L. REV. 435, 441 (1981); Joshua
Dressler, Rethinking Heat of Passion: A Defense in Search of a Rationale, 73 J. CRIM. L. &
CRIMINOLOGY 421, 444–450 (1982); Kent Greenawalt, The Perplexing Borders of Justification and
Excuse, 84 COLUM. L. REV. 1897, 1898, 1915–1919 (1984).

(a) The protected interest should be legitimate. Legitimate interests are life,
freedom, body and property74—of the individual or of other individuals.75
No previous introduction between them is required.76 Thus, self-defense is
not entirely “self”;
(b) The protected interest should be attacked illegitimately.77 When a police-
man attacks the individual to arrest him by a warrant, this is a legitimate
attack78;
(c) The protected interest should be in an immediate and actual danger79;
(d) The act (self-defense) should be repelling the attack, proportional to it,80
and immediate81; and-
(e) The defender did not control the attack or the conditions for its occurrence
(actio libera in causa).82

If all these conditions are fulfilled, the individual is considered to be acting under
self-defense, and accordingly no criminal liability is imposed upon him for the
commission of the offense. Thus, not every repelled attack may be
considered self-defense, but only one in which the repelling act fully satisfies the above
conditions. On that basis, the question is whether the general defense of
self-defense is applicable to artificial intelligence systems. The answer to this

74
State v. Brosnan, 221 Conn. 788, 608 A.2d 49 (1992); State v. Gallagher, 191 Conn. 433, 465
A.2d 323 (1983); State v. Nelson, 329 N.W.2d 643 (Iowa 1983); State v. Farley, 225 Kan. 127, 587
P.2d 337 (1978).
75
Commonwealth v. Monico, 373 Mass. 298, 366 N.E.2d 1241 (1977); Commonwealth
v. Johnson, 412 Mass. 368, 589 N.E.2d 311 (1992); Duckett v. State, 966 P.2d 941 (Wyo.1998);
People v. Young, 11 N.Y.2d 274, 229 N.Y.S.2d 1, 183 N.E.2d 319 (1962); Batson v. State,
113 Nev. 669, 941 P.2d 478 (1997); State v. Wenger, 58 Ohio St.2d 336, 390 N.E.2d
801 (1979); Moore v. State, 25 Okl.Crim. 118, 218 P. 1102 (1923).
76
Williams v. State, 70 Ga.App. 10, 27 S.E.2d 109 (1943); State v. Totman, 80 Mo.App.
125 (1899).
77
Lawson, [1986] V.R. 515; Daniel v. State, 187 Ga. 411, 1 S.E.2d 6 (1939).
78
John Barker Waite, The Law of Arrest, 24 TEX. L. REV. 279 (1946).
79
People v. Williams, 56 Ill.App.2d 159, 205 N.E.2d 749 (1965); People v. Minifie, 13 Cal.4th
1055, 56 Cal.Rptr.2d 133, 920 P.2d 1337 (1996); State v. Coffin, 128 N.M. 192, 991 P.2d
477 (1999).
80
State Philbrick, 402 A.2d 59 (Me.1979); State v. Havican, 213 Conn. 593, 569 A.2d 1089
(1990); State v. Harris, 222 N.W.2d 462 (Iowa 1974); Judith Fabricant, Homicide in Response to a
Threat of Rape: A Theoretical Examination of the Rule of Justification, 11 GOLDEN GATE U. L. REV.
945 (1981).
81
Celia Wells, Battered Woman Syndrome and Defences to Homicide: Where Now?, 14 LEGAL
STUD. 266 (1994); Aileen McColgan, In Defence of Battered Women who Kill, 13 OXFORD J. LEGAL
STUD. 508 (1993); Joshua Dressler, Battered Women Who Kill Their Sleeping Tormenters:
Reflections on Maintaining Respect for Human Life while Killing Moral Monsters, CRIMINAL
LAW THEORY – DOCTRINES OF THE GENERAL PART 259 (Stephen Shute and A. P. Simester eds., 2005).
82
State v. Moore, 158 N.J. 292, 729 A.2d 1021 (1999); State v. Robinson, 132 Ohio App.3d
830, 726 N.E.2d 581 (1999).

question depends on the artificial intelligence system’s capability of fulfilling the above conditions in the relevant situations.
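Since the applicability of the defense turns on the cumulative fulfillment of conditions (a) through (e) above, the test can be sketched as a simple conjunction. The sketch below is an illustrative assumption: the boolean fields merely restate the listed conditions, and nothing here reflects how a real system would actually assess an attack.

```python
from dataclasses import dataclass

@dataclass
class SelfDefenseFacts:
    """Hypothetical findings about a repelling act, mirroring conditions (a)-(e) above."""
    protected_interest_legitimate: bool   # (a) life, freedom, body or property (own or another's)
    attack_illegitimate: bool             # (b) e.g., not a lawful arrest
    danger_immediate_and_actual: bool     # (c)
    act_repelling: bool                   # (d) a reaction to the attack
    act_proportional: bool                # (d) no excessive force
    act_immediate: bool                   # (d) subsequent to the attack, not revenge
    defender_did_not_provoke: bool        # (e) actio libera in causa

def self_defense_applies(f: SelfDefenseFacts) -> bool:
    """All conditions must be fulfilled cumulatively; the defender's identity
    (human, corporation or AI system) plays no role in this check."""
    return all([
        f.protected_interest_legitimate,
        f.attack_illegitimate,
        f.danger_immediate_and_actual,
        f.act_repelling,
        f.act_proportional,
        f.act_immediate,
        f.defender_did_not_provoke,
    ])

# Example in the spirit of the prison-guard scenario discussed below:
# an AI guard proportionally restrains escaping prisoners who attack it.
print(self_defense_applies(SelfDefenseFacts(True, True, True, True, True, True, True)))  # -> True
```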
The legitimate protected interest lies at the core of self-defense. When the
protected interest belongs to another human individual (a human interest protected by
an artificial intelligence system through self-defense), society does not seem to have
any problem with that. In fact, this is the broad basis for the legal activity of guard
artificial intelligence systems (guarding humans, prisoners, borders, dwellings, etc.). When
these artificial intelligence systems have the authority to repel attacks on human
interests, self-defense is the relevant legal in rem defense. However, the question is
whether the self-interest of an artificial intelligence system may be legally protected
through self-defense as well.
In fact, there are two questions in this issue: one is moral and the other is legal.
Morally, the question is whether society accepts the idea of an artificial intelligence
system protecting itself and having the derivative rights, some of which are constitutional.83
The moral question has nothing to do with the legal one. However, the
human approach to this moral question has been generally positive since the 1950s,
as reflected in Asimov’s third “law”, which provides: “An artificial intelligence system must
protect its own existence, as long as such protection does not conflict with the First
or Second Laws”.84 Accordingly, the artificial intelligence system is not only
authorized to do so, under the relevant circumstances, but it “must”.
However, as aforesaid, this is not part of the legal question, and only the legal
question is relevant to the applicability of self-defense to artificial intelligence
systems. The legal question is much simpler: whether an artificial
intelligence system has the capability of protecting its own life, freedom, body or
property. In order to protect these legitimate interests, the artificial intelligence system
must possess them. Only if an artificial intelligence system possesses property can it
protect it. If not, it can only protect the property of others.
The same is true of life, freedom and body. At the moment, the criminal law
protects human life, freedom and body. The question of whether an artificial
intelligence system has analogous life, freedom or body is a question of legal
interpretation, aside from the moral questions involved. Analogously, the corporation,
which is not a human entity, has for decades been recognized by courts as
possessing life, freedom, body and property in the criminal law context.85
However, if the legal question concerning corporations, which are abstract
creatures, has been answered in the affirmative, it would be unreasonable to decide
otherwise in the case of artificial intelligence systems, which physically simulate these
human attributes much better than abstract corporations. For instance, a prison guard

83
Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231, 1255–
1258 (1992).
84
ISAAC ASIMOV, I, ROBOT 40 (1950).
85
See, e.g., United States v. Allegheny Bottling Company, 695 F.Supp. 856 (1988); John
C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry into the Problem
of Corporate Punishment, 79 MICH. L. REV. 386 (1981).

artificial intelligence system is attacked by escaping prisoners during a prison escape. They intend to tie it up in order to incapacitate its ability to interfere with their
escape.
Would it not be legitimate for the artificial intelligence system to defend its
mission and protect its freedom? If they intend to cut off its arms, would it not be
legitimate for the artificial intelligence system to defend its mission and protect its
“body”? If the answers are analogous to those for corporations, they should be positive.
In any event, even if the idea of an artificial intelligence system’s “life”, “body” or
“freedom” is not accepted, despite the analogy to corporations and despite
the positive moral attitude, the artificial intelligence system still fulfills the first
condition of self-defense by protecting another human’s life, body, freedom or
property.
The next two legal conditions of self-defense do not seem to differ
between humans and artificial intelligence systems. The condition of an illegitimate
attack depends on the attacker and not on the defender, whether the defender is
human or not. The legitimacy and legality of the attack is not affected by the
defender’s identity. The condition of an immediate and actual danger to the protected
interest does not depend on the defender either. The nature of the danger is constituted
by the attacker, not by the defender. Consequently, this condition is fulfilled not
through the defender’s behavior, but through the analysis of the attack, independently
of the defender’s identity.
The condition of a repelling, proportional and immediate act may raise another
question: would it be legitimate for an artificial intelligence system to attack a human
person, even as a repelling, proportional and immediate reaction to that human’s
attack? This question has two sub-questions. The first concerns the legitimacy of
preferring the defender’s rights over the attacker’s, regardless of their identity as
humans, corporations or artificial intelligence systems. The second concerns any
restriction on artificial intelligence systems as defenders that differs from the restrictions on humans or
corporations. Both sub-questions have legal as well as moral aspects.
The first sub-question is generally answered through the risk taken by the
attacker. Society creates peaceful social mechanisms for dispute resolution,
such as legal proceedings, arbitration, mediation, etc. The attacker chooses not to
use these peaceful mechanisms, but rather to use illegal violence. By acting this
way and bypassing the legal mechanisms of dispute resolution, the attacker takes
the risk of provoking a reaction against the illegal attack. The criminal law, in this case,
prefers the innocent reaction over the illegal action.86 This answer sees no difference
between the various types of attackers and defenders.
The second sub-question may have different moral answers, but none of them is
relevant to the legal answer. Thus, despite Asimov’s first “law”, which prohibits

86
JUDITH JARVIS THOMSON, RIGHTS, RESTITUTION AND RISK: ESSAYS IN MORAL THEORY 33–48 (1986);
Sanford Kadish, Respect for Life and Regard for Rights in the Criminal Law, 64 CAL. L. REV.
871 (1976); Patrick Montague, Self-Defense and Choosing Between Lives, 40 PHIL. STUD.
207 (1981); Cheyney C. Ryan, Self-Defense, Pacificism, and the Possibility of Killing, 93 ETHICS
508 (1983).

artificial intelligence systems from harming humans, artificial intelligence systems are actually used in various ways to harm humans. Using artificial intelligence systems
in military and police roles (e.g., soldiers, guards, prison guards, armed drones,
etc.) inherently entails the plausible possibility of causing harm to humans.
As long as this has not been explicitly prohibited by criminal law, it is
legitimate. It seems that legal systems around the world have not only made their legal
choice in this matter, but have made their moral choice as well, rejecting
Asimov’s first “law” as overly alarmist. Consequently, the
criminal law sees no legal problem with the possibility that an artificial intelligence
system would protect legitimate interests illegally attacked by humans. This question
would not even have been raised had the attacker been an artificial intelligence system,
would it? Whether the defender is human or not, the reaction must be repelling,
proportional and immediate.
These attributes of the reaction are measured identically whether the defender
is human or not. “Repelling” means that the act is a reaction to the attack, which is
the cause of the reaction. “Immediate” means that the defending reaction is
subsequent to the attack. The reaction is not illegal revenge, but a subsequent act
intended to neutralize the threat. “Proportional” means that the defender does not
use excessive force to neutralize the threat. The defender should evaluate the
threat and the means that may be used to neutralize it. Eventually, the defender
should choose means that are not excessive in relation to the specific threat
under the particular circumstances.
Proportionality resembles reasonableness in many ways, and some understand
proportionality as part of reasonableness. Consequently, for the fulfillment of this
condition, the artificial intelligence system must have the capability of reasonableness.
This capability is required for negligence as well.
Finally, it is required that the defender did not control the attack or the conditions
for its occurrence. This condition is intended to impose criminal liability upon
persons who brought the attack upon themselves and attempt to escape criminal liability
through the self-defense argument. The court is bound to
search for the underlying cause of the attack. If the defender was innocent in this context,
the defense is applicable. This condition is not applied differently for humans,
corporations or artificial intelligence systems. As long as the artificial intelligence
entity was not part of such a plot, the general defense of self-defense is applicable
to it, and this may be proven through the artificial intelligence system’s records.
Consequently, it seems that the general defense of self-defense may be applicable
to artificial intelligence systems.

5.2.2.2 Necessity
Could an artificial intelligence system be considered as acting under necessity in the
criminal law context? Necessity is an in rem defense from the same “family” as self-
defense. Both partly limit the thorough applicability of the general concept
of society’s monopoly on power, discussed above.87 The major difference

87
Above at Sect. 5.2.2.1.

between self-defense and necessity is in the identity of the reaction’s object. In self-defense the defender’s reaction is directed against the attacker, whereas in necessity it is
directed against an innocent object (an innocent person, property, etc.). The innocent object is
not necessarily connected to the cause of the reaction.
For instance, two persons are sailing in a boat on the high seas. The boat crashes
into an iceberg and sinks. Both persons survive on an improvised raft,
but with no water or food. After a few days, one of them eats the other in order to
survive.88 The eaten person did not attack the eater and was not to blame for the crash;
he was completely innocent. So was the eater, but he knew that if he did not eat the
other person, he would certainly die. If the eater is indicted for the murder of the other
person, he may argue necessity. Self-defense is not relevant in this case, since
the eaten person did not carry out any attack against the eater.
The traditional approach towards necessity is that under the right circumstances
it may justify the commission of offenses (quod necessitas non habet legem).89 The
traditional reason is the criminal law's understanding of the weaknesses of human
nature. The individual who acts under necessity is considered to be choosing
the lesser of two evils, from his own point of view.90 In the above example, if the
eater chooses not to eat the other person, both would die. If he chooses to eat,
only one of them would die. Both situations are "evil", but the lesser "evil" of the
two is the one in which one person survives. The victim of necessity is not considered
to blame for anything, but innocent, and still the act would be justified.91
Since the act of necessity is a self-help act of the individual, performed when the
authorities are not available, the general defense of necessity partly reduces the thorough
applicability of the general concept of the society's monopoly over power,
discussed above. Being in a situation that requires an act of necessity is considered as
negating the individual's fault required for the imposition of criminal liability. This
concept has been accepted by legal systems around the world since ancient times.92
Over time this defense has become broader and more precise. Its modern basis is to enable
the individual to protect legitimate interests through choosing the lesser of two evils, as

88
See, e.g., United States v. Holmes, 26 F. Cas. 360, 1 Wall. Jr. 1 (1842); Dudley and Stephens,
[1884] 14 Q.B. D. 273.
89
W. H. Hitchler, Necessity as a Defence in Criminal Cases, 33 DICK. L. REV. 138 (1929).
90
Edward B. Arnolds and Norman F. Garland, The Defense of Necessity in Criminal Law: The
Right to Choose the Lesser Evil, 65 J. CRIM. L. & CRIMINOLOGY 289 (1974); Lawrence P. Tiffany
and Carl A. Anderson, Legislating the Necessity Defense in Criminal Law, 52 DENV. L. J.
839 (1975); Rollin M. Perkins, Impelled Perpetration Restated, 33 HASTINGS L. J. 403 (1981).
91
Long v. Commonwealth, 23 Va.App. 537, 478 S.E.2d 324 (1996); State v. Crocker, 506 A.2d
209 (Me.1986); Humphrey v. Commonwealth, 37 Va.App. 36, 553 S.E.2d 546 (2001); United
States v. Oakland Cannabis Buyers’ Cooperative, 532 U.S. 483, 121 S.Ct. 1711, 149 L.Ed.2d
722 (2001); United States v. Kabat, 797 F.2d 580 (8th Cir.1986); McMillan v. City of Jackson,
701 So.2d 1105 (Miss.1997).
92
BENJAMIN THORPE, ANCIENT LAWS AND INSTITUTES OF ENGLAND 47–49 (1840, 2004); Reniger
v. Fogossa, (1551) 1 Plowd. 1, 75 E.R. 1, 18; Mouse, (1608) 12 Co. Rep. 63, 77 E.R. 1341;
MICHAEL DALTON, THE COUNTREY JUSTICE ch. 150 (1618, 2003).

aforesaid. Consequently, there are several conditions for entering the sphere of this
general defense:

(a) The protected interest should be legitimate. Legitimate interests are life,
freedom, body and property—of the individual or of other individuals. No
previous acquaintance between them is required93;
(b) The protected interest should be in an immediate and actual danger94;
(c) The act (of necessity) is directed towards an external or innocent interest95;
(d) The act (of necessity) should neutralize the danger and be proportional to
it96 and immediate97; and
(e) The defender did not control the causes of the danger or the conditions for
its occurrence (actio libera in causa).

If all these conditions are fulfilled, the individual is considered to be acting under
the necessity defense, and accordingly no criminal liability is imposed upon him for the
commission of the offense. Thus, not every neutralization of a danger through
causing harm to an innocent interest may be considered necessity, but only an act
that satisfies the above conditions in full. On that basis the question is whether the
general defense of necessity is applicable for artificial intelligence systems. The
answer to this question depends on the artificial intelligence system's
capability of fulfilling the above conditions in the relevant situations.
Four of these conditions [(a), (b), (d) and (e)] are identical to the self-defense
conditions, mutatis mutandis. Instead of an attack on the legitimate interest, it
would be an actual danger to that very interest. The main difference between self-

93
United States v. Randall, 104 Wash.D.C.Rep. 2249 (D.C.Super.1976); State v. Hastings,
118 Idaho 854, 801 P.2d 563 (1990); People v. Whipple, 100 Cal.App. 261, 279 P. 1008 (1929);
United States v. Paolello, 951 F.2d 537 (3rd Cir.1991).
94
Commonwealth v. Weaver, 400 Mass. 612, 511 N.E.2d 545 (1987); Nelson v. State, 597 P.2d
977 (Alaska 1979); City of Chicago v. Mayer, 56 Ill.2d 366, 308 N.E.2d 601 (1974); State v. Kee,
398 A.2d 384 (Me.1979); State v. Caswell, 771 A.2d 375 (Me.2001); State v. Jacobs, 371 So.2d
801 (La.1979); Anthony M. Dillof, Unraveling Unknowing Justification, 77 NOTRE DAME L. REV.
1547 (2002).
95
United States v. Contento-Pachon, 723 F.2d 691 (9th Cir.1984); United States v. Bailey,
444 U.S. 394, 100 S.Ct. 624, 62 L.Ed.2d 575 (1980); Hunt v. State, 753 So.2d 609 (Fla.
App.2000); State v. Anthuber, 201 Wis.2d 512, 549 N.W.2d 477 (App.1996).
96
State v. Fee, 126 N.H. 78, 489 A.2d 606 (1985); United States v. Sued-Jimenez, 275 F.3d 1 (1st
Cir.2001); United States v. Dorrell, 758 F.2d 427 (9th Cir.1985); State v. Marley, 54 Haw.
450, 509 P.2d 1095 (1973); State v. Dansinger, 521 A.2d 685 (Me.1987); State v. Champa,
494 A.2d 102 (R.I.1985); Wilson v. State, 777 S.W.2d 823 (Tex.App.1989); State v. Cram,
157 Vt. 466, 600 A.2d 733 (1991).
97
United States v. Maxwell, 254 F.3d 21 (1st Cir.2001); Andrews v. People, 800 P.2d
607 (Colo.1990); State v. Howley, 128 Idaho 874, 920 P.2d 391 (1996); State v. Dansinger,
521 A.2d 685 (Me.1987); Commonwealth v. Leno, 415 Mass. 835, 616 N.E.2d 453 (1993);
Commonwealth v. Lindsey, 396 Mass. 840, 489 N.E.2d 666 (1986); People v. Craig, 78 N.Y.2d
616, 578 N.Y.S.2d 471, 585 N.E.2d 783 (1991); State v. Warshow, 138 Vt. 22, 410 A.2d 1000
(1979).

defense and necessity lies within one condition. Whereas in self-defense the act is
directed towards the attacker, in necessity the act is directed towards an external or
innocent interest. In necessity the defender must choose the lesser of two
evils, one of which is causing harm to an external interest, possibly an
innocent person who has nothing to do with the danger.
The question regarding artificial intelligence systems in this context is whether
they possess the capability of choosing the "lesser of two evils". For instance, an
artificial intelligence locomotive drone is transporting 20 passengers. The
drone arrives at a junction of two tracks. On one track a child is playing, while
the other track ends at a nearby cliff. If the drone chooses the first track, the child
would certainly die, but the 20 passengers would survive. However, if the drone
chooses the second track, the child would survive, but due to the train's velocity and the
distance from the cliff, the train would certainly fall off the 200 ft cliff and no
passenger would survive.
If the locomotive were driven by a human driver, and that driver had chosen the
first track (the one with the child on it), no criminal liability would have been
imposed upon him due to the general defense of necessity. An artificial intelligence
system may calculate the probabilities for each possibility and choose the possibil-
ity with minimum casualties. Strong artificial intelligence systems are already used
to predict very complicated events (e.g., climate computers), and calculating
the probabilities in the above example is considered much simpler. The artificial
intelligence system's analysis of the case would probably reveal the same two
possibilities available to the human driver.
If the artificial intelligence system takes into consideration the number of
probable casualties, it would probably choose to run over the child. This consideration
may stem from the basic programming of the system or from relevant machine
learning. In such a choice, all conditions of necessity are fulfilled. Therefore, had it
been human, no criminal liability would have been imposed due to the general
defense of necessity. Why would the artificial intelligence system be treated
differently? Moreover, if the artificial intelligence system chooses the other possi-
bility and causes not the lesser but the greater of two evils, society would probably
want to impose criminal liability (upon the programmer, the user or the artificial
intelligence system), exactly as if the artificial intelligence system were
human.
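How such a choice might be computed can be sketched very simply. The following
Python fragment is a minimal illustration only, assuming invented option names,
probabilities and casualty figures; it does not describe any actual locomotive system.

# Illustrative sketch: choosing the course of action with the minimum expected casualties.
# The options, probabilities and casualty figures are hypothetical.

def expected_casualties(option):
    # Sum of probability * casualties over the possible outcomes of one option.
    return sum(p * n for p, n in option["outcomes"])

options = [
    {"name": "first track (child on the track)",
     "outcomes": [(1.0, 1)]},    # the child dies with near certainty
    {"name": "second track (ends at the cliff)",
     "outcomes": [(1.0, 20)]},   # all 20 passengers die with near certainty
]

# The "lesser of two evils": the option whose expected harm is lowest.
choice = min(options, key=expected_casualties)
print(choice["name"], expected_casualties(choice))

Whether such a calculation also satisfies the remaining legal conditions of necessity
is, as explained above, a separate question for the court.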
Of course, such choices and decisions may raise moral dilemmas, e.g., is it
legitimate for an artificial intelligence system to decide upon human life, or is
it legitimate for an artificial intelligence system to cause human death or severe injury.
However, these dilemmas are no different from the moral dilemmas of self-
defense, discussed above.98 Moreover, the moral questions are not to be taken into
consideration in relation to the criminal liability question. Consequently, it seems
that the general defense of necessity may be applicable for artificial intelligence
systems in a similar way to self-defense.

98
Above at Sect. 5.2.2.1.

5.2.2.3 Duress
Could an artificial intelligence system be considered as acting under duress in the
criminal law context? Duress is an in rem defense from the same "family" as self-
defense and necessity. All partly reduce the thorough applicability of the
general concept of the society's monopoly over power, discussed above.99 The
major difference between self-defense, necessity and duress is in the course of
conduct. In self-defense the defender's reaction repels the attacker, in neces-
sity it is a reaction against an external innocent object, and in duress it is
surrendering to a threat through commission of an offense.
For instance, a retired criminal with expertise in breaking into safes is no longer
active. He is invited by ex-friends to participate in another robbery, where his
expertise is required. He says no. They try to convince him, but he still refuses.
Therefore, they kidnap his son and threaten him that if he does not participate in the
robbery, they will kill his son. He knows them very well and knows that they are
serious. He also knows that if the police are involved, they will kill his son. As a result,
he surrenders to the threat, participates in the robbery and uses his expertise. If
captured, he may argue for duress. Self-defense and necessity are irrelevant here,
since he surrendered to the threat rather than facing it.
The traditional approach towards duress is that under the right circumstances it
may justify the commission of offenses.100 The traditional reason is the criminal
law's understanding of the weaknesses of human nature. Sometimes the individual
would rather commit an offense under threat than face the threat and pay the
price of causing harm to precious interests. Until the eighteenth century the general
defense of duress was applicable to all offenses.101 Later, the Anglo-American
legal systems narrowed its applicability, so that it does not include severe
homicide offenses which require general intent.102
Thus, for instance, in the above example, if it were not robbery but murder, the
general defense of duress would not have been applicable to negate the imposition of
criminal liability, but would have served only as a consideration in punishment. The
reason for the narrow applicability is the sanctity of human life.103 However, this approach has

99
Ibid.
100
John Lawrence Hill, A Utilitarian Theory of Duress, 84 IOWA L. REV. 275 (1999); Rollin
M. Perkins, Impelled Perpetration Restated, 33 HASTINGS L. J. 403 (1981); United States
v. Johnson, 956 F.2d 894 (9th Cir.1992); Sanders v. State, 466 N.E.2d 424 (Ind.1984); State
v. Daoud, 141 N.H. 142, 679 A.2d 577 (1996); Alford v. State, 866 S.W.2d 619 (Tex.Crim.
App.1993).
101
McGrowther, (1746) 18 How. St. Tr. 394.
102
United States v. LaFleur, 971 F.2d 200 (9th Cir.1991); Hunt v. State, 753 So.2d 609 (Fla.
App.2000); Taylor v. State, 158 Miss. 505, 130 So. 502 (1930); State v. Finnell, 101 N.M. 732,
688 P.2d 769 (1984); State v. Nargashian, 26 R.I. 299, 58 A. 953 (1904); State v. Rocheville,
310 S.C. 20, 425 S.E.2d 32 (1993); Arp v. State, 97 Ala. 5, 12 So. 301 (1893).
103
State v. Nargashian, 26 R.I. 299, 58 A. 953 (1904).

many exceptions.104 In general, besides the narrow exception of homicide, duress is
applicable worldwide as a general defense in criminal law. The individual who
acts under duress is considered to be choosing the lesser of two evils, from his own
point of view: the evil of committing an offense as surrender to the threat or the evil
of harm to the legitimate interest.
Since the act of duress is a self-help act of the individual, performed when the authorities
are not available or effective, the general defense of duress partly reduces the thorough
applicability of the general concept of the society's monopoly over power,
discussed above. Being in a situation that requires an act of duress is considered as
negating the individual's fault required for the imposition of criminal liability. The
modern basis of this defense is the understanding that an individual may surrender to a
threat, and not necessarily face it.105 Not all persons are heroes, and no one is
required by law to be a hero.
Consequently, there are several conditions for entering the sphere of this general
defense:

(a) The protected interest should be legitimate. Legitimate interests are life,
freedom, body and property—of the individual or of other individuals, and
no previous acquaintance between them is required106;
(b) The protected interest should be in an immediate and actual danger107;
(c) The act (of duress) is a surrender to the threat;
(d) The act (of duress) should be proportional to the danger108; and
(e) The defender did not control the causes of the danger or the conditions for
its occurrence (actio libera in causa).109

104
People v. Merhige, 212 Mich. 601, 180 N.W. 418 (1920); People v. Pantano, 239 N.Y. 416,
146 N.E. 646 (1925); Tully v. State, 730 P.2d 1206 (Okl.Crim.App.1986); Pugliese
v. Commonwealth, 16 Va.App. 82, 428 S.E.2d 16 (1993).
105
United States v. Bakhtiari, 913 F.2d 1053 (2nd Cir.1990); R.I. Recreation Center v. Aetna Cas.
& Surety Co., 177 F.2d 603 (1st Cir.1949); Sam v. Commonwealth, 13 Va.App. 312, 411 S.E.2d
832 (1991).
106
Commonwealth v. Perl, 50 Mass.App.Ct. 445, 737 N.E.2d 937 (2000); United States v. -
Contento-Pachon, 723 F.2d 691 (9th Cir.1984); State v. Ellis, 232 Or. 70, 374 P.2d 461 (1962);
State v. Torphy, 78 Mo.App. 206 (1899).
107
People v. Richards, 269 Cal.App.2d 768, 75 Cal.Rptr. 597 (1969); United States v. Bailey,
444 U.S. 394, 100 S.Ct. 624, 62 L.Ed.2d 575 (1980); United States v. Gomez, 81 F.3d 846 (9th
Cir.1996); United States v. Arthurs, 73 F.3d 444 (1st Cir.1996); United States v. Lee, 694 F.2d
649 (11th Cir.1983); United States v. Campbell, 675 F.2d 815 (6th Cir.1982); State v. Daoud,
141 N.H. 142, 679 A.2d 577 (1996).
108
United States v. Bailey, 444 U.S. 394, 100 S.Ct. 624, 62 L.Ed.2d 575 (1980); People v. Handy,
198 Colo. 556, 603 P.2d 941 (1979); State v. Reese, 272 N.W.2d 863 (Iowa 1978); State v. Reed,
205 Neb. 45, 286 N.W.2d 111 (1979).
109
Fitzpatrick, [1977] N.I. 20; Hasan, [2005] U.K.H.L. 22, [2005] 4 All E.R. 685, [2005] 2 Cr.
App. Rep. 314, [2006] Crim. L.R. 142, [2005] All E.R. (D) 299.

If all these conditions are fulfilled, the individual is considered to be acting under
the duress defense, and accordingly no criminal liability is imposed upon him for the
commission of the offense. Thus, not every surrender to a threat may
be considered duress, but only an act that satisfies the above conditions in full. On
that basis the question is whether the general defense of duress is applicable for
artificial intelligence systems. The answer to this question depends on the
artificial intelligence system's capability of fulfilling the above conditions in the
relevant situations.
Four of these conditions [(a), (b), (d) and (e)] are almost identical to the self-
defense and necessity conditions, mutatis mutandis. In most legal systems duress
does not require an immediate act, for the threat and danger to the legitimate interest
may be continuous. However, the main difference between self-defense, necessity
and duress lies within one condition. Whereas in self-defense the act is directed
towards the attacker and in necessity the act is directed towards an external or
innocent interest, in duress the act is a surrender to the relevant threat. The commis-
sion of the offense in duress is the surrender to the threat and the way the individual
faces that threat.
The question regarding artificial intelligence systems in this context is whether
they possess the capability of choosing the "lesser of two evils". For instance, a
prison guard artificial intelligence system has captured an escaping prisoner. The
prisoner points a loaded gun at a human prison guard and says that if he is not
released immediately by the artificial intelligence system, he will shoot the
human guard. The artificial intelligence system calculates probabilities and figures
out that the danger is real. If the artificial intelligence system surrenders to the
threat, the human guard's life is saved, but an offense is committed (e.g., acces-
sory to escape). If the artificial intelligence system does not surrender, no offense is
committed and the escape fails, but the human guard is murdered.
If the prison guard who captured the prisoner were human, no criminal liability
would have been imposed upon him due to the general defense of duress, as all
conditions of this defense are fulfilled. An artificial intelligence system may
calculate the probabilities for each possibility and choose the possibility with
minimum casualties. Strong artificial intelligence systems are already used to
predict very complicated events (e.g., climate computers), and calculating
the probabilities in the above example is considered much simpler. The artificial
intelligence system's analysis of the case would probably reveal the same two
possibilities available to a human prison guard.
If the artificial intelligence system takes into consideration the probability of
casualties, it would probably choose to surrender to the threat. This consideration
may stem from the basic programming of the system or from relevant
machine learning. In such a choice, all conditions of duress are fulfilled. Therefore,
had it been human, no criminal liability would have been imposed due to the general
defense of duress. Why would the artificial intelligence system be treated differently? More-
over, if the artificial intelligence system chooses the other possibility and causes not
the lesser but the greater of two evils, society would probably want to impose

criminal liability (upon the programmer, the user or the artificial intelligence
system), exactly as if the artificial intelligence system were human.
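The prison-guard scenario can be sketched in the same spirit. The following Python
fragment is again only an illustration under assumed figures; the estimated
credibility of the threat and the harm values are hypothetical and are not drawn
from any real system.

# Illustrative sketch: surrendering to a threat vs. refusing, compared by expected harm.
# The probability and harm figures are hypothetical and share one arbitrary scale.

p_threat_real = 0.95                     # system's estimate that the prisoner will shoot
harm_if_surrender = 1.0                  # an offense is committed (accessory to escape)
harm_if_refuse = p_threat_real * 100.0   # expected harm if the guard is likely killed

decision = "surrender" if harm_if_surrender < harm_if_refuse else "refuse"
print(decision, harm_if_surrender, harm_if_refuse)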
Of course, such choices and decisions may raise moral dilemmas, e.g., is it
legitimate for an artificial intelligence system to decide upon human life, or is
it legitimate for an artificial intelligence system to cause, directly or indirectly, human
death or severe injury. However, these dilemmas are no different from the moral
dilemmas of self-defense and necessity, discussed above.110 Moreover, the
moral questions are not to be taken into consideration in relation to the criminal
liability question. Consequently, it seems that the general defense of duress may be
applicable for artificial intelligence systems in a similar way to self-defense and
necessity.

5.2.2.4 Superior Orders


Is an artificial intelligence system on official duty that commits an offense under
superior orders protected from criminal liability? The general defense of superior
orders is relevant for individuals who serve under official duties in authoritarian
hierarchical official organizations, e.g., the army, the police, rescue forces, etc. These
individuals are often required to act against their natural instinct. The natural
instinct towards a huge fire is to escape, not to enter it and save trapped
persons. For such missions one of the strongest factors of success is discipline.
When the soldier is disciplined, he is likely to fulfill the mission, even if it
involves risks to life. Therefore, when a soldier is drafted, the first thing he learns is
discipline.
However, sometimes the orders from superiors contradict the criminal law, as
the performance of the order includes the commission of an offense. There are two
extreme models of solutions. The first is the absolute defense model, in which all
actions under superior orders are protected.111 This model places discipline above
the rule of law. Only the commanders are to be criminally liable according to this
model. The other extreme model is the absolute responsibility model, in which no
action is protected from criminal liability, even if performed under superior order.112
This model places the rule of law above discipline.
Under the first extreme model the individual has no discretion in committing
offenses. This model enabled the commission of war crimes and crimes against
humanity during World War II. Under the second, the individual must be an expert
in criminal law, or must be accompanied by an attorney at all times, in order not to be
criminally liable. The disadvantages of these models led to the creation of
moderate models. The common moderate model is the manifestly illegal order

110
Above at Sects. 5.2.2.1 and 5.2.2.2.
111
Michael A. Musmanno, Are Subordinate Officials Penally Responsible for Obeying Superior
Orders which Direct Commission of Crime?, 67 DICK. L. REV. 221 (1963).
112
Axtell, (1660) 84 E.R. 1060; Calley v. Callaway, 519 F.2d 184 (5th Cir.1975); United States
v. Calley, 48 C.M.R. 19, 22 U.S.C.M.A. 534 (1973).

model. Accordingly, the individual is protected from criminal liability unless he
performs a manifestly illegal order. If the individual performs an illegal order which
is not manifestly illegal, he is protected.113
The ultimate question is, of course, what distinguishes an illegal order from a
manifestly illegal one. Both orders objectively contradict the law, but the
manifestly illegal order contradicts public policy as well. Every society
has its public policy, which consists of its common values. These values may be
moral, social, cultural, religious, etc. The public policy reflects the basic values of
the particular society. Different societies have different values and different public
policies. A manifestly illegal order is an order which harms these values, and thus
harms the society's public policy and its self-image.
For instance, a soldier is ordered to rape a civilian who resists the military
operation. Rape is always illegal, but in this situation the order is manifestly illegal, as it
violates the basic values of modern western society. There is no precise and
conclusive test for distinguishing between illegal and manifestly illegal
orders, since public policy is dynamic, changing with time, population and social
trends. However, public policy may be learned, in most cases inductively, from case
to case. Consequently, there are two main conditions for the applicability of the
general defense of superior orders:

(a) Hierarchical subordination to an authorized public authority; and
(b) A superior order which requires obedience and is not manifestly illegal.

If both these conditions are fulfilled, the individual is considered to be acting under
the superior orders defense, and accordingly no criminal liability is imposed upon him
for the commission of the offense. Thus, not every time an individual obeys a
superior order is the general defense applicable, but only when the act satisfies the
above conditions in full. On that basis the question is whether the general defense of
superior orders is applicable for artificial intelligence systems. The answer to this
question depends on the artificial intelligence system's capability of fulfilling
the above conditions in the relevant situations.
The first condition relates to objective characteristics of the relationship
between the individual and the relevant organization.114 It requires hierarchical
subordination to an authorized public authority, so that the system of hierarchical orders
is legitimate and operative. Such systems exist in the army, the police,
etc. However, private organizations have no authority to commit offenses. Artificial
intelligence systems are in use in many of these organizations: in military use,
police use, prison use, etc. These systems are operated under superior orders in their
regular activity. The tasks given to these systems are various.

113
A. P. ROGERS, LAW ON THE BATTLEFIELD 143–147 (1996).
114
Jurco v. State, 825 P.2d 909 (Alaska App.1992); State v. Stoehr, 134 Wis.2d 66, 396 N.W.2d
177 (1986).

The second condition relates to the characteristics of the given superior order.
The order must require obedience, otherwise it cannot be considered an order. As to
its content, the order should not be manifestly illegal. If the order is legal, or illegal
but not manifestly illegal, it satisfies this condition. The classification of the order as
illegal or manifestly illegal is determined by the court. However, it may be learned
inductively from case to case. Artificial intelligence systems which are equipped
with machine learning utilities have the capability of inferring, at least, the general
outlines of the manifestly illegal order.
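This kind of inductive, case-by-case learning can be illustrated with a toy text
classifier. The sketch below (Python, using the scikit-learn library) is purely
hypothetical: the example orders, their labels and the new order are invented for
illustration and say nothing about how any actual military or police system is built.

# Toy sketch of learning the outlines of a "manifestly illegal" order from past cases.
# The training examples and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

past_orders = [
    "guard the checkpoint overnight",              # legal
    "detain the suspect and hand him to police",   # legal
    "fire warning shots over the protesters",      # illegal, arguably not manifestly so
    "execute the captured prisoners",              # manifestly illegal
    "burn the civilian village as punishment",     # manifestly illegal
]
labels = [0, 0, 0, 1, 1]   # 1 = manifestly illegal in these toy cases

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(past_orders)
classifier = LogisticRegression().fit(X, labels)

new_order = "bomb the laboratory in which the terrorist's family lives"
probability = classifier.predict_proba(vectorizer.transform([new_order]))[0][1]
print("estimated probability that the order is manifestly illegal:", probability)

A real system would obviously need far richer case material and human review; the
point is only that the classification described in the text is, in principle, learnable
from precedents.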
For instance, an aircraft artificial intelligence drone is operated by the Air
Force. Its mission is to search for a specific terrorist lab and destroy it. The drone
finds it, delivers the information to headquarters and requests orders.
According to the information, the lab is populated by a known terrorist and his
family. The order is to attack with a heavy bomb. The drone calculates probabilities
and concludes that all the people in the lab would probably die. The drone executes the
order. After the order is executed, the drone's records are examined, and it turns out
that the drone understood the legality of this order to lie in a grey area as to
the terrorist's family, since it could have flown lower and destroyed the lab with fewer
casualties.
If the drone were human, it would probably have argued for the general defense
of superior orders. Since international law accepts such orders in certain
situations, this order may be either legal or illegal, but not manifestly illegal.
Consequently, a human pilot would probably have been acquitted in such a case, for
this general defense would have been applicable. Why would the artificial intelli-
gence system be treated differently? If both the human pilot and the artificial intelligence
system have the same functional discretion and both fulfill the relevant conditions of
this general defense, then there is no legitimate reason for having a double standard
in such cases.
Of course, the artificial intelligence system's criminal liability, if any, does not
affect the superiors' criminal liability, if any (in cases of an illegal order). Such
choices and decisions may also raise moral dilemmas, e.g., is it legitimate
for an artificial intelligence system to decide upon human life, or is it legitimate for
an artificial intelligence system to cause, directly or indirectly, human death or severe
injury. However, these dilemmas are no different from the moral dilemmas
involved in the applicability of other general defenses, discussed above.115 More-
over, the moral questions are not to be taken into consideration in relation to the
criminal liability question. Consequently, it seems that the general defense of
superior orders may be applicable for artificial intelligence systems.

5.2.2.5 De Minimis Defense


Could the general defense of de minimis be applicable for artificial intelligence
systems? In most legal systems the particular offenses are defined and formulated in
a broad manner. This kind of formulation inevitably creates over-inclusion or over-

115
See, e.g., at Sects. 5.2.2.1, 5.2.2.2, and 5.2.2.3.

criminalization, i.e., cases that are not supposed to be considered criminal are
included within the scope of the relevant offenses. Sometimes the criminal
proceedings in these cases would be more socially harmful than useful. For
instance, within the scope of the particular offense of theft comes the case of
a 14-year-old boy who steals his brother's basketball, and the question is whether
this is an appropriate case for criminal proceedings for theft, considering its social
consequences.
The key in most legal systems to solving such problems is granting wider
discretion to the prosecution and the court. The prosecution may decide not to open
criminal proceedings in cases of low public interest. If proceedings are opened, the court
may decide to acquit the defendant due to low public interest. When the prosecution exercises
this discretion, it does so within its administrative discretion. When the court exercises
this discretion, it does so within its judicial power through the general defense of de
minimis. Thus, in general, the general defense of de minimis enables the court to
acquit the defendant due to the low public interest in the particular case.
This type of judicial discretion has been widely accepted since ancient times.
Roman law, for example, determined that criminal law does not extend to
minor and petty matters (de minimis non curat lex), and that the judge should not be
troubled by such matters (de minimis non curat praetor).116 In modern criminal
law the general defense of de minimis is only seldom exercised by the courts,
due to the wide administrative discretion of the prosecution. However, in the
relevant extreme cases, this judicial discretion may be exercised by the court in
addition to the administrative discretion of the prosecution.117
The basic test for the de minimis defense concerns the social endanger-
ment reflected by the commission of the particular offense. The commission of the
offense should reflect an extremely low social endangerment for the general
defense of de minimis to be applicable.118 Of course, different societies at
different times may perceive different social endangerment in the same offenses,
since social endangerment is dynamically conceptualized through morality, culture,
religion, etc. The relevant social endangerment is determined by the court. On that
basis the question is whether this general defense may be relevant for offenses
committed by artificial intelligence systems.
The de minimis defense applies to the relevant case, regarding all its
relevant aspects, and not necessarily to the offender as such. The personality of
the offender may be taken into consideration, but only as part of assessing the case.
For this reason, there is no difference between humans, corporations or artificial
intelligence systems as to the applicability of this defense. The required low social

116
Vashon R. Rogers Jr., De Minimis Non Curat Lex, 21 ALBANY L. J. 186 (1880); Max L. Veech
and Charles R. Moon, De Minimis non Curat Lex, 45 MICH. L. REV. 537 (1947).
117
THE AMERICAN LAW INSTITUTE, MODEL PENAL CODE – OFFICIAL DRAFT AND EXPLANATORY NOTES
40 (1962, 1985).
118
Stanislaw Pomorski, On Multiculturalism, Concepts of Crime, and the “De Minimis” Defense,
1997 B.Y.U. L. REV. 51 (1997).

endangerment is reflected in the factual event (in rem). For instance, a human
driver's car slips on the road and hits the pavement. No damage is caused to
the pavement or to other property, and of course there are no casualties. This is a
relevant case for de minimis, although it might fall within the scope of several
traffic offenses.
Would the case be legally different if the driver were not human, but an artificial
intelligence drone? Would it have been different if the car belonged to a
corporation? There is no substantive difference between humans, corporations or
artificial intelligence systems as to the applicability of the de minimis defense, espe-
cially when this general defense is directed at the characteristics of the factual
event and not necessarily at those of the offender. Consequently, it seems that the
general defense of de minimis may be applicable for artificial intelligence systems.
6 Punishability of Artificial Intelligence Technology

Contents
6.1 General Purposes of Punishments and Sentencing
    6.1.1 Retribution
    6.1.2 Deterrence
    6.1.3 Rehabilitation
    6.1.4 Incapacitation
6.2 Relevance of Sentencing to Artificial Intelligence Systems
    6.2.1 Relevant Purposes to Artificial Intelligence Technology
    6.2.2 Outlines for Imposition of Specific Punishments on Artificial Intelligence Technology

6.1 General Purposes of Punishments and Sentencing

In determining the type and measure of punishment to be imposed on the offender,
the court is guided by the general purposes of punishment. The court is expected to
assess and evaluate each case in this context, based on two types of considerations:
data about the offense (in rem) and about the offender (in personam).1 When the
court evaluates these two types of data, it does so through the prism of the general
purposes of punishment. In modern criminal law there are four accepted general
purposes of punishment: retribution, deterrence, rehabilitation, and incapacitation.
These purposes are discussed below.

1
See in general GABRIEL HALLEVY, THE RIGHT TO BE PUNISHED – MODERN DOCTRINAL SENTENCING
15–56 (2013).


6.1.1 Retribution

Retribution is the most ancient purpose of punishment in criminal law. Retribution
is based on the feeling of revenge. In modern criminal law retribution embodies the
contemporary expression of the ancient feeling of revenge. The traditional justifi-
cation of retribution was the social legitimacy of the revenge exacted by the
damaged person from his damager. Carrying out the revenge was meant to satisfy
the sense of revenge of the injured person, who was to experience it as a type of
catharsis. Legitimate revenge is not unique to criminal law. In most human societies
revenge against the damager functioned as an integral part of legal proceedings,
both criminal and civil.2
Punishment is the causing of suffering to the offender for committing the
offense, and retribution is intended to make him pay the price for it. Retribution
emphasizes the necessity to make exact payment, by means of suffering, for the
offense—not more, not less (“suffering for suffering”). In the ancient world the
general assumption was that suffering can be measured objectively, and that there
may be a universal range of suffering that is applicable to any person. This
assumption enabled retribution to be formalized as lex talionis (“an eye for an
eye”). The suffering caused by removing one’s eye was considered to be identical
with that caused by removing any other person’s eye.
This assumption is too general, however, because it ignores the subjective
meaning of suffering. People experience suffering in different ways. Different
people suffer from different things, and the same measures of suffering cause
dissimilar actual suffering in different people. For example, the retributive rule of
“an eye for an eye” does not necessarily produce the same suffering. If an offender
who is blind in one eye removes the eye of another person who has both eyes and is
then punished by the removal of his one seeing eye, the punishment makes him
completely blind, causing much greater suffering than that which he inflicted on the
injured. If, however, his unseeing eye is removed, his situation does not change,
causing him much less suffering than what he inflicted on the injured. Clearly, in
this case the rule of “an eye for an eye” can never produce identical suffering.
The same is true for economic punishment as well. Two thieves are caught for
stealing the same object under the same circumstances. The only difference
between them is their economic situation. The same fine is imposed on both,
based on the suffering caused by the theft. It is clear, however, that the same fine
causes much greater suffering in the poorer thief than it does in the richer one, and
the absence of money and goods caused by the fine is felt much more intensely by
the poor thief than it is by the rich one.
For retribution to be both effective and just, the suffering caused by the punish-
ment must be adjusted to the individual offender, and the suffering caused by the
offense should be accurately matched with the suffering caused by the punishment.
This match requires measuring the suffering from the offender’s point of view,

2
BRONISLAW MALINOWSKI, CRIME AND CUSTOM IN SAVAGE SOCIETY (1959, 1982).

because it is the offender who is the object of the suffering caused by the punish-
ment. Retribution, therefore, measures the subjective price of suffering from the
offender’s point of view.3
The equation that defines the subjective price of suffering has two parts. The first
is the suffering caused by the offender, and it includes the suffering caused to
society as well, not only to the individual victim of the offense.4 When a thief steals
an object from someone he causes suffering to the person from whom he stole as the
victim feels the absence of the stolen object. But this is not the only suffering the act
causes, and not the most important one. The theft also causes suffering to society
through loss of economic security, the need for professional attention to deal with
the theft, the necessity to protect individuals from further thefts, and so on. All
relevant types of sufferings must be taken into consideration when meting out the
offender’s punishment.
The second part of the equation is the subjective price of the suffering as viewed
through the offender’s eyes. The suffering caused by the offender to the victim and
to society must be translated into individual suffering imposed on the offender
through punishment. That subjective price determines the type and amount of
punishment. Naturally, such pricing is limited to the legal punishments accepted
in a given legal system. In most legal systems the suffering caused by the offense is
interpreted in terms of imprisonment, fines, public service, etc. Moreover, the rate
at which these punishments can be imposed is limited by the law.
For example, even if the court translates suffering caused by a theft into a
punishment of 10 years of imprisonment, it is not authorized to punish the thief
for more than 3 years of imprisonment if this is the maximum rate determined by
law.5 Based on this approach to retribution, the court must develop an internal
factual image of the offender that is sufficiently broad to allow it to carry out the
process of pricing. Because the process is subjective for each offender, this subjec-
tivity must be filled with relevant factual data that is crucial for applying proper
retribution in the process of sentencing.
Retribution is considered to be the dominant purpose of punishment, but it is not
the only one, and it does not provide solutions to all the needs of modern sentenc-
ing. Retribution is retrospective (it focuses on past events) and causes suffering to
the offender. As such, it lacks a prospective aspect and it does not provide a solution
to the social need of preventing offenses. Furthermore, retribution does not
provide a solution to the social need of rehabilitating the offender through sentenc-
ing. Therefore, retribution must be complemented by other general purposes of
punishment.

3
NIGEL WALKER, WHY PUNISH? (1991).
4
Paul Butler, Retribution, for Liberals, 46 U.C.L.A. L. REV. 1873 (1999); Michele Cotton, Back
With a Vengeance: The Resilience of Retribution as an Articulated Purpose of Criminal Punish-
ment, 37 AM. CRIM. L. REV. 1313 (2000); Jean Hampton, Correcting Harms versus Righting
Wrongs: The Goal of Retribution, 39 U.C.L.A. L. REV. 1659 (1992).
5
See, e.g., article 242 of the German Penal Code.

This does not diminish the status of retribution as a major purpose of punishment
among the general purposes of punishment. In most modern legal systems retribu-
tion is still considered as the dominant purpose of punishment, and the other three
purposes (deterrence, rehabilitation, and incapacitation) are auxiliary purposes.
Retribution retains the proper connection between the damage caused to society
by the offense and the punishment imposed on the offender. This proportional
sentencing achieves the ex ante prevention of disproportional revenge by society
on the offender.
Retribution can ensure a high level of certainty in the expected punishment.
Certainty is one result of focusing on the actual damage caused by the offense rather
than on potential damage, the offender’s will, or his personality. Retribution does
not neglect the offender’s personal character, and it aims to adjust the proper
suffering to the offender’s subjective attributes. The connection that retribution
aims most to retain is the one between the consequences of the offense and the
punishment being imposed. Retribution can thus assuage the thirst for revenge of
the victims and of society.
Nevertheless, the nature of retribution contains some disadvantages as well, in
areas in which other general purposes of punishment can offer solutions. As noted
above, retribution is a manifestation of the desire to make the offender suffer for
his injurious acts (lex talionis), a desire that does not take into consideration
prospective social consequences. Retribution may be the basis for punishment
even if no direct social benefit is expected to ensue from that punishment. Thus,
from the point of view of retribution, the future social consequences of the punish-
ment are entirely immaterial.
If retribution is completely indifferent to the social benefit of punishment, it may
raise questions about its efficiency with respect to social values. Proportional
punishment may be socially deterring and may deter the offender from reoffending,
but from the point of view of retribution this effect is entirely insignificant6; if a
punishment has no deterrence value at all, it is still considered proper punishment.
In the eighteenth century Immanuel Kant justified retribution and supported the
punishment of the last offender on earth even if mankind were about to become extinct,
in the name of retribution, which is blind to future social benefits.7
Retribution does not distinguish between different types of offenders who may
require different types of social treatment in order to prevent further delinquency on
their part. For example, a recidivist may require different social treatment than a
first offender.8 A thief who commits ten identical thefts and each time is captured,
convicted, sentenced, imprisoned, and released, after which he immediately
commits another theft would be justifiably punished each time with the same

6
Ledger Wood, Responsibility and Punishment, 28 AM. INST. CRIM. L. & CRIMINOLOGY 630 (1938).
7
IMMANUEL KANT, METAPHYSICAL ELEMENTS OF JUSTICE: PART I – THE METAPHYSICS OF MORALS
102 (trans. John Ladd, 1965).
8
Gabriel Hallevy, Victim’s Complicity in Criminal Law, 2 INT’L J. PUNISHMENT & SENTENCING
74 (2006).

punishment, as far as the purposes of retribution are concerned, although it is clear
that the punishment is completely ineffective for that offender.
Retribution is not actually concerned with or affected by the criminal record of the
offender, only by the characteristics of the offense, especially by the damage it
caused. Although the subjective price of the suffering embodied in retribution takes
into account the personal characteristics of the offender, this is only for the purpose
of matching the adequate suffering to him, not in order to rehabilitate, deter, or
incapacitate him. A better future for mankind is not an issue for retribution, which
considers the actual damage caused by the offense but not the social endangerment
reflected in delinquency. Consequently, retribution cannot suggest any solution to
social endangerment considerations in criminal law.
A prime example is the punishment of criminal attempts. Commission of a
criminal attempt includes the failure to complete the offense. Therefore, in most
cases of criminal attempt no actual damage is caused to anyone. For example, a
person aims a gun at someone and pulls the trigger fully intending to kill that
person, but the gun malfunctions and nothing happens. Retribution recommends a
lenient punishment, if any, for criminal attempt, but criminal attempts represent
extremely high social endangerment. For example, the offender who failed in his
attempt will commit further attempts until his purpose is achieved, a problem that
retribution does not address at all.
Socially, retribution does not incorporate any attempt to address the root
problems that caused the offender to become one, nor does it pretend to try to
solve these problems. The motivation to solve these problems is generally rooted in
the desire to prevent recidivism. But having no prospective considerations at all,
retribution does not try to deter, rehabilitate, or incapacitate the delinquent
capabilities of the offender. Retribution is not prospective and does not take into
account the social effects or benefits of punishment. It examines the factual
components of the offense narrowly, to the exclusion of the wide social
considerations of punishment and sentencing. Therefore, retribution must be
complemented by other, prospective, general purposes of punishment, namely
deterrence, rehabilitation, and incapacitation.9 These purposes are discussed below.

6.1.2 Deterrence

Deterrence is a modern purpose of punishment in criminal law. It is based on the
assumption that the offender is a rational person and therefore examines the
expected costs and benefits of committing or not committing the offense. The
examination takes place in the offender’s mind before the decision to commit the

9
ANDREW VON HIRSCH, DOING JUSTICE: THE CHOICE OF PUNISHMENT 50 (1976). Compare United States
v. Bergman, 416 F.Supp. 496 (S.D.N.Y.1976); Richard S. Frase, Limiting Retributivism, PRINCI-
PLED SENTENCING: READINGS ON THEORY AND POLICY 135 (Andrew von Hirsch, Andrew Ashworth
and Julian Roberts eds., 3rd ed., 2009).

offense. Deterrence is a prospective purpose of punishment because it relates to the
future alone. From the point of view of the offender and of society, deterrence does
not address the offense already committed but further offenses. Deterrence is not
relevant for the past, as no offender can be deterred retroactively. Consequently,
individuals are always deterred only from the commission of further offenses in the
future; the commission of further offenses is a phenomenon known as recidivism.
Thus, deterrence is intended to prevent recidivism of the offender. The offense
that has already been committed serves only as the initial trigger for activating the
deterrence, but it is not in itself addressed by the deterrence. This aspect of
deterrence may be considered to express social maturity. Society understands that
it cannot change the past and make the social harm caused by the offense disappear
entirely. No punishment has such capability. But society also understands that it can
impose punishments in order to create a better society and to improve the social
environment. Deterrence is guided by the expected social benefit in the future,
whereas retribution focuses on the past. If retribution affects the future it does so as a
by-product. Even if a punishment has absolutely no effect on the future, it may still
be considered legitimate because of the retribution it metes out, but not because of
its deterrence.
There are two levels of deterrence: individual and public. Individual deterrence
is aimed at deterring the individual from recidivism, as noted above. The individual
to be deterred in this case is the offender. By contrast, public deterrence is aimed at
deterring potential offenders (the public) by punishing the individual offender. The
public as a whole is considered to be potential offenders in this context. Individual
deterrence is the basis of deterrence as a general purpose of punishment. Public
deterrence is a controversial expansion of the former.
In the case of individual deterrence, society considers the realistic possibility
that the offender, who has just committed an offense, may continue to commit
further offenses as long as the benefit he derives from the commission of the offense
is higher than its costs and than the benefit of not committing it. The sheer
commission of a single offense is an adequate basis for suspecting that the offender
is considering additional offenses. Any offender is considered as potential offender
of further offenses. Deterrence functions as a behavioral motivation for the offender
not to commit further offenses in light of the offender’s experience with the current
offense and its consequences.
In general, in order to motivate individuals, society can use positive incentives
(“rewards”) and negative ones (“punishments”). Society can grant positive
incentives for behavior it encourages, although not engaging in this behavior is
not considered wrongful. For example, society may use the tax code to encourage
the business activity of corporations by setting a lower tax rate for corporations than
for individuals. But if an individual prefers to do business not through a corporation,
this is not considered wrong. Positive incentives encourage individuals to behave in
a certain way, but they do not mandate engaging in that behavior.
Negative incentives are aimed at exercising a different type of social control and
direction of behavior. The negative incentive is intended to deter individuals from
acting in a certain way and from deviating from certain types of behavior. The

individual who is prevented from deviating receives no reward, but if he deviates a
punishment is imposed. The negative incentive is a characteristic of criminal law:
“good” behavior is not rewarded, whereas “bad” behavior is punished.
For most individuals, negative incentives serve as a much stronger deterrent than
non-entitlement to positive incentives does. Not being entitled to positive
incentives does not worsen the situation of an individual, whereas negative
incentives can do so. When positive incentives are used, the worst situation is the
current situation, which can only be better or remain unchanged. When negative
incentives are applied, the situation can become much worse. Therefore, in critical
social situations that require firm social intervention negative incentives are used,
and delinquency is considered to be the ultimate case for this type of social
intervention through criminal law.
From the point of view of the individual’s socio-economic situation, the com-
mission of an offense may produce benefits, which form the basic incentive for the
commission of the offense. The benefit may be tangible (e.g., money left in the
offender’s pocket as a result of tax evasion or fraud), but it can also be abstract, as in
the case of a purely mental personal satisfaction (e.g., the satisfaction of killing an
enemy or the sexual satisfaction derived from committing a sex offense). The
benefit derived from the commission of the offense can be both tangible and
abstract, and at times the personal satisfaction follows from the very commission
of the offense, without any further benefit.
On the other side of the scale, balancing these benefits, stands the punishment for
the commission of the offense. The punishment includes not only the formal punishment in criminal
law (e.g., imprisonment, fine, etc.), but any additional inconvenience involved in
the criminal process, including the public humiliation, legal fees, loss of time, fears
of uncertainty, etc. The respective values of the benefits and of the punishment are
subjectively determined by each individual. Different individuals may assign dif-
ferent values to the same benefits and punishments. Fines in the same amount are
valued differently by poor and rich individuals, and the same is true for the same
amount of money obtained by fraud.
It would stand to reason that a rational individual considers the values of the
benefits and of the punishment, compares the two, and decides accordingly whether
or not to commit the offense. This is not true, however. A direct linkage between the
benefit obtained by the offense and punishment imposed as a result of it is unrealis-
tic. Not all offenders who commit offenses are immediately captured and punished.
An important ex ante consideration is the risk of being caught or captured. This risk
is an integral part of the rational individual’s consideration of whether or not to
commit the offense.
If all factual data were known to the individual, he would also know whether or
not he would be caught. But in most cases the individual acts under factual
uncertainty. The risk is probabilistic and changes under the influence of various
factors, some objective, others not. These factors include the type of the offense
(easy or difficult to be concealed), the professionalism of the offender, the duration
of the offense, the number of accomplices, etc. The individual would consider all
these factors if he were aware of them.

In sum, what the individual considers is not benefits vs. punishment but the
expected value of the benefits (if not caught) vs. the expected value of the punish-
ment (if caught). For the rational individual, it pays to commit the offense if the
expected value of the benefits is greater than the expected value of the punishment.
This reality can be expressed through the following inequality10:

W · (1 − R) > P · R

Naturally, the situation described in this formula is not acceptable for society.
When it pays to commit offenses in a given society, the social fabric is in danger
and the negative incentive is not sufficient to avoid delinquency. In these cases, to
make legal social control effective, society must increase the value of the right side
of the inequality (P·R) by increasing the level of punishment (P) or the risk of being
caught (R). The question is which option is more effective.
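The effect of the two options on the inequality can be illustrated numerically. The
Python sketch below is a toy model under invented figures; the values chosen for W,
R and P are hypothetical and correspond to no real offense.

# Toy illustration of the deterrence inequality W * (1 - R) > P * R, where W is the
# value of the benefit, R the risk of being caught, and P the value of the punishment.
# All figures are hypothetical.

def pays_to_offend(W, R, P):
    # True if the expected benefit exceeds the expected punishment.
    return W * (1 - R) > P * R

W, R, P = 1000.0, 0.2, 3000.0
print(pays_to_offend(W, R, P))        # True: 800 > 600, committing the offense "pays"

# Raising the level of punishment (P) threefold:
print(pays_to_offend(W, R, P * 3))    # False: 800 > 1800 fails, so the offender is deterred

# Alternatively, raising the risk of being caught (R) to 0.5:
print(pays_to_offend(W, 0.5, P))      # False: 500 > 1500 fails, so the offender is deterred

Note that raising R works on both sides of the inequality at once, which is part of
why the comparison between the two options, discussed next, is not trivial.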
Increasing the value of punishment (P) may cause difficulties. This is the
cheapest solution for society, as amending the sanction clause of an offense requires
little effort on the part of legislators. If the punishment is a fine, this may increase
the revenues of the state, and if the punishment is imprisonment it may increase the
expenses of the state. In either case, from the point of view of the offender, the real
value of the punishment remains subjective, as noted above.11 Thus, raising the
level of punishment for a given offense is not necessarily effective for any given
offender.
Society may also use secondary means to increase the value of the punishment,
in addition to amending the sanction clause (for example by publicizing the
offender’s suffering and humiliation), but the primary means remains increasing
the rate of punishment. Most states use this means regularly when faced with
delinquency of a certain type. In general, the value of punishment is first determined
according to the presumed preferences of society and the presumed severity of the
offense. Thus, because murder is considered more severe than theft, the punishment
for murder is harsher than the punishment for theft.
Offenses are reexamined when deterrence becomes relevant. If the offense is
committed regularly, the sanction may be interpreted as inadequate to create the
required deterrence, and the legislators are likely to raise the level of the punish-
ment. Before this step is taken, however, the courts may impose harsher
punishments within the limits of the existing offense. But legislators cannot increase the level of punishment indefinitely. Each society has its upper limits for punishment, beyond which harsher punishments are considered illegitimate, illegal, or not feasible.
In societies that accept the capital penalty, the upper limit of punishment is
capital penalty with full confiscation of property. In other societies the upper limit is

10
Where W is the value of the benefit, R is the risk of being caught, and P is the value of the
punishment.
11
From a broader point of view, this may prospectively reduce the state's expenses, if the sanction is effective. If delinquency is prevented or reduced, some of the state's resources may become available for other social tasks.

lower. The question is how society should act when the punishment has already exceeded the upper limit and the offense is still being committed. Making the
point of view, the value of punishment is continuously eroding.12 For the recidivist
offender the deterrence of punishment is at its highest when the punishment is
imposed for the first time. Each subsequent time that the punishment is imposed its
deterrent value erodes.
Thus, courts would have to impose increasingly harsher punishments on recidi-
vist offenders in order to achieve deterrence. When the punishment reaches the
upper limit of the offense, no harsher punishment can be imposed in order to
increase deterrence and society has a serious problem with that offender: the
maximum punishment does not deter the offender, who keeps committing the
offense.
Nevertheless, increasing punishment (P) is not the best way of increasing the
expected value of the punishment (P·R). Increasing P has its advantages as it is
inexpensive, focuses on the substantive law, and is a simple method. But it is also
possible to increase the expected value of the punishment by increasing the risk of
being caught (R). Increasing R has to do with the efforts of the authorities to enforce
the law, which are significantly more expensive and require many more means than
increasing P. A common example of such efforts is increasing the number of police
officers and their presence, which naturally requires society to expend greater resources.
Prima facie, the choice between increasing P or R may be settled simply in favor
of increasing P because it is cheaper, simpler, and does not require many resources.
But modern criminological research points out that increasing the risk factor is
much more effective in preventing delinquency than increasing the punishment.13
Both factors increase the expected value of punishment, but the more significant of
the two is the risk factor, which plays the most important role in the offender's considerations of whether or not to commit the offense.
There are many examples to substantiate this argument. For instance, when
municipal workers are on strike and do not write tickets for illegal parking, most
drivers park their cars without paying or in prohibited places. Furthermore, if the
factors are compared, it would be found that for most individuals the value of the
punishment is insignificant compared to the value of the risk. Consider the driver
who knows that if he is caught exceeding the speed limit, he will pay a fine of $100

12
Gabriel Hallevy, The Recidivist Wants to Be Punished – Punishment as an Incentive to
Re-offend, 5 INT’L J. OF PUNISHMENT & SENTENCING 124 (2009).
13
SUSAN EASTON AND CHRISTINE PIPER, SENTENCING AND PUNISHMENT: THE QUEST FOR JUSTICE 124–126
(2nd ed., 2008); NIGEL WALKER, WHY PUNISH? (1991); ANDREW VON HIRSCH, ANTHONY E. BOTTOMS
AND ELIZABETH BURNEY, CRIMINAL DETERRENCE AND SENTENCE SEVERITY (1999); Daniel Nagin,
General Deterrence: A Review of the Empirical Evidence, DETERRENCE AND INCAPACITATION:
ESTIMATING THE EFFECTS OF CRIMINAL SANCTIONS ON CRIME RATES 95 (Alfred Blumstein, Jacqueline
Cohen and Daniel Nagin eds., 1978); MARGERY FRY, ARMS OF THE LAW 76 (1951).

and be on record with the registry of motor vehicles. The authorities examine two
options: (a) increasing P and decreasing R, and (b) increasing R and decreasing P.
In the first option the fine is raised to $1,000, but all policemen, speed traps, and
cameras are removed from the road. It is likely that most drivers will drive faster
because the risk of being caught has become significantly lower. In the second
option the fine is lowered to $10, but at every 100 yards there is a police officer
operating a speed trap. Most likely drivers will slow down because the risk of being
caught has increased significantly.
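The same arithmetic can be applied to the two enforcement options just described. In the hypothetical sketch below (Python), the baseline capture risk and the risk figures attached to each option are assumed purely for illustration; only the fine amounts come from the example above.

```python
# Hypothetical comparison of the two enforcement options for speeding.
# The capture probabilities are assumed for illustration only.

def expected_cost(fine: float, risk: float) -> float:
    """Expected value of the punishment for a single violation (P * R)."""
    return fine * risk

baseline = expected_cost(fine=100, risk=0.20)     # current regime, assumed 20% capture risk
option_a = expected_cost(fine=1_000, risk=0.001)  # fine raised, enforcement removed
option_b = expected_cost(fine=10, risk=0.95)      # fine lowered, near-certain capture

print(baseline, option_a, option_b)  # 20.0 1.0 9.5
```

Note that option (b) does not even maximize the expected cost in purely arithmetical terms; the point made in the text is the empirical one, namely that the near-certainty of being caught shapes drivers' behavior far more strongly than the nominal size of the fine.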
Historically and empirically it has been shown that there is a sharp increase in
delinquency whenever the risk of being caught is lowered, but no significant
decrease in delinquency when punishments become harsher. This conclusion is
borne out by Wolpin’s research, carried out over 73 years, between 1894 and
1967.14 Other studies have pointed to the same phenomenon in different locations, for example during the policemen's strike in 1923 in Melbourne, Australia,15 the policemen's strike in 1919 in Liverpool, England, and after the arrest in 1944 of the Danish policemen by the Nazi authorities for assisting the local resistance to enable Danish Jews to escape to Sweden.16
These studies show that the most dominant factor in increasing the rate of
deterrence is related to law enforcement rather than to severity of punishment.
Law enforcement, in this context, has to do with an increase in the offender’s risk of
being caught, with immediate action on the part of the authorities in activating the
criminal process, and with the certainty that punishment will be imposed.17 At the
same time, punishments that are too lenient decrease the deterrence significantly
because the offender does not experience the value of the negative incentive even if
he is caught by the authorities.
The personal character of the offender naturally plays an important role in
considering deterrence. Even if the expected value of the punishment (P·R) is
lower than the expected value of the benefits, this is not necessarily an adequate
incentive for delinquency. For prudent offenders (risk haters) a significant gap
between the values would be needed to provide them with an incentive to offend.
For other offenders (risk lovers) a situation in which the expected value of the
benefit exceeds that of the punishment would be considered adequate to offend.
It appears, therefore, that the right combination of a proper rate of punishment
and proper risk of capture can form an optimal value for individual deterrence. But
deterrence as a general purpose of punishment focuses on punishment and sentenc-
ing, not on the methods of law enforcement. The punishment factor itself is crucial
for achieving deterrence, but its highest effectiveness is achieved only when it is
combined with a proper risk of the offender being captured.

14
JAMES Q. WILSON, THINKING ABOUT CRIME 123–142 (2nd ed., 1985).
15
Laurence H. Ross, Deterrence Regained: The Cheshire Constabulary’s “Breathalyser Blitz”,
6 J. LEGAL STUD. 241 (1977).
16
STEPHAN HURWITZ, CRIMINOLOGY 303 (1952).
17
Easton and Piper, supra note 13, at pp. 124–126.

A distinction is drawn between individual deterrence (or "special" deterrence) and public deterrence (or "general" deterrence). The principal basis for deterrence as a general purpose of punishment is individual deterrence, in which society
regards the individual offender as a potential perpetrator of further offenses if the
benefit from committing the offense is greater than the punishment. In this sense,
individual deterrence serves to direct behavior in order to prevent the commission
of further offenses. But whereas individual deterrence focuses on the offender who
has already offended, public deterrence focuses on potential offenders who are not
related, directly or indirectly, to the actual commission of any given offense.
Public deterrence, in its modern sense, has been part of deterrence since the beginning of the nineteenth century,18 as reflected both in judicial rulings and in the legislation of the nineteenth-century penal codes in Europe.19 The main justification for public deterrence at that time was that it could inculcate moral values
into the public by imposing punishment on individuals.20 This justification, how-
ever, was problematic. The assimilation of moral values by society requires their
understanding and solidarity with them, not intimidation. Intimidation through
deterrence is much more akin to training (e.g., dressage) than to inculcating
morality. Dressage requires neither understanding nor solidarity.21
Classic punishments lead to public fear of being punished, not necessarily to the
assimilation of moral values. After a mouse in the laboratory touches an electrode and receives an electric shock, it will not touch it again, but no moral value is involved in this decision. When the public is truly terrified of being punished, it is assumed not to
commit offenses, but not necessarily to feel solidarity with the social or moral
values of the offense. Empirical research shows, however, that the effectiveness of public deterrence is extremely limited, if it exists at all. It is also difficult to measure public
deterrence because it cannot be isolated from other social factors affecting the
public’s behavior.22
An assumption of public deterrence is that the public is aware of the punishment
imposed upon individuals, and consequently would not wish to experience that

18
PAUL JOHANN ANSELM FEUERBACH, LEHRBUCH DES GEMEINEN IN DEUTSCHLAND GÜLTIGEN PEINLICHEN
RECHTS 117 (1812, 2007).
19
Johannes Andenaes, The General Preventive Effects of Punishment, 114 U. PA. L. REV. 949, 952
(1966).
20
Johannes Andenaes, The Morality of Deterrence, 37 U. CHI. L. REV. 649 (1970).
21
JEFFRIE G. MURPHY, GETTING EVEN: FORGIVENESS AND ITS LIMITS (2003); Jeffrie G. Murphy,
Marxism and Retribution, 2 PHILOSOPHY AND PUBLIC AFFAIRS 43 (1973).
22
Dan M. Kahan, Between the Economics and Sociology: The New Path of Deterrence, 95 MICH
L. REV. 2477 (1997); Neal Kumar Katyal, Deterrence’s Difficulty, 95 MICH. L. REV. 2385 (1997);
Jonathan S. Abernethy, The Methodology of Death: Reexamining the Deterrence Rationale,
27 COLUM. HUM. RTS. L. REV. 379 (1996); Craig J. Albert, Challenging Deterrence: New Insights
on Capital Punishment Derived from Panel Data, 60 U. PITT. L. REV. 321 (1999); James
M. Galliher and John F. Galliher, A “Commonsense” Theory of Deterrence and the “Ideology”
of Science: The New York State Death Penalty Debate, 92 J. CRIM. L. & CRIMINOLOGY 307 (2002);
Andrew D. Leipold, The War on Drugs and the Puzzle of Deterrence, 6 J. GENDER RACE & JUST.
111 (2002).

punishment. Public deterrence is aimed both at potential offenders who have
never been caught and have never been punished, and at those who have already
experienced criminal proceedings. The general concept of public deterrence is that
individuals are capable of learning from the experience of others and not only from
self-experience. Thus, it is assumed that if the media publicizes criminal verdicts,
the public will tend to avoid delinquency out of the fear of being punished. But not
all verdicts are publicized, and not all individuals are capable of understanding the
verdicts or have access to them.
Despite all of the above, the most important difficulty in public deterrence is its
contradiction with the principle of personal liability, which is one of the fundamen-
tal principles of criminal law.23 According to this principle, the offender can be
punished only for his own behavior, never for the behavior of other persons,
including the potential behavior of other persons. Thus, the society may impose a
punishment on the individual to inflict suffering on him for what he did (retribu-
tion), to deter him personally from recidivism (deterrence), to rehabilitate him
(rehabilitation), and to disable his delinquent capabilities (incapacitation), but not
in order to deter other persons from committing the same offense.
For example, let us assume that the common punishment for commission of
robbery under certain circumstances is 4 years of imprisonment, and that this would
be the punishment in a specific case if the court did not consider public deterrence.
But if the court were to consider public deterrence, it may impose 8 years of
imprisonment only to deter the public. Is it justified to punish the individual doubly
for the sake of public deterrence when half of the punishment already satisfies the
purposes of punishment, including those of individual deterrence?
This raises the question of the legitimacy of the public deterrence. The question
is an acute one because the public is not necessarily aware of legal proceedings of this type, and even if it were, it may not have sufficient legal knowledge to fully understand the legal meaning of a given punishment. Moreover, individuals are
required to pay a heavy price in order to produce a short-lived deterrence in the
public. In the example above, in order to provide a deterrent for some individuals
who may spend a few minutes reading a short article in the local newspaper, the
offender must serve four additional years in prison. Is it fair? And is it legitimate?
Public deterrence may be consistent with the principle of personal liability,
however, if it becomes only an incidental consequence of the punishment imposed
on the individual. When the court does not aim the punishment at deterring the
public, but the public is nevertheless deterred by the punishment, the deterrence is
legitimate. When, however, the court aims the punishment ex ante at deterring the
public, it is illegitimate, whether the public is actually deterred or not. It is not
legitimate for the court to use the individual instrumentally merely to deter the

23
For the principle of personal liability in criminal law see GABRIEL HALLEVY, THE MATRIX OF
DERIVATIVE CRIMINAL LIABILITY 1–61 (2012).

public.24 The individual has the right to be punished for his behavior and not for the
purpose of deterring other people.
The individual’s right to be punished cannot tolerate the instrumental use of the
individual for purposes of deterring others. If a deserved punishment is imposed on
the offender, and one of the incidental consequences of the punishment is that the
public is deterred, the individual pays no additional price for the deterrence of the
public, and public deterrence may be considered legitimate under these
circumstances. Therefore, punishment in criminal law must always be personal
and focused on the individual. It may have public consequences, but these cannot be
deliberate or included in the purposes of punishment.
In general, deterrence is a prospective general purpose of punishment. Deterrence is not intended to address the offense that has already been committed, only to
prevent the commission of further offenses. The offense already committed serves
deterrence only as the initial trigger for activating the criminal process, including
sentencing and punishment. This trigger may serve as an indication of the required
measures needed to intimidate or deter the offender from committing further
offenses. Consequently, deterrence is not intended to repair the social harm that
has already been caused by the commission of the offense.
The purpose of deterrence is to provide an answer to the potential social
endangerment embodied in the offender’s behavior.25 It is assumed that through
punishment it is possible to prevent the commission of further offenses, although
the already committed offense cannot be changed. Thus, deterrence is aimed at the
future and not at the past, and it focuses on the prevention of recidivism. The major
role of deterrence is the creation of a better future, free of repeated offending.
Focusing on the past is the role of retribution. Deterrence accepts the fact that the
past is beyond change.
With deterrence in view, it is possible to impose identical punishments on two
offenders who have committed offenses of different severity. If the danger of
recidivism to society is identical for both offenders, identical means can serve the
purpose of preventing recidivism regardless of the severity of the already
committed offenses. The social harm caused by the offenses is immaterial for
deterrence (although it is most significant for retribution).
Because deterrence is affected by the personal character of the offender, there is
a chance that the offender is punished for his personal character and not for any
behavior that occurred in the past.26 Punishing a person for his personal character is
problematic in modern criminal law because it represents punishment for personal

24
Antony Robin Duff and David Garland, Introduction: Thinking about Punishment, A READER ON
PUNISHMENT 1, 11 (Antony Robin Duff and David Garland eds., 1994).
25
Easton and Piper, supra note 13, at pp. 124–126.
26
LEON RADZINOWICZ AND ROGER HOOD, A HISTORY OF ENGLISH CRIMINAL LAW AND ITS ADMINISTRA-
TION FROM 1750 VOL. 5: THE EMERGENCE OF PENAL POLICY (1986).

status, regardless of behavior, which is prohibited.27 Modern criminal law prefers
punishing for behavior (in rem) rather than for personal status (in personam).
Because of all the above-mentioned limitations, deterrence cannot function as
the sole purpose of punishment. To formulate a fair punishment that also provides
an adequate and satisfactory solution to the various problems raised by punishment
and sentencing, deterrence must be balanced and completed by other purposes of
punishment. Combining deterrence with retribution can provide a solution to
problems both prospectively and retrospectively. But deterrence alone may not
necessarily exhaust all the required prospective aspects of punishment.
Deterrence is indeed a prospective purpose of punishment, but it relates to only
one aspect: the prevention of further delinquency. Deterrence does so by creating
fear and intimidation of expected punishment, including fear of the criminal
process itself, which includes humiliation, loss of time, money, etc. Deterrence
does not address the substantive problems that have led the offender to delinquency,
nor does it pretend to ensure the physical prevention of further delinquency, as it
focuses on mental intimidation. If the substantive problems are acute and remain
unsolved, and mental intimidation is not effective, the result may be that deterrence
is ineffective even prospectively, as it is substantively not different from dressage
through intimidation.28
Thus, deterrence is balanced and completed by retribution retrospectively, and it
is balanced and completed by rehabilitation and incapacitation prospectively.
Rehabilitation focuses on the substantive problems that have led the offender to
delinquency, and incapacitation is concerned with the actual physical prevention of
further delinquency.

6.1.3 Rehabilitation

The general assumption behind rehabilitation as a general purpose of punishment is that the offender commits the offense because of certain reasons (social, economic,
mental, behavioral, physical, etc.) or under certain circumstances (social, eco-
nomic, mental, behavioral, physical, etc.), and that proper treatment of these
reasons and circumstances may prevent further delinquency. Rehabilitation is a
prospective purpose of punishment as it relates only to the future. From the point of
view of the offender and of society, rehabilitation does not address the offense
already committed but only further offenses. Rehabilitation is not relevant to the
past, as no offender can be rehabilitated retroactively, and its aim is only the
prevention of recidivism. The offense that has been committed serves only as the
initial trigger for initiating the process of rehabilitation, but is not addressed directly
by that process.

27
MIRKO BAGARIC, PUNISHMENT AND SENTENCING: A RATIONAL APPROACH (2001).
28
Jeffrie G. Murphy, Marxism and Retribution, 2 PHILOSOPHY AND PUBLIC AFFAIRS 43 (1973).

Although both rehabilitation and deterrence are prospective purposes of punishment, and both are aimed at preventing recidivism, they are substantively different.
The purpose of rehabilitation is to treat the internal roots of the problem that has led
the offender to delinquency, whereas deterrence treats only the external symptoms
of delinquency. For example, in attempting to prevent recidivism on the part of an
offender who uses prohibited drugs, deterrence tries to intimidate the offender with
the prospect of the expected punishment if he reoffends, regardless of the real
problems that have led him to use drugs.
By contrast, rehabilitation attempts to understand and explore the reasons behind
the offender’s use of prohibited drugs. Understanding these reasons dictates the
treatment and rehabilitation applied to the offender to prevent further drug delin-
quency. If the offender uses drugs because of physical addiction, the appropriate
treatment programs include weaning. The social benefit expected from the success-
ful treatment is the prevention of further drug use by the offender. If the right
treatment is successful in preventing recidivism, there is no need for further
prospective punishments.
Treatment and rehabilitation programs, however, are not appropriate for all
offenders. Matching the right program, if it exists, to the offender depends on
many factors, including the offender’s personality and social characteristics. This
was one of the lessons learned from the failure of the rehabilitation programs before
the 1970s. Therefore, as a first step, the court must examine the rehabilitation
potential of the offender, then decide on enrollment in a rehabilitation program if
a program that matches the offender’s personality and social characteristics is
available.29
The court may seek the assistance of professionals to examine and assess the
offender’s rehabilitation potential. Usually, these professionals belong to the fields
of social work, medicine, and social sciences, including behavioral sciences (e.g.,
psychology and criminology). In many legal systems the assessment is carried out
by the probation service at the request of the court. The offender’s rehabilitation
potential indicates his inner capability to rehabilitate under the given circumstances
and leave the sphere of delinquency.
This potential may be examined from various perspectives, but the two main
factors considered by the courts are the offender’s personality and social
characteristics. After the offender’s rehabilitation potential is assessed, the court
may order appropriate treatment or a rehabilitation program. In most cases, the
court order includes the recommendations of the professionals and of the probation
service, although an appropriate rehabilitation program that matches the offender's data may not always be available.
The offense itself can also affect the assessment, although not as a decisive
factor. For example, from the point of view of assessing the offender’s

29
Gabriel Hallevy, Therapeutic Victim-Offender Mediation within the Criminal Justice Process –
Sharpening the Evaluation of Personal Potential for Rehabilitation while Righting Wrongs under
the Alternative-Dispute-Resolution (ADR) Philosophy, 16 HARV. NEGOT. L. REV. 65 (2011).

rehabilitation potential, there is a difference between a murderer with a psychopathic character and one who committed the murder in response to years of oppression perpetrated by the victim. The psychopathic murderer, who does not
wish to assimilate the wrongfulness of his behavior and acts in cold blood, has an
extremely low personal potential for rehabilitation. By contrast, the oppressed
murderer, who was not able to bear the continuous oppression, may be treated to
channel his rage to legal paths in order to deal with oppression. Based on the
assessed potential for rehabilitation, the offender is matched with an appropriate
program, if one is available.
The question of the legitimacy of assessing the offender’s rehabilitation poten-
tial within the sentencing process may arise. It may be argued that the court
punishes the offender not for the offense but for his personality and personal
characteristics. But because rehabilitation is a prospective purpose of punishment,
it cannot focus on the offense, which has already been committed in the past, in the
same way as retribution does. The offender’s delinquent behavior in the past may
affect rehabilitation, but it cannot play the main role in it. Therefore, offenders who
committed severe offenses can be rehabilitated as well as those who committed
light offenses.
Assessing the offender’s personal rehabilitation potential requires a deep under-
standing of the motives and factors that led the offender to commit the offense, and
to delinquency in general. These factors and motives may be external to the
offender (e.g., social, economic, environmental, etc.) or internal (e.g., mental,
behavioral, valent, etc.). Treatment of these factors and of the ensuing problems
is at the focus of the rehabilitation process. In the modern approach, understanding
these motives and factors is the key to the effectiveness of rehabilitation, and any
rehabilitation program must be matched to the individual offender.30
Rehabilitation programs vary in different societies at different times owing to
scientific and social developments. They may incorporate changes in existing
punishments or may create entirely new ways of punishing. For example, indeter-
minate sentences were common in the U.S. until the 1970s. In an indeterminate
sentence, the court sets the upper and lower limits of the imprisonment (e.g.,
between 3 and 6 years), but the final date of release from prison is determined by
the release committee based on the prisoner’s personal and social progress in
employment, professional training, reduction in the level of violence, etc.31
The indeterminate sentence was an adaptation of the existing punishment of
imprisonment and a product of the rehabilitation purpose of punishment. But this
punishment was proven to be inefficient in preventing recidivism, and since the

30
DAVID ABRAHAMSEN, CRIME AND THE HUMAN MIND (1945); ELMER H. JOHNSON, CRIME, CORRECTION
AND SOCIETY 44–439 (1968); WILLIAM C. MENNINGER, PSYCHIATRIST TO A TROUBLED WORLD (1967).
31
JOHN LEWIS GILLIN, CRIMINOLOGY AND PENOLOGY 708 (1927); Paul W. Tappan, Sentences for Sex
Criminals, 42 J. CRIM. L. CRIMINOLOGY & POLICE SCI. 332 (1951).

1970s the courts have used it sparingly.32 Probation, another new punishment
created by rehabilitation, is still being used in most developed countries, but
much more carefully than before.
At the beginning of the twenty-first century, the dominant trend in the use of
rehabilitation as a general purpose of punishment is to instill cognitive and social
qualifications in the offenders that would enable them to deal with the external and
internal factors that led them to delinquency.33 These qualifications are internal
tools the offender is expected to use in order to face factual reality without turning
to delinquency and to carry out a conscious internal change with respect to both the
external and internal factors mentioned above. The aim is to change the
rehabilitated offender’s outlook in the aspects relevant to delinquency.34
Rehabilitation can offer an opportunity to the offender to undergo a process of
re-socialization and to reintegrate into society in a way that does not involve
delinquency. It may be difficult for legal practitioners to identify rehabilitation as
a general purpose of punishment because it emphasizes the correction of the
offender and not the suffering involved in the punishment. But rehabilitation is a
general purpose of punishment because the rehabilitation process and the punish-
ment are integrated, and the punishment is the trigger that initiates the rehabilitation
program.35
At times, the involvement of the community and of the social circles close to the
offender (e.g., family, friends, teachers, etc.) is required to complete the

32
Robert W. Kastenmeier and Howard C. Eglit, Parole Release Decision-Making: Rehabilitation,
Expertise and the Demise of Mythology, 22 AM. U. L. REV. 477 (1973); JESSICA MITFORD, KIND AND
USUAL PUNISHMENT: THE PRISON BUSINESS (1974).
33
DAVID P. FARRINGTON AND BRANDON C. WELSH, PREVENTING CRIME: WHAT WORKS FOR CHILDREN,
OFFENDERS, VICTIMS AND PLACES (2006); LAWRENCE W. SHERMAN, DAVID P. FARRINGTON, DORIS
LEYTON MACKENZIE AND BRANDON C. WELSH, EVIDENCE-BASED CRIME PREVENTION (2006); ROSEMARY
SHEEHAN, GILL MCLVOR AND CHRIS TROTTER, WHAT WORKS WITH WOMEN OFFENDERS (2007); Laaman
v. Helgemoe, 437 F.Supp. 269 (1977); Secretary of State for the Home Department, [2003]
E.W.C.A. Civ. 1522, [2003] All E.R. (D) 56; Secretary of State for Justice, [2008]
E.W.C.A. Civ. 30, [2008] All E.R. (D) 15, [2008] 3 All E.R. 104; Anthony E. Bottoms, Empirical
Research Relevant to Sentencing Frameworks: Reform and Rehabilitation, PRINCIPLED SENTENCING:
READINGS ON THEORY AND POLICY 16 (Andrew von Hirsch, Andrew Ashworth and Julian Roberts
eds., 3rd ed., 2009); Peter Raynor, Assessing the Research on ‘What Works’, PRINCIPLED SENTENC-
ING: READINGS ON THEORY AND POLICY 19 (Andrew von Hirsch, Andrew Ashworth and Julian
Roberts eds., 3rd ed., 2009); Francis T. Cullen and Karen E. Gilbert, Reaffirming Rehabilitation,
PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY 28 (Andrew von Hirsch, Andrew
Ashworth and Julian Roberts eds., 3rd ed., 2009); Andrew von Hirsch and Lisa Maher, Should
Penal Rehabilitation Be Revived?, PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY
33 (Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009).
34
Richard P. Seiter and Karen R. Kadela, Prisoner Reentry: What Works, What Does Not, and
What Is Promising, 49 CRIME AND DELINQUENCY 360 (2003); Clive R. Hollin, Treatment Programs
for Offenders, 22 INT’L J. OF LAW & PSYCHIATRY 361 (1999).
35
Francis A. Allen, Legal Values and the Rehabilitative Ideal, 50 J. CRIM. L. CRIMINOLOGY &
POLICE SCI. 226 (1959); LIVINGSTON HALL AND SHELDON GLUECK, CRIMINAL LAW AND ITS ENFORCE-
MENT 18 (2nd ed., 1958); Edward Rubin, Just Say No to Retribution, 7 BUFF. CRIM. L. REV.
17 (2003).

rehabilitation process. In general, when the offender's rehabilitation potential is reasonably high, the social efforts of the community and of the social circle close to
the offender are considered essential for the success of the rehabilitation process.36
As a result of this social concept of rehabilitation, it has been accepted that the court
needs all the relevant information in order to create a wide factual view of the
offender’s individual case. This information relates to both internal and external
factors that have led the offender to delinquency.
In general, rehabilitation is a prospective general purpose of punishment. Reha-
bilitation is not intended to deal with the offense that has already been committed,
only to prevent the commission of further offenses. The offense already committed
serves rehabilitation only as the initial trigger that activates the criminal process,
including sentencing and punishment. This trigger may assist in identifying the
measures needed to rehabilitate and treat the offender. Thus, rehabilitation is not
intended to repair the social harm that has already been caused by the commission
of the offense.
The purpose of rehabilitation is to provide a solution to the potential social
endangerment embodied in the offender’s behavior. It is assumed that punishment
is able to prevent the commission of further offenses, although the offense already
committed cannot be changed. Rehabilitation, therefore, is oriented toward the
future, not toward the past, and it focuses on the prevention of recidivism. The
primary function of rehabilitation is the creation of a better future, free from
reoffending. Focus on the past is the domain of retribution, whereas rehabilitation
accepts the fact that the past is beyond change.
From the point of view of rehabilitation, it is possible to impose identical
punishments on two offenders who committed offenses of vastly different severity.
If the personal rehabilitation potential of the offenders is identical and the internal
and external factors that led to delinquency are identical, it is generally reasonable
to use the same rehabilitative treatment for both even if they committed different
offenses and irrespective of the severity of the offenses committed. The social harm
caused by the offenses is immaterial for rehabilitation, although it is most signifi-
cant for retribution.
Because rehabilitation is affected by the personality of the offender and by his
personal and social characteristics, when the purpose of punishment is rehabilita-
tion the offender is being punished for his personality and not for his past behav-
ior.37 In modern criminal law, punishing a person for his personality is problematic
because punishing for personal status is prohibited, regardless of behavior.38

36
Andrew Ashworth, Rehabilitation, PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY 1, 2
(Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009); PETER RAYNOR AND
GWEN ROBINSON, REHABILITATION, CRIME AND JUSTICE 21 (2005); SHADD MARUNA, MAKING GOOD:
HOW CONVICTS REFORM AND BUILD THEIR LIVES (2001); STEPHEN FARRALL, RETHINKING WHAT WORKS
WITH OFFENDERS: PROBATION, SOCIAL CONTEXT, AND DESISTANCE FROM CRIME (2002).
37
LEON RADZINOWICZ AND ROGER HOOD, A HISTORY OF ENGLISH CRIMINAL LAW AND ITS ADMINISTRA-
TION FROM 1750 VOL. 5: THE EMERGENCE OF PENAL POLICY (1986).
38
MIRKO BAGARIC, PUNISHMENT AND SENTENCING: A RATIONAL APPROACH (2001).

Modern criminal law prefers punishing for behavior (in rem) rather than for
personal status (in personam). As a result, rehabilitation cannot serve as the sole
consideration or purpose of punishment, and can only be complementary to the
other purposes of punishment.
Deterrence is also a prospective purpose of punishment, but it relates to another
aspect of delinquency prevention. Deterrence is intended to prevent recidivism
through intimidation. The means that prevents reoffending is the offender’s fear
of the potential punishment. Deterrence does not consider the substantive reasons
and roots of delinquency of the offender, and thus it is not intended to solve these
problems, but only to handle their external symptoms expressed by the commission
of the offense. By contrast, rehabilitation is designed to address these problems.
Nevertheless, rehabilitation does not provide solutions to all prospective
problems of delinquency: it is not intended to eliminate the physical factors that
lead to delinquency or to solve the various types of social risk associated with the
offender. Moreover, the internal cognitive change in the offender is not always
sufficiently powerful to prevent reoffending. Furthermore, the reasons for delin-
quency are not always internal. For example, when the reasons for delinquency are
physical (e.g., chemical imbalance, genetic problems, etc.) or mental (e.g., mental
impairment that cannot be treated without medication), rehabilitation is likely to be
irrelevant and ineffective despite the fact that it is a prospective purpose of
punishment.39
Thus, whereas rehabilitation is balanced and completed by retribution as a
retrospective purpose of punishment, deterrence and incapacitation balance and
complete rehabilitation as prospective purposes. Deterrence focuses on the social
risk associated with the offender and incapacitation focuses on the physical preven-
tion of further delinquency.

6.1.4 Incapacitation

Incapacitation is considered to be a modern general purpose of punishment. It is based on the assumption that at times society has no other option to protect itself
from delinquency than physically preventing the offender from reoffending. Physi-
cal prevention takes the form of incapacitating the physical (bodily) capabilities of
the offender to commit the offense. The preventive means can vary according to the
type of offense that must be prevented and according to the physical capabilities of
the offender.
These means can include capital penalty, long-term incarceration, the amputa-
tion of limbs, exile, castration, chemical castration, etc. For example, the

39
Martin P. Kafka, Sex Offending and Sexual Appetite: The Clinical and Theoretical Relevance of
Hypersexual Desire, 47 INT’L J. OF OFFENDER THERAPY AND COMPARATIVE CRIMINOLOGY 439 (2003);
Matthew Jones, Overcoming the Myth of Free Will in Criminal Law: The True Impact of the
Genetic Revolution, 52 DUKE L. J. 1031 (2003); Sanford H. Kadish, Excusing Crime, 75 CAL.
L. REV. 257 (1987).

assumption is that a sex offender who commits his offenses because of endocrino-
logical problems (hormonal imbalance) can achieve the necessary balance through
chemical treatment, and that a property offender can be prevented from committing
further property offenses if his hands are cut off.
Incapacitation is a prospective purpose of punishment because it relates only to
the future. From the point of view of the offender and of society, incapacitation does
not address the offense already committed, only future offenses. Incapacitation is
irrelevant for the past because no offender can be incapacitated retroactively.
Consequently, the purpose of incapacitating the individual is always to prevent
the commission of further offenses in the future, in other words, to prevent recidi-
vism. The offense that has already been committed serves only as the initial trigger
for initiating the process of incapacitation, but it is not treated by that process.
Although incapacitation, rehabilitation, and deterrence are all prospective
purposes of punishment, and all three are intended to prevent recidivism, they are
substantively different. Rehabilitation and deterrence are designed to create an
internal conscious change within the offender’s mind to prevent the offender
from committing further offenses. Rehabilitation is aimed at achieving the same
end by addressing the roots of the delinquency, and the purpose of deterrence is to
deal with the external symptoms of delinquency, as noted above. By contrast,
incapacitation does not operate through internal conscious changes but by the
physical prevention of further delinquency.
As far as incapacitation is concerned, it is immaterial whether or not the offender
has internally assimilated the social value of avoiding delinquency, has been
deterred from delinquency, has been rehabilitated, or wishes to commit any further
offense. Incapacitation is effective even when the offender feels no solidarity with
the social values of delinquency prevention and even if he still exhibits an extreme
desire to commit further offenses.40 Incapacitation operates at two levels: breaking
the linkage between the offender and the opportunity to commit further offenses,
and disabling the offender’s physical ability to reoffend.
Developments in incapacitation as a general purpose of punishment in the
twentieth century have led to the creation of three general circles of incapacitation:

(a) the incapacitation circle of the entire society;
(b) the incapacitation circle of populations at risk; and
(c) the incapacitation circle of the offender.

Incapacitation as a general purpose of punishment relates only to the third circle.41

40
Ledger Wood, Responsibility and Punishment, 28 AM. INST. CRIM. L. & CRIMINOLOGY 630, 639
(1938).
41
GERALD CAPLAN, PRINCIPLES OF PREVENTIVE PSYCHIATRY (1964).

The incapacitation circle of the entire society refers to the efforts of society to
prevent delinquency in general by preventing opportunities to offend. The assump-
tion is that the number of offenses committed is lower if there are fewer
opportunities to offend.42 The psychological assumption of this circle is that
individuals tend to offend if they have the opportunity to do so without being
subsequently punished. Similar assumptions apply to deterrence as well.
This incapacitation circle is manifest mostly in the use of means of defense
against delinquency (e.g., the use of alarms to prevent theft), in methods used to
increase the offenders’ risk of being captured (e.g., increased police presence), in
means used to reduce the benefits derived from delinquency (e.g., preventing the
use of stolen credit cards), in means used to reduce the will to offend (e.g.,
prohibition against incitement to violence), and in the use of means to clarify
expected behavior and increase awareness of it (e.g., posting traffic signals on the
roads).43
The incapacitation circle of populations at risk relates to the efforts of society to
prevent delinquency in populations who have a higher potential to offend. This
circle is based on the capability to predict delinquency based on the general social
characteristics of relevant populations.44 In most countries these populations are
specific and include general socio-economic characteristics that make delinquency
more accessible and more desired, for example, juveniles at risk and neighborhoods
that have a high rate of convicted offenders as residents.
This circle of incapacitation operates primarily by increasing the monitoring of
the relevant populations and channeling the activities of potential offenders toward
positive purposes,45 for example by increased presence of law-enforcement
authorities in the relevant locations and the establishment of social or community
frameworks for leisure-time activities.
The circle of incapacitation of the offender involves the efforts of society to
prevent recidivism. Incapacitation as a general purpose of punishment refers only to
this circle. The objects of the other two circles are not actual but potential offenders,
and therefore punishment is irrelevant for them. Incapacitation within the third
circle becomes applicable when the other two circles of incapacitation have failed to

42
MARCUS FELSON, CRIME AND EVERYDAY LIFE: INSIGHTS AND IMPLICATIONS FOR SOCIETY 17, 95,
109, 120 (1994).
43
RONALD V. CLARKE, SITUATIONAL CRIME PREVENTION: SUCCESSFUL CASE STUDIES (1992); Ronald
V. Clarke and Derek B. Cornish, Modeling Offenders’ Decisions: A Framework for Policy and
Research, 6 CRIME AND JUSTICE: AN ANNUAL REVIEW OF RESEARCH 147 (1985).
44
Don M. Gottfredson, Assessment and Prediction Methods in Crime and Delinquency, PRESIDENTS
NATIONAL COMMISSION FOR LAW ENFORCEMENT AND ADMINISTRATION OF JUSTICE, TASK FORCE REPORT:
JUVENILE DELINQUENCY AND YOUTH CRIME (1967); Joan Petersilia and Peter W. Greenwood, Man-
datory Prison Sentences: Their Projected Effects on Crime and Prison Populations, 69 J. CRIM.
L. & CRIMINOLOGY 604 (1978).
45
JOHN W. HINTON, DANGEROUSNESS: PROBLEMS OF ASSESSMENT AND PREDICTION (1983); JOHN
MONAHAN, PREDICTING VIOLENT BEHAVIOR: AN ASSESSMENT OF CLINICAL TECHNIQUES (1981); PETER
GREENWOOD AND ALLAN ABRAHAMSE, SELECTIVE INCAPACITATION (1982).

prevent delinquency, and society must prevent reoffending. The failure of the other
two circles may be the result of ineffectiveness, inefficiency, or inactivity.
In contrast to rehabilitation and deterrence, which are focused on inner changes
in the offender, incapacitation focuses on the physical prevention of recidivism
either by breaking the linkage between the offender and the opportunity to offend
(e.g., through the object of delinquency, location, devices, etc.) or by neutralizing
the offender’s capability to reoffend. Absolute neutralization can take the form of
capital penalty, and in the case of certain offenses it can take the form of amputation
of limbs, including castration, or chemical castration.46
In legal systems in which these punishments are allowed, they are used to
achieve absolute incapacitation of delinquent capabilities.47 In other legal systems
alternative punishments are used for the same purposes, despite their inability to
achieve absolute incapacitation. For example, long-term imprisonment removes the
offender from society and reduces the offender’s opportunities for delinquent
activity, but offenses can also be committed in prison as well as after release,
when the offender has greater experience and perhaps more incentive to reoffend
(e.g., because of the economic difficulties of the family due to the imprisonment,
the loss of certain social qualifications, association with other offenders, etc.).48
Prison authorities may be assisted by a system of release committees in
predicting the chances of recidivism after release,49 but not necessarily in
eliminating them. Long-term imprisonment may reduce the risk of recidivism,
but it cannot ensure the incapacitation of the offender’s delinquent capabilities.50
The choice of the most appropriate means to incapacitate the offender’s delinquent
capabilities is a social choice, based on the values of any given society. There are
difficulties in assessing the chances that the offender will reoffend because the
prediction is based on the offender’s criminal record51 and on other personal

46
JACK P. GIBBS, CRIME, PUNISHMENT AND DETERRENCE 58 (1975).
47
BARBARA HUDSON, UNDERSTANDING JUSTICE: AN INTRODUCTION TO IDEAS, PERSPECTIVES AND
CONTROVERSIES IN MODERN PENAL THEORY 32 (1996, 2003).
48
Joseph Murray, The Effects of Imprisonment on Families and Children of Prisoners, THE
EFFECTS OF IMPRISONMENT 442 (Alison Liebling and Shadd Maruna eds., 2005); Shadd Maruna
and Thomas P. Le Bel, Welcome Home? Examining the “Reentry Court” Concept from a Strength-
Based Perspective, 4 WESTERN CRIMINOLOGY REVIEW 91 (2003).
49
Malcolm M. Feeley and Jonathan Simon, The New Penology: Notes on the Emerging Strategy of
Corrections and Its Implications, 30 CRIMINOLOGY 449 (1992); Andrew von Hirsch, Incapacitation,
PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY 75 (Andrew von Hirsch, Andrew
Ashworth and Julian Roberts eds., 3rd ed., 2009); ANDREW VON HIRSCH, PAST OR FUTURE CRIMES:
DESERVEDNESS AND DANGEROUSNESS IN THE SENTENCING OF CRIMINALS 176–178 (1985).
50
FRANKLIN E. ZIMRING AND GORDON J. HAWKINS, DETERRENCE: THE LEGAL THREAT IN CRIME CONTROL
(1973).
51
MARK H. MOORE, SUSAN R. ESTRICH, DANIEL MCGILLIS AND WILLIAM SPELLMAN, DEALING WITH
DANGEROUS OFFENDERS: THE ELUSIVE TARGET OF JUSTICE (1985).

characteristics.52 These difficulties have to do with the method used to make such
predictions and not with the substantial need for such assessment.53
Because in some cases the incapacitation of delinquent capabilities of offenders
may exceed the maximum penalty for a given offense, the maximum penalty limitation has become more flexible. In some legal systems, it has been permitted
to impose harsher punishments than specified in the offense if the court reaches the
conclusion that in this way it can protect society from recidivism.54 Moreover, in
some legal systems preventive detention is used after the offender finishes serving
his imprisonment term, if the court finds that the offender is still dangerous to the
society despite the fact that the punishment has been served in full.55
In some legal systems, the offender is restricted by the court after being released
from prison because he is assessed to be dangerous to society.56 Restrictions may
apply to specific places of residence (as in the case of sex offenders and pedophiles
who may be restricted from living close to their potential victims) or to certain
professions in which the offender may not engage. The offender may also be
required to undergo medical treatment, to meet with relevant professionals, not to
leave a certain territory, to report to the police periodically, and so on.57
At times, the incapacitating measures are not aimed at the offender but at society
at large. For example, the names and photographs of convicted offenders may be
published after their release from prison as a warning to the public to exercise
caution in dealing with these offenders. These preventive measures are used in

52
Anthony E. Bottoms and Roger Brownsword, Incapacitation and “Vivid Danger”, PRINCIPLED
SENTENCING: READINGS ON THEORY AND POLICY 83 (Andrew von Hirsch, Andrew Ashworth and
Julian Roberts eds., 3rd ed., 2009); Andrew von Hirsch and Andrew Ashworth, Extending
Sentences for Dangerousness: Reflections on the Bottoms-Brownsword Model, PRINCIPLED SEN-
TENCING: READINGS ON THEORY AND POLICY 85 (Andrew von Hirsch, Andrew Ashworth and Julian
Roberts eds., 3rd ed., 2009).
53
Andrew von Hirsch and Lila Kazemian, Predictive Sentencing and Selective Incapacitation,
PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY 95 (Andrew von Hirsch, Andrew
Ashworth and Julian Roberts eds., 3rd ed., 2009); Lila Kazemian and David P. Farrington,
Exploring Residual Career Length and Residual Number of Offenses for Two Generations of
Repeat Offenders, 43 J. OF RESEARCH IN CRIME AND DELINQUENCY 89 (2006).
54
ARNE LONBERG, THE PENAL SYSTEM OF DENMARK (1975); JEAN E. FLOUD AND WARREN YOUNG,
DANGEROUSNESS AND CRIMINAL JUSTICE (1981); LINDA SLEFFEL, THE LAW AND THE DANGEROUS
CRIMINAL (1977); Parole Board, [2003] U.K.H.L. 42, [2004] 1 A.C. 1.
55
W.H. HAMMOND AND EDNA CHAYEN, PERSISTENT CRIMINALS (1963); DAVID A. THOMAS, PRINCIPLES
OF SENTENCING 309 (1980); Lawrence Davidoff and John Barkway, Extended Terms of Imprison-
ment for Persistent Offenders, 21 HOME OFFICE RESEARCH BULLETIN 43 (1986); Andrew von Hirsch,
Prediction of Criminal Conduct and Preventive Confinement of Convicted Persons, 21 BUFF.
L. REV. 717 (1972).
56
See, e.g., article 104 of the Sexual Offences Act, 2003, c.42; article 227 of the Criminal Justice
Act, 2003, c.44; articles 98–101 of the Criminal Justice and Immigration Act, 2008, c.4; Richards,
[2006] E.W.C.A. Crim. 2519, [2007] Crim. L.R. 173.
57
Jonathan Simon, The Ideological Effect of Actuarial Practices, 22 LAW & SOCIETY REV.
771 (1988); Jonathan Simon, Megan’s Law: Crime and Democracy in Late Modern America,
25 LAW & SOCIAL INQUIRY 1111 (2000).

conjunction with other measures such as close monitoring of offenders who are still
considered to be dangerous to the public, despite having completed serving their
penalty. Common monitoring measures are police tracking or electronic bracelets
that enable the police to locate the offender at any time.58
The general justification for restricting the released offender beyond the period
of penalty specified for the given offense as part of incapacitation has to do with the
desire to protect society from the social danger caused by the offender. Substan-
tively, this is not different from the forcible hospitalization of mentally ill persons,
the quarantine imposed on individuals suffering from an infectious disease, revok-
ing the weapons license of persons convicted of violent offenses, or revoking the
driver’s license of epileptic individuals.59
Incapacitation as a general purpose of punishment is designed to physically
prevent the occurrence of further offenses, regardless of the harm actually caused to
society by the former offense. For example, an offender who attempts to commit an
offense but does not complete it because he is caught in the act is still considered
dangerous to society, although he has not caused any actual harm.60 From the point
of view of incapacitation, the harm already caused to society is immaterial, as
incapacitation is a prospective general purpose of punishment, similar to deterrence
and rehabilitation, as noted above.
In general, incapacitation is a prospective general purpose of punishment and it
is not intended to address the offense that has already been committed, only to
prevent the commission of future offenses. The offense already committed serves
incapacitation only as the trigger that initiates the criminal process, including
sentencing and punishment. This trigger may assist in specifying the measures
required to incapacitate the offender’s delinquent capabilities. Consequently, inca-
pacitation is not intended to provide a solution to the social harm that has already
been caused by the commission of the offense.
Incapacitation, however, is designed to deal with the physical capability of the
offender to reoffend. The assumption is that punishment can prevent recidivism.
The primary measures taken by incapacitation, as noted above, are breaking the
linkage between the offender and the opportunity to offend and eliminating the
offender’s physical capability to reoffend. In this way, incapacitation is oriented
toward the future rather than the past, and focuses on the prevention of recidivism.
The primary role of incapacitation is the creation of a better future, free from
reoffending. It is the role of retribution to focus on the past, whereas incapacitation
accepts the fact that the past is beyond change.

58
Joseph B. Vaughn, A Survey of Juvenile Electronic Monitoring and Home Confinement
Programs, 40 JUVENILE & FAM. C. J. 1 (1989).
59
NIGEL WALKER, PUNISHMENT, DANGER AND STIGMA: THE MORALITY OF CRIMINAL JUSTICE
ch. 5 (1980); Marvin E. Wolfgang, Current Trends in Penal Philosophy, 14 ISR. L. REV.
427 (1979).
60
GABRIEL HALLEVY, THE MATRIX OF DERIVATIVE CRIMINAL LIABILITY 75–83 (2012).

From the point of view of incapacitation, it can be plausible to impose identical punishments on two offenders who have committed offenses of different severity. If
the offenders’ delinquent capabilities and opportunities to reoffend are identical, it
is most reasonable to use the same incapacitation measures with respect to both,
although they committed different offenses, and regardless of the severity of the
offenses they have committed. The social harm caused by the offenses is immaterial
for incapacitation, although it is most significant for retribution.
Because incapacitation is affected by the personal characteristics of the offender
based on his delinquent capabilities, incapacitation may be punishing the offender
for his personal characteristics and not for a certain behavior that has taken place in
the past.61 Punishing a person for his personal characteristics is problematic in
modern criminal law because it represents punishing for personal status, regardless
of behavior, which is prohibited.62 Modern criminal law prefers punishing for
behavior (in rem) rather than for personal status (in personam). Consequently,
incapacitation cannot function as the sole consideration or purpose of punishment,
and it serves to complement other purposes of punishment.
Deterrence is also a prospective purpose of punishment, but it relates to another
aspect of prevention of delinquency. Deterrence is intended to prevent recidivism
by intimidation. The means that prevents reoffending is the offender’s fear of the
potential punishment. Deterrence does not consider the substantive reasons and roots of the offender's delinquency, and thus it is not intended to solve these problems; rehabilitation, by contrast, is intended to address them. Nevertheless, rehabilitation does not provide solutions to all prospective problems of delinquency.63 Rehabilitation is not intended to eliminate the
physical factors that lead to delinquency or to solve the various types of social
risk associated with the offender.
Moreover, the internal-cognitive change in the offender is not always suffi-
ciently powerful to prevent reoffending. Furthermore, the reasons for delinquency
are not always internal. For example, when the reasons for delinquency are physical
(e.g., unbalanced hormones, genetic problems, unbalanced chemistry in the body,
etc.) or mental (e.g., mental impairment that cannot be treated without medication),
rehabilitation and deterrence are likely to be irrelevant and ineffective despite the
fact that they are prospective purposes of punishment. In these cases, prevention of recidivism is achieved through incapacitation as a general purpose of punishment.64 In

61
Norval Morris, Incapacitation within Limits, PRINCIPLED SENTENCING: READINGS ON THEORY AND
POLICY 90 (Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009).
62
MIRKO BAGARIC, PUNISHMENT AND SENTENCING: A RATIONAL APPROACH (2001).
63
Herbert L. Packer, The Practical Limits of Deterrence, CONTEMPORARY PUNISHMENT 102, 105
(Rudolph J. Gerber, Patrick D. McAnany and Norval Morris eds., 1972).
64
Martin P. Kafka, Sex Offending and Sexual Appetite: The Clinical and Theoretical Relevance of
Hypersexual Desire, 47 INT’L J. OF OFFENDER THERAPY AND COMPARATIVE CRIMINOLOGY 439 (2003);
Matthew Jones, Overcoming the Myth of Free Will in Criminal Law: The True Impact of the
Genetic Revolution, 52 DUKE L. J. 1031 (2003); Sanford H. Kadish, Excusing Crime, 75 CAL.
L. REV. 257 (1987).

this way, deterrence, rehabilitation, and incapacitation are all prospective general
purposes of punishment, but each is designed to solve different problems in
preventing further delinquency.65

6.2 Relevance of Sentencing to Artificial Intelligence Systems

6.2.1 Relevant Purposes to Artificial Intelligence Technology

Of the above four purposes of punishment, which are relevant to artificial intelligence technology? Retribution is meant to satisfy society more than it is directed at the offender. Causing suffering to the offender has, in itself, no prospective value. The suffering may deter the offender, but that belongs to the general purpose of deterrence, not retribution. Retribution may supply some catharsis to society and to the victims by causing the offender to suffer. Punishing machines through retribution, in this context, would be meaningless and impractical.
Some people, when they are in a hurry and their car suddenly will not start, become angry. In their anger they may hit the car, kick it, or even shout at it. Punishing machines, any machines, from cars to highly sophisticated artificial intelligence robots, through retribution would be no different from kicking a car. It may ease the anger of some personalities, but no more than that. A machine does not suffer, and as long as retribution is based on suffering, retribution is not very relevant to punishing robots. This is true for both the classic and the modern ("just deserts") approaches to retribution.
Moreover, if retribution functions as a mitigating factor in sentencing, intended to prevent private revenge, this only strengthens retribution's irrelevance to artificial intelligence sentencing. Revenge is assumed to cause the offender greater suffering than the official punishment; since machines do not experience suffering, however, the choice between revenge and retribution is meaningless for them.
Deterrence is meant to prevent the commission of the next offense through intimidation. For machines, at present, intimidation is a feeling they cannot experience. The intimidation itself is based on the future suffering to be imposed if the offense is committed. Since machines do not experience suffering either, as noted above, the basis for intimidation, and not only the intimidation itself, disappears when considering the appropriate punishment for robots. However, both retribution and deterrence may be relevant purposes of punishment with respect to the human participants in the commission of the offense (e.g., users and programmers).
As to rehabilitation, artificial intelligence systems run decision-making processes and may reach decisions that seem unreasonable. Sometimes the artificial intelligence system needs external direction in order to refine its decision-making process. This may be part of the machine learning process.

65
LIVINGSTON HALL AND SHELDON GLUECK, CRIMINAL LAW AND ITS ENFORCEMENT 17 (2nd ed., 1958).

Rehabilitation functions in exactly the same way for humans; therefore it may be applicable to artificial intelligence systems as well. The rehabilitation of humans causes them to make decisions in their daily life that are better from society's point of view. The criminal process may have the same effect on artificial intelligence systems. The punishment, under this approach, would be directed at refining the machine learning process.
After being rehabilitated, the artificial intelligence system would be able to form better and more accurate decisions, once additional limitations have been added to its discretion and the process has been refined through machine learning. Thus, the punishment, if adjusted correctly to the particular artificial intelligence system, would become part of the machine learning process. Through this process, directed by the rehabilitative punishment, the artificial intelligence system would have better tools with which to analyze factual data and deal with it. This is, in fact, the same effect that rehabilitative punishments have on humans: due to the rehabilitative punishment, they have better tools with which to face factual reality.
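The legal argument does not depend on any particular software design, but a minimal, purely illustrative sketch may clarify what adding limitations to the system's discretion could look like in practice. The class names, the rule format, and the idea of logging blocked decisions for later retraining are assumptions made for illustration only, not a description of any existing system.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class RehabilitativeConstraint:
    """A court-ordered limitation on the system's discretion (hypothetical rule format)."""
    description: str
    is_permitted: Callable[[Dict], bool]  # returns False when a proposed decision is prohibited


@dataclass
class RehabilitatedDecisionMaker:
    """Wraps the system's existing decision process with the added limitations."""
    base_policy: Callable[[Dict], Dict]                # the system's original decision process
    constraints: List[RehabilitativeConstraint] = field(default_factory=list)
    violation_log: List[Dict] = field(default_factory=list)

    def decide(self, situation: Dict) -> Dict:
        decision = self.base_policy(situation)
        for rule in self.constraints:
            if not rule.is_permitted(decision):
                # The blocked decision is recorded so that it can later be fed back
                # into retraining, making the correction part of the machine
                # learning process itself rather than a mere external block.
                self.violation_log.append(
                    {"situation": situation, "blocked": decision, "rule": rule.description}
                )
                return {"action": "refer_to_human_supervisor"}
        return decision
```

On this sketch, the violation log becomes additional training data, so that the court-ordered refinement is gradually absorbed into the system's own decision-making rather than remaining an external restriction.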
Consequently, rehabilitation may be a relevant purpose of punishment for artificial intelligence systems, as it is not based on intimidation or suffering but is directed at improving the performance of the artificial intelligence system. For humans this consideration may be secondary in most cases; for artificial intelligence systems, however, it may be a primary purpose of punishment. Nevertheless, rehabilitation is not the only relevant consideration for artificial intelligence systems; incapacitation is relevant as well.
As to incapacitation, if an artificial intelligence system commits offenses during its operation and has no capability of changing its ways through inner changes (e.g., through machine learning), only incapacitation may supply an adequate answer. Whether or not the artificial intelligence system understands the meaning of its activity, and whether or not it is equipped with proper tools to perform inner changes, delinquency must still be prevented. In such a situation society must deprive the artificial intelligence system of its physical capability to commit further offenses. The particular artificial intelligence system must be removed from the circle of delinquency, regardless of its skills. Substantively, this is what society does in equivalent cases of human offenders.66
It may be concluded that for artificial intelligence systems the two relevant purposes of punishment are rehabilitation and incapacitation. They lie at opposite ends of the sentencing spectrum, and both serve the purposes of criminal law with respect to non-human offenders. When the artificial intelligence system possesses the capability of performing inner changes that affect its activity, rehabilitation rather than incapacitation seems to be the relevant consideration. However, when the

66
Martin P. Kafka, Sex Offending and Sexual Appetite: The Clinical and Theoretical Relevance of
Hypersexual Desire, 47 INT’L J. OF OFFENDER THERAPY AND COMPARATIVE CRIMINOLOGY 439 (2003);
Matthew Jones, Overcoming the Myth of Free Will in Criminal Law: The True Impact of the
Genetic Revolution, 52 DUKE L. J. 1031 (2003); Sanford H. Kadish, Excusing Crime, 75 CAL.
L. REV. 257 (1987).

artificial intelligence system does not possess such capabilities, incapacitation would be relevant. Thus, the relevant punishment is adjusted in each case to the personal characteristics of the offender, as is customary for human offenders.

6.2.2 Outlines for Imposition of Specific Punishments on Artificial Intelligence Technology

Given that sentencing considerations are relevant to artificial intelligence systems, the question is how it would be possible to impose human punishments upon them. For instance, how can imprisonment, a fine, or the capital penalty be imposed upon an artificial intelligence system? For this purpose the legal system needs a legal technique of conversion from human penalties to artificial intelligence penalties. The required technique may be inspired by the legal technique used to convert human penalties for corporations.
Corporations are legal entities in criminal law, and criminal liability may be
imposed upon them as if they were human offenders. When a corporation is found
criminally liable, the question of punishment arises. In general, because there is no
legal difference between corporate and human offenders in the imposition of
criminal liability, there is no reason for substantive differences between them in
punishment, at least not from the point of view of the general purposes of punish-
ment. There may be some technical differences, however, in the way certain
punishments are executed.
Retribution relates to the subjective pricing of suffering, which is affected by the
social harm caused by the offense. The social harm is measured objectively,
regardless of the identity of the offender. Although a corporation may cause greater
harm with a lesser effort, retribution considers the actual harm and not the
offender’s capabilities. For the subjective pricing of suffering, the court must
consider the personal characteristics of the corporation (together with the imper-
sonal characteristics of the offense), in the same way it does in relation to human
offenders. Concerning imposition of certain punishments, some adjustments must
be made, as discussed below.
Deterrence relates to the balance between the expected values of benefit and
punishment resulting from the commission of the offense. The effect of deterrence
through punishment on this balance is not different for corporate and human
offenders. Increasing the expected value of the punishment affects the balance in
the same way for both corporate and human offenders. The corporate rationality
required for deterrence is present in the corporate decision-making processes,
which can be fully affected by the deterrent effect of punishment.
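The balance described here can be made concrete with a minimal numerical sketch, assuming the familiar expected-value account of deterrence; the figures are invented purely for illustration and carry no legal significance, and the same calculation applies whether the offender is human or corporate.

```python
# Illustrative only: the expected-value balance underlying deterrence.
benefit = 100_000         # expected gain from committing the offense (assumed figure)
p_conviction = 0.30       # assumed probability of detection and conviction
sanction_value = 400_000  # assumed monetary value of the expected punishment

expected_punishment = p_conviction * sanction_value  # 120,000
deterred = expected_punishment > benefit             # True: the rational offender refrains
print(expected_punishment, deterred)
```

Increasing either the probability of conviction or the severity of the sanction raises the expected punishment and tips the balance toward deterrence, for corporate and human offenders alike.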
Rehabilitation relates to the offender’s personal rehabilitation potential and
seeks an appropriate solution to the sources of the offender's delinquency. As a general purpose of punishment, rehabilitation may be relevant whether the offender
is human or a corporation. A corporation may have rehabilitation potential, as a
corporation, and its delinquency may have reasons that can be treated appropriately.
Occasionally the offense reveals a delinquent organizational subculture within a

corporation that encourages offending and provides incentives for it, directly or
indirectly (by disregarding offenses or by unwillingness to prevent their
commission).67
Imposing criminal liability on an officer in the corporation for the commission of
a given offense while disregarding the roots of the delinquency within the corpora-
tion cannot provide an effective solution to corporate delinquency. Often there is
only a minimal difference between a corporation that is incapable of changing its
delinquent subculture and the associated decision-making process, and a corpora-
tion that accepts that subculture.68 At times, the reasons for delinquency are
objective (e.g., internal power struggles that paralyze the operation of the corpora-
tion). Rehabilitation may appropriately address the roots of the delinquent subculture.
Incapacitation seeks to physically prevent reoffending and stop the social
endangerment posed by the offender. The social endangerment posed by the
offender is evaluated in the same way, whether the offender is human or a
corporation. The opportunities to commit offenses are examined objectively,
based on the behavior of the offender, whether human or a corporation. For
example, the opportunity to release a false report to the tax authorities is based on
the behavior of the offender, not on the offender’s identity. In this context, the
measures of incapacitation are determined based on the social endangerment posed
by the offender, regardless of the offender’s legal identity.
In conclusion, there is no legal difference between human and corporate
offenders as far as the general purposes of punishment are concerned, but there
may be some differences in the way in which certain punishments are carried out.
When a fine is imposed, there is not much difference between human and corporate
offenders, and paying the fine is not physically different from paying taxes. But the
question arises how imprisonment is carried out when the offender is a corporation.
The same question may arise in the case of probation, capital penalty, public
service, etc., all of which are interpreted as physical punishments.
Because no physical punishments have been planned ex ante for corporations, it
has been argued that they are inapplicable to corporations and that therefore, in
these cases, corporations are unpunishable.69 This argument is incorrect for two
main reasons. First, in the case of most offenses the punishment can be converted
into other punishments, including fines. Second, in general, all punishments are

67
PETER A. FRENCH, COLLECTIVE AND CORPORATE RESPONSIBILITY 47 (1984).
68
Stuart Field and Nico Jorg, Corporate Liability and Manslaughter: Should We Be Going
Dutch?, [1991] Crim. L.R. 156 (1991).
69
HARRY G. HENN AND JOHN R. ALEXANDER, CORPORATIONS AND OTHER BUSINESS ENTERPRISES
184 (3rd ed., 1983); People v. Strong, 363 Ill. 602, 2 N.E.2d 942 (1936); State v. Traux,
130 Wash. 69, 226 P. 259 (1924); United States v. Union Supply Co., 215 U.S. 50, 30 S.Ct.
15, 54 L.Ed. 87 (1909); State v. Ice & Fuel Co., 166 N.C. 366, 81 S.E. 737 (1914); Commonwealth
v. McIlwain School Bus Lines Inc., 283 Pa.Super. 1, 423 A.2d 413 (1980).

applicable and relevant to both humans and corporations,70 although in the case of
some punishments it is necessary to make some adjustments. These adjustments,
however, do not negate the applicability of the punishments.71
Not only has criminal liability been imposed upon corporations for centuries, but
corporations have also been sentenced, and not only to fines. Corporations are
punished in various ways, including imprisonment. Note that corporations are
punished separately from their human officers (directors, managers, employees,
etc.), exactly in the way that criminal liability is imposed upon them separately
from the criminal liability, if any, of their human officers. There is no debate over the question of whether corporations should be punished using a variety of punishments, including imprisonment; the question concerns only the way in which to do it.72
To answer the question of “how,” a general legal technique of conversion is
needed. This operation is carried out in three principal stages. First, the general
punishment itself (e.g., imprisonment, fine, probation, death, etc.) is analyzed
regarding its roots of meaning. Second, these roots are sought in the corporation.
Third, the punishment is adjusted according to the roots found in the corporation.
For example, in the case of imposition of incarceration on corporations, first
incarceration is traced back to its roots in the act of depriving individuals of their
freedom, then a meaning is sought for the concept of freedom for corporations.
After this meaning has been understood, in the third and final stage the court
imposes a punishment that is the equivalent of depriving a corporation of its
freedom. This is how the general legal technique of conversion works in the case
of sentencing of corporations. At times, this requires the court to be creative in the
adjustments required to make punishments applicable to corporations, but the
general framework is clear, workable, and it has been implemented with all types
of punishments imposed on all types of corporations.73

70
John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry Into the
Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981); STEVEN BOX, POWER, CRIME AND
MYSTIFICATION 16–79 (1983); Brent Fisse and John Braithwaite, The Allocation of Responsibility
for Corporate Crime: Individualism, Collectivism and Accountability, 11 SYDNEY L. REV.
468 (1988).
71
Gerard E. Lynch, The Role of Criminal Law in Policing Corporate Misconduct, 60 LAW &
CONTEMP. PROBS. 23 (1997); Richard Gruner, To Let the Punishment Fit the Organization:
Sanctioning Corporate Offenders Through Corporate Probation, 16 AM. J. CRIM. L. 1 (1988);
Steven Walt and William S. Laufer, Why Personhood Doesn’t Matter: Corporate Criminal
Liability and Sanctions, 18 AM. J. CRIM. L. 263 (1991).
72
Stuart Field and Nico Jorg, Corporate Liability and Manslaughter: Should We Be Going
Dutch?, [1991] Crim. L.R. 156 (1991).
73
Gerard E. Lynch, The Role of Criminal Law in Policing Corporate Misconduct, 60 LAW &
CONTEMP. PROBS. 23 (1997); Richard Gruner, To Let the Punishment Fit the Organization:
Sanctioning Corporate Offenders Through Corporate Probation, 16 AM. J. CRIM. L. 1 (1988);
Steven Walt and William S. Laufer, Why Personhood Doesn’t Matter: Corporate Criminal
Liability and Sanctions, 18 AM. J. CRIM. L. 263 (1991).

An excellent example is the American case of the Allegheny Bottling Company,74 a corporation that was found to be guilty of price-fixing (antitrust). It was
agreed that under the given circumstances, if the defendant were human, the
appropriate punishment would be imprisonment for a certain term. The question
was one of the applicability of imprisonment to corporations, in other words, a
question of “how.” As a general principle, the court declared that it “does not expect
a corporation to have consciousness, but it does expect it to be ethical and abide by
the law.”75
The court did not find any substantive difference between humans and
corporations in this matter and added that “[t]his court will deal with this company
no less severely than it will deal with any individual who similarly disregards the
law.”76 This statement reflects the basic principle of equalizing punishments of
human and corporate defendants.77 In this case, the corporation was sentenced to
3 years imprisonment, a fine of 1 million dollars, and probation for a period of
3 years. The court proceeded to discuss the idea of corporate imprisonment based
on the three stages described above.
First, the court asked what the general meanings of imprisonment were and
accepted the definitions of imprisonment as “constraint of a person either by force
or by such other coercion as restrains him within limits against his will” and as
“forcible restraint of a person against his will.” The court’s conclusion was simple
and clear: “[t]he key to corporate imprisonment is this: imprisonment simply means
restraint” and “restraint, that is, a deprivation of liberty.” The court’s conclusion
was reinforced by several provisions of the law and by case law as well. Conse-
quently, “[t]here is imprisonment when a person is under house arrest, for example,
where a person has an electronic device which sends an alarm if the person leaves
his own house.”
This concluded the first stage. In the second stage, the court searched for a
meaning of this punishment for corporations and concluded that “[c]orporate
imprisonment requires only that the Court restrain or immobilize the corporation”78
and proceeded to implement the prison sentence on the corporation according to
this insight. Thus, in the third and final stage the court made imprisonment
applicable to the corporations and implemented it.79

74
United States v. Allegheny Bottling Company, 695 F.Supp. 856 (1988).
75
Ibid, at p. 858.
76
Ibid.
77
John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry Into the
Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981); STEVEN BOX, POWER, CRIME AND
MYSTIFICATION 16–79 (1983); Brent Fisse and John Braithwaite, The Allocation of Responsibility
for Corporate Crime: Individualism, Collectivism and Accountability, 11 SYDNEY L. REV.
468 (1988).
78
Allegheny Bottling Company case, supra note 75, at p. 861.
79
Ibid, at p. 861: “Such restraint of individuals is accomplished by, for example, placing them in
the custody of the United States Marshal. Likewise, corporate imprisonment can be accomplished
by simply placing the corporation in the custody of the United States Marshal. The United States

Thus, imprisonment may be applied not only to human but also to corporate offenders. Following the same approach, imprisonment is not the only penalty applicable to corporations; other penalties can be converted as well, even if they were originally designed for human offenders. And if this is true for imprisonment, which is an essentially human penalty, a fine can easily be collected from corporations in the same way as taxes are. Thus, in determining the type of punishment and its scope based on the general purposes of punishment, it is immaterial whether the offense was committed by a human or by a corporation. After the court imposes the appropriate punishment, it may be necessary to make some adjustments to some of the punishments.
This insight raises, of course, the equivalent question for artificial intelligence systems. Using the general legal technique of conversion presented above, taken from the world of corporate delinquency, human punishments may be made applicable to, and actually imposed upon, artificial intelligence systems in the very same way they are made applicable to and imposed upon corporations. We shall now examine the applicability to artificial intelligence systems of each of the common punishments in modern criminal law: the capital penalty, imprisonment, probation, public service, and the fine.
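Purely as an illustrative summary, anticipating the subsections that follow, the functional "roots" of each punishment and their equivalents for artificial intelligence systems may be set out as follows. The substance of the mapping is drawn from the discussion below; the tabular representation itself is only a sketch.

```python
# Illustrative mapping of the three-stage conversion technique to artificial
# intelligence systems; the content is taken from the subsections below.
CONVERSION_TABLE = {
    "capital penalty": {
        "functional root": "deprivation of life",
        "ai equivalent": "permanent shutdown with no option of reactivation",
    },
    "imprisonment": {
        "functional root": "deprivation of liberty",
        "ai equivalent": "restriction of activity for a determinate term under tight supervision",
    },
    "probation": {
        "functional root": "supervised reintegration",
        "ai equivalent": "continued operation under court-ordered supervision while inner processes are repaired",
    },
    "public service": {
        "functional root": "supervised compensation to society",
        "ai equivalent": "supervised work for public bodies (e.g., public hospitals)",
    },
    "fine": {
        "functional root": "forced contribution of valuable property",
        "ai equivalent": "contribution of working hours of equivalent monetary value",
    },
}
```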

Marshal would restrain the corporation by seizing the corporation’s physical assets or part of the
assets or restricting its actions or liberty in a particular manner. When this sentence was
contemplated, the United States Marshal for the Eastern District of Virginia, Roger Ray, was
contacted. When asked if he could imprison Allegheny Pepsi, he stated that he could. He stated
that he restrained corporations regularly for bankruptcy court. He stated that he could close the
physical plant itself and guard it. He further stated that he could allow employees to come and go
and limit certain actions or sales if that is what the Court imposes. Richard Lovelace said some
three hundred years ago, ‘stone walls do not a prison make, nor iron bars a cage.’ It is certainly true
that we erect our own walls or barriers that restrain ourselves. Any person may be imprisoned if
capable of being restrained in some fashion or in some way, regardless of who imposes it. Who am
I to say that imprisonment is impossible when the keeper indicates that it can physically be done?
Obviously, one can restrain a corporation. If so, why should it be more privileged than an
individual citizen? There is no reason, and accordingly, a corporation should not be more
privileged. Cases in the past have assumed that corporations cannot be imprisoned, without any
cited authority for that proposition. . . . This Court, however, has been unable to find any case
which actually held that corporate imprisonment is illegal, unconstitutional or impossible. Con-
siderable confusion regarding the ability of courts to order a corporation imprisoned has been
caused by courts mistakenly thinking that imprisonment necessarily involves incarceration in jail.
. . . But since imprisonment of a corporation does not necessarily involve incarceration, there is no
reason to continue the assumption, which has lingered in the legal system unexamined and without
support, that a corporation cannot be imprisoned. Since the Marshal can restrain the corporation’s
liberty and has done so in bankruptcy cases, there is no reason that he cannot do so in this case as he
himself has so stated prior to the imposition of this sentence”.

6.2.2.1 Capital Penalty of Artificial Intelligence Technology


The death penalty is one of the most ancient penalties in human history.80 It is considered the most severe penalty in most cultures. In the past it was a very common penalty; since the eighteenth century, however, the general global approach has been to restrict it and minimize its use. Accordingly, the capital penalty has been replaced by more lenient penalties for very many offenses, and methods of execution were developed to cause minimal suffering to the offender.81 Thus, for instance, in the eighteenth century the noose was lengthened to cause a faster and painless break of the neck, and the guillotine was used for the same reason.
Cruel methods of execution were prohibited. For instance, tearing apart the offender's body by tying his legs and hands to running horses was prohibited because of the suffering it caused the offender. Some countries totally abolished the capital penalty, but most countries in the world did not. The United States Supreme Court ruled in 1976 that the capital penalty in the relevant cases does not infringe the eighth amendment of the constitution, as it is not cruel and unusual, and therefore it is constitutionally valid.82 The methods of execution were not considered unconstitutional either.83
Retribution supports the capital penalty only for severe offenses in which the penalty parallels the offense (in suffering or in result), e.g., homicide offenses.84 Deterrence may support the capital penalty only when it is directed at deterring the public rather than the offender, since a dead person cannot be deterred.85 Rehabilitation is completely irrelevant to this punishment, as dead people cannot be rehabilitated. Since retribution and deterrence are irrelevant to artificial intelligence sentencing, as noted above, and since rehabilitation is irrelevant to the capital

80
RUSS VERSTEEG, EARLY MESOPOTAMIAN LAW 126 (2000); G. R. DRIVER AND JOHN C. MILES, THE
BABYLONIAN LAWS, VOL. I: LEGAL COMMENTARY 206, 495–496 (1952): “The capital penalty is most
often expressed by saying that the offender ‘shall be killed’. . .; this occurs seventeen times in the
first thirty-four sections. A second form of expression, which occurs five times, is that ‘they shall
kill’. . . the offender”.
81
Frank E. Hartung, Trends in the Use of Capital Punishment, 284(1) ANNALS OF THE AMERICAN
ACADEMY OF POLITICAL AND SOCIAL SCIENCE 8 (1952).
82
Gregg v. Georgia, 428 U.S. 153, 96 S.Ct. 2909, 49 L.Ed.2d 859 (1976).
83
Provenzano v. Moore, 744 So.2d 413 (Fla. 1999); Dutton v. State, 123 Md. 373, 91 A. 417
(1914); Campbell v. Wood, 18 F.3d 662 (9th Cir. 1994); Wilkerson v. Utah, 99 U.S. (9 Otto)
130, 25 L.Ed. 345 (1878); People v. Daugherty, 40 Cal.2d 876, 256 P.2d 911 (1953); Gray
v. Lucas, 710 F.2d 1048 (5th Cir. 1983); Hunt v. Nuth, 57 F.3d 1327 (4th Cir. 1995).
84
ROBERT M. BOHM, DEATHQUEST: AN INTRODUCTION TO THE THEORY AND PRACTICE OF CAPITAL
PUNISHMENT IN THE UNITED STATES 74 (1999).
85
Peter Fitzpatrick, “Always More to Do”: Capital Punishment and the (De)Composition of Law,
THE KILLING STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE 117 (Austin Sarat ed.,
1999); Franklin E. Zimring, The Executioner’s Dissonant Song: On Capital Punishment and
American Legal Values, THE KILLING STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE
137 (Austin Sarat ed., 1999).

penalty, the only general consideration that may support the capital penalty in artificial intelligence sentencing is incapacitation.
There is no doubt that a dead person is incapacitated with respect to the commission of further offenses; therefore the dominant punishment consideration behind the death penalty is incapacitation.86 Death neutralizes the delinquent capabilities of the offender, and no further offense may be committed. Accordingly, the question concerns the applicability of the capital penalty to artificial intelligence systems: how can the death penalty be imposed upon them?
First, the capital penalty should be analyzed as to the roots of its meaning. Second, these roots should be sought in artificial intelligence systems. Third, the punishment should be adjusted to the roots found in artificial intelligence systems. Functionally, the capital penalty is the deprivation of life. Although this deprivation of life may affect not only the executed offender but other persons (e.g., relatives, employees, etc.) as well, the essence of the death penalty is the death of the offender, which consists in depriving the offender of life. When the offender is human, life means the person's very existence as a functioning creature. When the offender is a corporation or an artificial intelligence system, its life may be defined through its activity.
A living artificial intelligence system is a functioning artificial intelligence system; therefore the "life" of an artificial intelligence system is its capability of functioning as such. Stopping the artificial intelligence system's activity does not necessarily mean the "death" of the system. Death means the permanent incapacitation of the system's "life". Therefore, the capital penalty for an artificial intelligence system means its permanent shutdown. This act incapacitates the system's capabilities, and no further offenses or any other activity is to be expected. When the artificial intelligence system is shut down by the court's order, it means that society prohibits the operation of that particular entity because it is too dangerous to society.
Applying the capital penalty to artificial intelligence systems in this way serves both the purposes of the capital penalty and incapacitation (as a general purpose of sentencing) in relation to artificial intelligence systems. When the offender is too dangerous to society and society decides to impose the death penalty, prospectively, if this punishment is acceptable in the particular legal system, it is aimed at the total and final incapacitation of the offender. This is true for human offenders, for corporations, and for artificial intelligence systems. For artificial intelligence systems the permanent incapacitation is expressed by an absolute shutdown under the court's order, with no option of ever reactivating the system.
Such a system would no longer be involved in delinquent events. It may be argued that such a shutdown may affect other innocent persons (e.g., the manufacturer of

86
Anne Norton, After the Terror: Mortality, Equality, Fraternity, THE KILLING STATE – CAPITAL
PUNISHMENT IN LAW, POLITICS, AND CULTURE 27 (Austin Sarat ed., 1999); Hugo Adam Bedau,
Abolishing the Death Penalty Even for the Worst Murderers, THE KILLING STATE – CAPITAL
PUNISHMENT IN LAW, POLITICS, AND CULTURE 40 (Austin Sarat ed., 1999).

the system, its programmers, users, etc.). However, this is true not only for artificial intelligence systems but for human offenders and corporations as well. When the offender is executed, the execution affects his innocent family (in the case of a human offender) or its employees, directors, managers, shareholders, etc. (in the case of a corporation). When the offender is an artificial intelligence system, it likewise affects other innocent persons, and this is not unique to artificial intelligence systems. Thus, the capital penalty may be applicable to artificial intelligence systems.
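The distinction between a reversible stoppage of activity and the permanent shutdown described here can be made concrete in a short, hypothetical sketch; the class, the file-based storage of the model parameters, and the method names are assumptions for illustration only.

```python
import os


class AiSystemCustodian:
    """Hypothetical custodian of an AI system's executable assets."""

    def __init__(self, parameters_path: str):
        self.parameters_path = parameters_path
        self.suspended = False

    def suspend(self) -> None:
        # Reversible incapacitation: activity is halted, but the system may later
        # be reactivated (closer to imprisonment than to the capital penalty).
        self.suspended = True

    def decommission(self) -> None:
        # The "capital penalty" of the text: the system is shut down permanently,
        # and its parameters are destroyed so that this particular entity cannot
        # be reactivated.
        if os.path.exists(self.parameters_path):
            os.remove(self.parameters_path)
        self.suspended = True
```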

6.2.2.2 Imprisonment and Suspended Imprisonment of Artificial Intelligence Technology
Imprisonment is a general title for various penalties. The common characteristic of these penalties is the deprivation of the offender's liberty. Physical incarceration is only one type of imprisonment, although it is the most common type. Thus, public service, for instance, may function under certain circumstances as imprisonment. The purposes of imprisonment have differed across societies and times. Sometimes the purpose was to make the offender suffer, sometimes it was to rehabilitate the offender under tight discipline, and so on.87 Since the eighteenth century, however, a dominant consideration in the evaluation of imprisonment has been its social efficiency.88
The social efficiency of imprisonment is evaluated by the rate of recidivism. If the rate of recidivism remains constant or increases, imprisonment is not considered socially efficient. Therefore, the imposition of imprisonment has been used to initiate an inner change in the offender for the benefit of society.89 Thus, during their stay in prison, prisoners have been taught professions, reading and writing, weaning from drugs, alcohol, and violence, work experience, and many more skills. Imprisonment was adapted to the relevant populations, which led, for instance, to the creation of the supermax90 and of shock incarceration.91 In modern times, however, imprisonment has become a very popular penalty, and its most acute problem is that prisons are overcrowded.
Retribution supports imprisonment, especially incarceration, since it makes

87
Sean McConville, The Victorian Prison: England 1865–1965, THE OXFORD HISTORY OF THE
PRISON 131 (Norval Morris and David J. Rothman eds., 1995); THORSTEN J. SELLIN, SLAVERY AND
THE PENAL SYSTEM (1976); HORSFALL J. TURNER, THE ANNALS OF THE WAKEFIELD HOUSE OF
CORRECTIONS FOR THREE HUNDRED YEARS 154–172 (1904).
88
JOHN HOWARD, THE STATE OF PRISONS IN ENGLAND AND WALES (1777, 1996).
89
David J. Rothman, For the Good of All: The Progressive Tradition in Prison Reform, HISTORY
AND CRIME 271 (James A. Inciardi and Charles E. Faupel eds., 1980).
90
Roy D. King, The Rise and Rise of Supermax: An American Solution in Search of a Problem?,
1 PUNISHMENT AND SOCIETY 163 (1999); CHASE RIVELAND, SUPERMAX PRISONS: OVERVIEW AND
GENERAL CONSIDERATIONS (1999); JAMIE FELLNER AND JOANNE MARINER, COLD STORAGE: SUPER-
MAXIMUM SECURITY CONFINEMENT IN INDIANA (1997).
91
DORRIS LAYTON MACKANZIE AND EUGENE E. HEBERT, CORRECTIONAL BOOT CAMPS: A TOUGH
INTERMEDIATE SANCTION (1996); Sue Frank, Oklahoma Camp Stresses Structure and Discipline,
53 CORRECTIONS TODAY 102 (1991); ROBERTA C. CRONIN, BOOT CAMPS FOR ADULT AND JUVENILE
OFFENDERS: OVERVIEW AND UPDATE (1994).

the prisoner suffer. Deterrence supports imprisonment because the suffering in prison may deter the offender from recidivism and potential offenders from offending. However, both retribution and deterrence are irrelevant to artificial intelligence systems, as such systems experience neither suffering nor fear. Since rehabilitation and incapacitation are relevant to artificial intelligence systems, imprisonment should be evaluated accordingly.
When the offender’s liberty is deprived, the society may use this term for the
offender’s rehabilitation through initiation of inner change. The inner change may
be the consequence of activity in prison, as aforesaid. If the offender accepts that
inner change and does not return to delinquency, imprisonment is considered to be
successful, as the offender has been rehabilitated. Moreover, when the offender is
under strict supervision within the prison, his capabilities to commit further
offenses is dramatically reduced, and this may be considered incapacitation, if
that supervision actually prevents the offender from recidivism.
Accordingly, the question concerns the applicability of imprisonment to artificial intelligence systems: how can imprisonment be imposed upon them? First, imprisonment should be analyzed as to the roots of its meaning. Second, these roots should be sought in artificial intelligence systems. Third, the punishment should be adjusted to the roots found in artificial intelligence systems.
Functionally, imprisonment is the deprivation of liberty. Although this deprivation of liberty may affect not only the imprisoned offender but other persons (e.g., relatives, employees, etc.) as well, the essence of imprisonment is the deprivation of the offender's liberty, which consists in restricting the offender's activity. When the offender is human, liberty means the person's freedom to act in any way. When the offender is a corporation or an artificial intelligence system, its liberty may likewise be defined through its activity. The liberty of an artificial intelligence system is the exercise of its capabilities without restriction; this concerns both the very exercise of those capabilities and their content.
Consequently, the imposition of imprisonment on an artificial intelligence system is expressed by depriving it of its liberty to act, by restricting its activity for a determinate term and under tight supervision. During this time the artificial intelligence system may be repaired in order to prevent the commission of further offenses. The repair may be more efficient while the system is incapacitated and under the court's order. This situation may serve both rehabilitation and incapacitation, which are the relevant sentencing purposes for artificial intelligence systems. When the artificial intelligence system is under custody, restriction, and supervision, its actual capability to offend is incapacitated.
When the artificial intelligence system is being repaired through inner changes, initiated by external factors (e.g., programmers acting under the court's order) and undergone during the term of restriction, this is substantive rehabilitation, for the repair reduces the chances of involvement in further delinquency. The actual social value of imposing imprisonment on artificial intelligence systems is real. The dangerous system is taken away from society in order to be repaired,

and meanwhile it would not be capable of causing further harm to society. When
this process is complete, the system may be returned to full activity. If the system
has no chance of rehabilitation, incapacitation may take the major role and dictate a long period of imprisonment or even the capital penalty.
Suspended imprisonment is a conditional penalty.92 The offender is warned that if a further offense is committed, the full penalty of imprisonment will be imposed for the new offense, and in addition the offender will have to serve a term of imprisonment for the first offense. This penalty is intended to keep the offender away from offending, at least while the condition is still valid. The actual imposition of this penalty consists of adding the relevant line to the offender's criminal record. Therefore, the relevant question here does not concern the actual execution of the penalty, but its social meaning when imposed on artificial intelligence systems.
For an artificial intelligence system, suspended imprisonment is an alert calling for reconsideration of its course of conduct. This process may be led by programmers, users, and manufacturers, in the same way that human offenders may be assisted by their relatives or by professionals (e.g., psychologists, social workers, etc.) and corporate offenders may be assisted by their officers or by professionals. This is a more lenient measure that calls for reconsidering the course of conduct. In essence, suspended imprisonment is not substantially different in this context from imprisonment, although for humans it may be extremely different, since it spares the human offender the suffering and keeps him away from delinquency through intimidation (deterrence).

6.2.2.3 Probation of Artificial Intelligence Technology


The overcrowded prisons of modern times have driven the state to develop substitutes. One of the popular substitutes is probation. Probation was developed in the mid-nineteenth century by private charity and religious organizations, which undertook to care for convicted offenders by supplying them with social tools to abandon delinquency and reintegrate into society.93 Most countries have embraced probation as part of their public sentencing system; the first state to do so was Massachusetts, in 1878. Accordingly, probation came to be operated by the state, and the state supervised the process.94
The social tools given to the offender vary from case to case. When the offender is drug-addicted, the social tools include a weaning program. When the offender is unemployed or has no profession, the social tools include vocational or professional training. During the time the offender is under probation, the authorities supervise

92
MARC ANCEL, SUSPENDED SENTENCE 14–17 (1971); Marc Ancel, The System of Conditional
Sentence or Sursis, 80 L. Q. REV. 334, 336 (1964).
93
United Nations, Probation and Related Measures, UN DEPARTMENT OF SOCIAL AFFAIRS 29–30
(1951).
94
DAVID J. ROTHMAN, CONSCIENCE AND CONVENIENCE: THE ASYLUM AND ITS ALTERNATIVES IN PROGRES-
SIVE AMERICA (1980); FRANK SCHMALLEGER, CRIMINAL JUSTICE TODAY: AN INTRODUCTORY TEXT FOR
THE 21st CENTURY 454 (2003).

and inspect him to ensure that no further offenses are committed. Probation is a dominantly rehabilitative penalty, and it suits offenders who have a high potential for rehabilitation. Consequently, the court needs an accurate diagnosis of that potential, prepared by the probation service, in order to sentence the offender to probation.95
Retribution is irrelevant to probation, as probation is not intended to make the offender suffer. Deterrence is not relevant to probation either, since probation is perceived as a lenient penalty whose deterrent value is negligible. Moreover, both retribution and deterrence are irrelevant to artificial intelligence sentencing, as noted above. Incapacitation is relevant to artificial intelligence sentencing, but it is not reflected in probation, unless the particular probation framework is extremely tight and the offender's delinquent capabilities are actually incapacitated. The dominant purpose of probation is, of course, rehabilitation, as probation is intended to rehabilitate the offender by giving him relevant social measures for reintegrating into society.
Accordingly, the question concerns the applicability of probation to artificial intelligence systems: how can probation be imposed upon them? First, probation should be analyzed as to the roots of its meaning. Second, these roots should be sought in artificial intelligence systems. Third, the punishment should be adjusted to the roots found in artificial intelligence systems.
Functionally, probation means supervising the offender and granting him measures for reintegrating into society. These measures should match the particular type of delinquency that was the immediate cause of the offender's sentencing.96 The probation process functions as a functional correction of the offender. When an offense is committed by an artificial intelligence system, the system should be diagnosed to determine whether it can be repaired or not. At the equivalent stage human offenders are diagnosed as to their potential for rehabilitation. Both kinds of diagnosis are performed by professionals: human offenders may be diagnosed by probation-service staff, social workers, psychologists, psychiatrists, physicians, etc.
Artificial intelligence systems may be diagnosed by technology experts. If the diagnosis shows no potential for rehabilitation, the ultimate purpose of sentencing becomes incapacitation, as society wishes to prevent further harm. However, if the diagnosis is positive and the offender has a high potential to be rehabilitated, probation may be taken into consideration for implementing the rehabilitative purpose of sentencing. This is true for human offenders, corporations, and artificial intelligence systems alike. The core question within artificial

95
Paul W. Keve, The Professional Character of the Presentence Report, 26 FEDERAL PROBATION
51 (1962).
96
HARRY E. ALLEN, ERIC W. CARLSON AND EVALYN C. PARKS, CRITICAL ISSUES IN ADULT PROBATION
(1979); Crystal A. Garcia, Using Palmer’s Global Approach to Evaluate Intensive Supervision
Programs: Implications for Practice, 4 CORRECTION MANAGEMENT QUARTERLY 60 (2000); ANDREW
WRIGHT, GWYNETH BOSWELL AND MARTIN DAVIES, CONTEMPORARY PROBATION PRACTICE (1993);
MICHAEL CAVADINO AND JAMES DIGNAN, THE PENAL SYSTEM: AN INTRODUCTION 137–140 (2002).

intelligence system diagnosis is whether it can be repaired in its current configuration (e.g., through machine learning) or not.
If the conclusion is that the appropriate penalty is probation, it should be applied with regard to the particular problems raised by the offender's delinquency. This is true for all types of offenders. The purpose of the particular treatment is to fix a certain problem; therefore it should match that problem. The difference between probation and imprisonment that includes such treatment is the incapacitation of the offender's delinquent capabilities while the penalty is carried out. Thus, if the offender is less dangerous during the treatment, probation may be suitable; but if the offender must be incapacitated during the treatment, imprisonment may be suitable.
As a result, if society concludes that the artificial intelligence system may continue functioning while under treatment, probation may be suitable. If not, because that would be too dangerous to society, imprisonment may be suitable. When probation is imposed, the artificial intelligence system should begin the treatment, that is, the process of repairing its inner processes. For some systems the appropriate intervention is in the machine learning process, for some it is in the hardware, and for some it is in the basic software of the system. During this process the artificial intelligence system continues its routine activity, but only under the supervision imposed by the court's order.
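The difference between the two court-ordered regimes, restriction of all activity while the repair is performed (imprisonment) and continued activity under supervision (probation), can be sketched as follows; the mode names, the decide interface of the supervised system, and the supervisor log are illustrative assumptions only.

```python
from enum import Enum
from typing import Any, Dict, List, Optional


class SentenceMode(Enum):
    IMPRISONMENT = "imprisonment"  # operation halted; repair performed offline
    PROBATION = "probation"        # operation continues under supervision


def operate(system: Any, situation: Dict, mode: SentenceMode,
            supervisor_log: List[Dict]) -> Optional[Dict]:
    """Illustrative dispatcher showing how the two regimes might differ in practice."""
    if mode is SentenceMode.IMPRISONMENT:
        # Incapacitation: the system is not permitted to act at all during the term.
        supervisor_log.append({"event": "request refused during term", "situation": situation})
        return None
    # Probation: the system keeps working, but every decision is recorded for the
    # court-appointed supervisors who direct the repair of its inner processes.
    decision = system.decide(situation)
    supervisor_log.append({"event": "supervised decision",
                           "situation": situation, "decision": decision})
    return decision
```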
Socially and functionally, probation operates in a substantively identical way for human offenders, corporations, and artificial intelligence systems. The special attributes of each offender require different treatment, but that is equally true of different human offenders. Of course, the manufacturers, programmers, and users may initiate a repair process for the artificial intelligence system without any intervention of the court; when the court orders it, however, the order signifies society's will. In the same way, a drug-addicted person (or his family) may initiate a weaning process without the court's intervention, but when the court orders it, the order signifies society's will.

6.2.2.4 Public Service of Artificial Intelligence Technology


Public service, or community service, is a substitute for imprisonment, prompted by overcrowded prisons. For offenses that are not severe, the court may impose public service on the offender instead of, or in addition to, other penalties.97 The offender is not incapacitated by this penalty but is made to contribute to society as social "compensation" for his involvement in delinquency. In this way society signals that this sort of delinquency is unacceptable, but because the social harm is not severe, lenient measures suffice to bring about the required inner change in the offender.

97
John Harding, The Development of the Community Service, ALTERNATIVE STRATEGIES FOR COPING
WITHCRIME 164 (Norman Tutt ed., 1978); HOME OFFICE, REVIEW OF CRIMINAL JUSTICE POLICY (1977);
Ashlee Willis, Community Service as an Alternative to Imprisonment: A Cautionary View,
24 PROBATION JOURNAL 120 (1977).

Public service has another dimension, as it relates to the community. It is carried out within the offender's community to signify that the offender is part of that community and that harming the community reflects on the offender.98 In many cases public service is added to probation in order to raise the chances of the offender's full rehabilitation within the community.99 Public service has more than mere compensational value: it is intended to make the offender understand the needs of the community and become sensitive to them. Public service is part of the learning and reintegration processes that the offender experiences.
Retribution is irrelevant to public service, as public service is not intended to make the offender suffer. Deterrence is not relevant to public service either, since public service is perceived as a lenient penalty whose deterrent value is negligible. Moreover, both retribution and deterrence are irrelevant to artificial intelligence sentencing, as noted above. Incapacitation is relevant to artificial intelligence sentencing, but it is not reflected in public service, unless the particular public service framework is extremely tight and the offender's delinquent capabilities are actually incapacitated.
The dominant purpose of public service, however, is rehabilitation, as it is intended to rehabilitate the offender through learning and reintegration into society. Accordingly, the question concerns the applicability of public service to artificial intelligence systems: how can public service be imposed upon them? First, public service should be analyzed as to the roots of its meaning. Second, these roots should be sought in artificial intelligence systems. Third, the punishment should be adjusted to the roots found in artificial intelligence systems.
Functionally, public service is supervised compensation to society through the experience of integration with society. The offender widens his experience with society, and that enables him to integrate more easily. Widening the offender's social experience is beneficial for society, as it includes a compensational dimension. Social experience is not exclusive to human offenders: both corporations and artificial intelligence systems have strong interactions with the community. Public service may empower and strengthen these interactions and make them the basis for the required inner change.
For instance, a medical expert artificial intelligence system, equipped with machine learning capabilities, is used in a private clinic to provide more accurate medical diagnoses to patients. The system is found to have been negligent, and the court imposes public service. Consequently, in order to implement this penalty, the system may be used by the public medical services or by public

98
Julie Leibrich, Burt Galaway and Yvonne Underhill, Community Sentencing in New Zealand: A
Survey of Users, 50 FEDERAL PROBATION 55 (1986).
99
James Austin and Barry Krisberg, The Unmet Promise of Alternatives, 28 JOURNAL OF RESEARCH
IN CRIME AND DELINQUENCY 374 (1982); Mark S. Umbreit, Community Service Sentencing: Jail
Alternatives or Added Sanction?, 45 FEDERAL PROBATION 3 (1981).

hospitals. This serves two main goals. First, the system is exposed to more cases, and through machine learning it may optimize its functioning. Second, this may be considered compensation to society for the harm caused by the offense.
At the end of the public service term the artificial intelligence system is more experienced, and if the machine learning process has been effective, the system's performance is optimized. Since public service is supervised, whether or not it is accompanied by probation, the machine learning process or other inner processes are directed, within the public service, toward the prevention of further offenses. By the end of the public service, the artificial intelligence system has contributed time and resources for the benefit of society, and this may be considered compensation for the social harm caused by the commission of the offense. Thus, the public service of artificial intelligence systems resembles human public service in its substance.
It may be argued that this compensation is actually contributed by the manufacturers or users of the system, since they suffer from the absence of the system's activity. This is true not only for artificial intelligence systems but also for human offenders and corporations. When a human offender carries out public service, his absence is felt by his family and relatives. When a corporation carries out public service, its resources are unavailable to its workers, directors, clients, etc. This absence is part of carrying out the public service, regardless of the identity of the offender; therefore artificial intelligence systems are not unique in this context.

6.2.2.5 Fine of Artificial Intelligence Technology


A fine is a payment made by the offender to the state treasury. It is not considered compensation, as it is given not to the direct victims of the offense but to society as such. If society is regarded as the victim of every offense, the fine may be considered a type of general compensation. The fine developed from the general remedy of compensation, at a time when the criminal process was still conducted between two individuals (private plaintiff versus defendant).100 When criminal law became public, the original compensation was converted into the fine. Today criminal courts may impose both fines and compensation within the criminal process.
In the eighteenth century the fine was not considered a preferred penalty, since it was not regarded as a deterrent in comparison with imprisonment.101 However, during the twentieth century, when prisons became overcrowded and the costs of holding prisoners in state prisons increased, the penal system was criticized and urged to encour-

100
FIORI RINALDI, IMPRISONMENT FOR NON-PAYMENT OF FINES (1976); GERHARDT GREBING, THE FINE IN
COMPARATIVE LAW: A SURVEY OF 21 COUNTRIES (1982).
101
LEON RADZINOWICZ AND ROGER HOOD, A HISTORY OF ENGLISH CRIMINAL LAW AND ITS ADMINISTRA-
TION FROM 1750 VOL. 5: THE EMERGENCE OF PENAL POLICY (1986); PETER YOUNG, PUNISHMENT, MONEY
AND THE LEGAL ORDER: AN ANALYSIS OF THE EMERGENCE OF MONETARY SANCTIONS WITH SPECIAL
REFERENCE TO SCOTLAND (1987).

age the increasing use of fines.102 For the efficient collection of fines, in most legal systems the court may impose imprisonment, public service, or confiscation of property if the fine is not paid.103 The fine imposed is not necessarily proportional to the actual harm caused by the offense, but rather to the severity of the offense.
Retribution may be relevant to fines if the fine is proportional to the social harm and reflects it. Deterrence may also be relevant to fines if the fine causes a deterrent loss of property. However, as noted above, both retribution and deterrence are irrelevant to artificial intelligence sentencing. Fines have no dominant rehabilitative value, although paying the fine may require additional work, leaving less free time in which to commit offenses. Accordingly, the question concerns the applicability of the fine to artificial intelligence systems: how can a fine be imposed upon them? The main difficulty is that artificial intelligence systems possess no money or other property of their own.
Whereas corporations possess property of their own, so that the fine is the easiest
penalty to impose upon them, artificial intelligence systems do not possess property.
First, the fine should be analyzed as to its roots of meaning. Second, these roots
should be searched for in artificial intelligence systems. Third, the punishment
should be adjusted to these roots in artificial intelligence systems. In addition, this
solution should address cases of ineffective fines, i.e., when there are difficulties in
collecting the fine.
Functionally, a fine is a forced contribution of valuable property to society. In most
cases it is reflected by money, but in certain legal systems it is reflected by other
valuable property. In certain legal systems the sum of money is determined by
evaluating the cost of a working day, week, or month of the defendant, so that the
fine matches this cost.104 Moreover, even if the fine is determined as an absolute
sum, the absence of this sum is translated by the offender, in most cases, into
additional working hours needed to make up for that absence. Thus, the fine
actually reflects working hours, days, weeks, or months, depending on the relevant
sum.
As discussed above in the context of public service, the productivity of an artificial
intelligence system may also be evaluated in working hours for the community. It is
true that artificial intelligence systems do not possess property of their own, but
they do possess the capability of working, which is valuable and may be measured
in monetary terms. For instance, a working hour of a medical expert artificial
intelligence system may be valued at $500.

102 Gail S. Funke, The Economics of Prison Crowding, 478 ANNALS OF THE AMERICAN ACADEMY OF POLITICAL AND SOCIAL SCIENCES 86 (1985); Thomas Mathiesen, The Viewer Society: Michel Foucault's 'Panopticon' Revisited, 1 THEORETICAL CRIMINOLOGY 215 (1997).
103 SALLY T. HILLSMAN AND SILVIA S. G. CASALE, ENFORCEMENT OF FINES AS CRIMINAL SANCTIONS: THE ENGLISH EXPERIENCE AND ITS RELEVANCE TO AMERICAN PRACTICE (1986); Judith A. Greene, Structuring Criminal Fines: Making an 'Intermediate Penalty' More Useful and Equitable, 13 JUSTICE SYSTEM JOURNAL 37 (1988); NIGEL WALKER AND NICOLA PADFIELD, SENTENCING: THEORY, LAW AND PRACTICE (1996).
104 MICHAEL H. TONRY AND KATHLEEN HATLESTAD, SENTENCING REFORM IN OVERCROWDED TIMES: A COMPARATIVE PERSPECTIVE (1997).

Let us assume that this particular artificial intelligence system has been fined
$1,000. In this case, the fine is equal to two working hours of the system.
Consequently, the system may pay the fine through the only payment measure it
possesses: working hours.
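By way of illustration only, the following minimal sketch (written in Python; the $500 hourly value and the $1,000 fine are simply the hypothetical figures of the example above) shows how a monetary fine could be converted into the working hours the system would contribute:

```python
import math

def fine_to_working_hours(fine_amount: float, hourly_value: float) -> int:
    """Convert a monetary fine into the working hours an artificial intelligence
    system would contribute to society; both figures are hypothetical."""
    if hourly_value <= 0:
        raise ValueError("hourly_value must be positive")
    # Round up so that the contributed work fully covers the fine.
    return math.ceil(fine_amount / hourly_value)

# Hypothetical example from the text: a $1,000 fine at $500 per working hour.
print(fine_to_working_hours(1_000, 500))  # prints 2
```

In this sketch any residual fraction of an hour is rounded up, so that the contributed work fully covers the fine; other rounding conventions are, of course, possible.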
The working hours are contributed to society in the same way as public service is
contributed. When a human offender does not have the required sum of money to
pay the fine, other penalties are imposed, as aforesaid. One of them is public
service, which may be measured as a certain number of working hours. Working
hours as a measure of payment may serve not only to pay the fine (through an exact
number of working hours), but also as an optional means of enforcing the fine
alongside public service, imprisonment, or any other relevant penalty.
It may be argued that this contribution to society is actually made by the
manufacturers or users of the artificial intelligence system, since they suffer the
absence of the system's activity while the fine is being paid. This is true not only
for artificial intelligence systems, but also for human offenders and corporations.
When a human offender pays the fine, the absence of the money (or of himself, if
additional working hours are required to make up for the missing money) is felt by
his family and relatives. When a corporation pays the fine, its resources are
unavailable to its workers, directors, clients, etc. This absence is part of paying the
fine, regardless of the identity of the offender; therefore, artificial intelligence
systems are not unique in this context.
Conclusion

Perhaps no criminal liability should be imposed on artificial intelligence systems,
at least not yet, but if the basic definitions of criminal law are not changed, this odd
consequence is inevitable. Modern criminal law is modular: criminal liability may
be imposed whenever all of its requirements are fulfilled. Each requirement has its
own definitions and terms to be fulfilled, as defined by the basic concepts of
criminal law.
The development of advanced technology, as reflected by artificial intelligence
technology, raises the plausible possibility that this technology fulfills the
requirements of criminal liability. No additional barrier or requirement is imposed
on non-human offenders (such as corporations) before criminal liability may be
imposed upon them. Consequently, examination of the elements of criminal
liability enables its imposition upon artificial intelligence technology.
Unlike the criminal liability of corporations, which was established through a
social decision weighing its benefits and disadvantages, the criminal law relating to
artificial intelligence technology is dragged into a situation created by old
definitions that are not necessarily suitable for the modern era. Most of these
definitions were redefined during the nineteenth century with reference to the realm
that existed then.
The realm has changed, at least technologically, since then. As a result, criminal
law is required to adapt itself to the new realm; otherwise it becomes irrelevant or
causes odd consequences, such as the imposition of criminal liability upon
machines. Most of these changes may be made through case law, which interprets
the current definitions to match the current realm, but some require legislative
changes, as criminal law does represent legal social control.

Cases

• Alford v. State, 866 S.W.2d 619 (Tex.Crim.App.1993)


• Allday, (1837) 8 Car. & P. 136, 173 E.R. 431
• Almon, (1770) 5 Burr. 2686, 98 E.R. 411
• Anderson v. State, 66 Okl.Cr. 291, 91 P.2d 794 (1939)
• Anderson, [1966] 2 Q.B. 110, [1966] 2 All E.R. 644, [1966] 2 W.L.R. 1195,
50 Cr. App. Rep. 216, 130 J.P. 318
• Andrews v. People, 800 P.2d 607 (Colo.1990)
• Ann v. State, 30 Tenn. 159, 11 Hum. 159 (1850)
• Arp v. State, 97 Ala. 5, 12 So. 301 (1893)
• Axtell, (1660) 84 E.R. 1060
• B. v. Director of Public Prosecutions, [2000] 2 A.C. 428, [2000] 1 All E.R. 833,
[2000] 2 W.L.R. 452, [2000] 2 Cr. App. Rep. 65, [2000] Crim. L.R. 403
• Bailey, (1818) Russ. & Ry. 341, 168 E.R. 835
• Barnes v. State, 19 Conn. 398 (1849)
• Barnfather v. Islington London Borough Council, [2003] E.W.H.C. 418
(Admin), [2003] 1 W.L.R. 2318, [2003] E.L.R. 263
• Bateman, [1925] All E.R. Rep. 45, 94 L.J.K.B. 791, 133 L.T. 730, 89 J.P. 162,
41 T.L.R. 557, 69 Sol. Jo. 622, 28 Cox. C.C. 33, 19 Cr. App. Rep. 8
• Batson v. State, 113 Nev. 669, 941 P.2d 478 (1997)
• Beason v. State, 96 Miss. 165, 50 So. 488 (1909)
• Benge, (1865) 4 F. & F. 504, 176 E.R. 665
• Birmingham, &c., Railway Co., (1842) 3 Q. B. 223, 114 E.R. 492
• Birney v. State, 8 Ohio Rep. 230 (1837)
• Blake v. United States, 407 F.2d 908 (5th Cir.1969)
• Blaker v. Tillstone, [1894] 1 Q.B. 345
• Bolden v. State, 171 S.W.3d 785 (2005)
• Bonder v. State, 752 A.2d 1169 (Del.2000)
• Boson v. Sandford, (1690) 2 Salkeld 440, 91 E.R. 382
• Boushea v. United States, 173 F.2d 131, 134 (8th Cir. 1949)
• Bradley v. State, 102 Tex.Crim.R. 41, 277 S.W. 147 (1926)
• Bratty v. Attorney-General for Northern Ireland, [1963] A.C. 386, 409, [1961]
3 All E.R. 523, [1961] 3 W.L.R. 965, 46 Cr. App. Rep 1
• Brett v. Rigden, (1568) 1 Plowd. 340, 75 E.R. 516


• Burnett, (1815) 4 M. & S. 272, 105 E.R. 835


• C, [2007] E.W.C.A. Crim. 1862, [2007] All E.R. (D) 91
• Caldwell, [1982] A.C. 341, [1981] 1 All E.R. 961, [1981] 2 W.L.R. 509, 73 Cr.
App. Rep. 13, 145 J.P. 211
• Calley v. Callaway, 519 F.2d 184 (5th Cir.1975)
• Campbell v. Wood, 18 F.3d 662 (9th Cir. 1994)
• Carter v. State, 376 P.2d 351 (Okl.Crim.App.1962)
• Carter v. United States, 530 U.S. 255, 120 S.Ct. 2159, 147 L.Ed.2d 203 (2000)
• Chance v. State, 685 A.2d 351 (Del.1996)
• Cheek v. United States, 498 U.S. 192, 111 S.Ct. 604, 112 L.Ed.2d 617 (1991)
• Childs v. State, 109 Nev. 1050, 864 P.2d 277 (1993)
• Chisholm v. Doulton, (1889) 22 Q.B.D. 736
• Chrystal v. Commonwealth, 72 Ky. 669, 9 Bush. 669 (1873)
• City of Chicago v. Mayer, 56 Ill.2d 366, 308 N.E.2d 601 (1974)
• Coal & C.R. v. Conley, 67 W.Va. 129, 67 S.E. 613 (1910)
• Commonwealth v. Boynton, 84 Mass. 160, 2 Allen 160 (1861)
• Commonwealth v. Fortner L.P. Gas Co., 610 S.W.2d 941 (Ky.App.1980)
• Commonwealth v. French, 531 Pa. 42, 611 A.2d 175 (1992)
• Commonwealth v. Goodman, 97 Mass. 117 (1867)
• Commonwealth v. Green, 477 Pa. 170, 383 A.2d 877 (1978)
• Commonwealth v. Herd, 413 Mass. 834, 604 N.E.2d 1294 (1992)
• Commonwealth v. Hill, 11 Mass. 136 (1814)
• Commonwealth v. Johnson, 412 Mass. 368, 589 N.E.2d 311 (1992)
• Commonwealth v. Leno, 415 Mass. 835, 616 N.E.2d 453 (1993)
• Commonwealth v. Lindsey, 396 Mass. 840, 489 N.E.2d 666 (1986)
• Commonwealth v. McIlwain School Bus Lines, Inc., 283 Pa.Super. 1, 423 A.2d
413 (1980)
• Commonwealth v. Mead, 92 Mass. 398 (1865)
• Commonwealth v. Monico, 373 Mass. 298, 366 N.E.2d 1241 (1977)
• Commonwealth v. New York Cent. & H. River R. Co., 206 Mass.
417, 92 N.E. 766 (1910)
• Commonwealth v. Perl, 50 Mass.App.Ct. 445, 737 N.E.2d 937 (2000)
• Commonwealth v. Pierce, 138 Mass. 165 (1884)
• Commonwealth v. Proprietors of New Bedford Bridge, 68 Mass. 339 (1854)
• Commonwealth v. Shumway, 72 Va.Cir. 481 (2007)
• Commonwealth v. Thompson, 6 Mass. 134, 6 Tyng 134 (1809)
• Commonwealth v. Walensky, 316 Mass. 383, 55 N.E.2d 902 (1944)
• Commonwealth v. Weaver, 400 Mass. 612, 511 N.E.2d 545 (1987)
• Cox v. State, 305 Ark. 244, 808 S.W.2d 306 (1991)
• Crawshaw, (1860) Bell. 303, 169 E.R. 1271
• Cutter v. State, 36 N.J.L. 125 (1873)
• Da Silva, [2006] E.W.C.A. Crim. 1654, [2006] 4 All E.R. 900, [2006] 2 Cr. App.
Rep. 517
• Dalloway, (1847) 2 Cox C.C. 273
• Daniel v. State, 187 Ga. 411, 1 S.E.2d 6 (1939)

• Director of Public Prosecutions v. Kent and Sussex Contractors Ltd., [1944] K.B. 146, [1944] 1 All E.R. 119
• Dixon, (1814) 3 M. & S. 11, 105 E.R. 516
• Dodd, (1736) Sess. Cas. 135, 93 E.R. 136
• Dotson v. State, 6 Cold. 545 (1869)
• Driver v. State, 2011 Tex. Crim. App. Lexis 4413 (2011)
• Duckett v. State, 966 P.2d 941 (Wyo.1998)
• Dudley and Stephens, [1884] 14 Q.B. D. 273
• Dugdale, (1853) 1 El. & Bl. 435, 118 E.R. 499
• Dusenbery v. Commonwealth, 263 S.E.2d 392 (Va. 1980)
• Dutton v. State, 123 Md. 373, 91 A. 417 (1914)
• Dyke v. Gower, [1892] 1 Q.B. 220
• Elk v. United States, 177 U.S. 529, 20 S.Ct. 729, 44 L.Ed. 874 (1900)
• English, [1999] A.C. 1, [1997] 4 All E.R. 545, [1997] 3 W.L.R. 959, [1998] 1 Cr.
App. Rep. 261, [1998] Crim. L.R. 48, 162 J.P. 1
• Esop, (1836) 7 Car. & P. 456, 173 E.R. 203
• Evans v. Bartlam, [1937] A.C. 473, 479, [1937] 2 All E.R. 646
• Evans v. State, 322 Md. 24, 585 A.2d 204 (1991)
• Fain v. Commonwealth, 78 Ky. 183, 39 Am.Rep. 213 (1879)
• Farmer v. People, 77 Ill. 322 (1875)
• Finney, (1874) 12 Cox C.C. 625
• Firth, (1990) 91 Cr. App. Rep. 217, 154 J.P. 576, [1990] Crim. L.R. 326
• Fitzpatrick v. Kelly, [1873] 8 Q.B. 337
• Fitzpatrick, [1977] N.I. 20
• Forbes, (1835) 7 Car. & P. 224, 173 E.R. 99
• Frey v. United States, 708 So.2d 918 (Fla.1998)
• G., [2003] U.K.H.L. 50, [2004] 1 A.C. 1034, [2003] 3 W.L.R. 1060, [2003] 4 All
E.R. 765, [2004] 1 Cr. App. Rep. 21, (2003) 167 J.P. 621, [2004] Crim. L. R. 369
• G., [2008] U.K.H.L. 37, [2009] A.C. 92
• Gammon (Hong Kong) Ltd. v. Attorney-General of Hong Kong, [1985] 1 A.C. 1,
[1984] 2 All E.R. 503, [1984] 3 W.L.R. 437, 80 Cr. App. Rep. 194, 26 Build
L.R. 159
• Gardiner, [1994] Crim. L.R. 455
• Godfrey v. State, 31 Ala. 323 (1858)
• Government of the Virgin Islands v. Smith, 278 F.2d 169 (3rd Cir.1960)
• Granite Construction Co. v. Superior Court, 149 Cal.App.3d 465, 197 Cal.Rptr.
3 (1983)
• Gray v. Lucas, 710 F.2d 1048 (5th Cir. 1983)
• Great Broughton (Inhabitants), (1771) 5 Burr. 2700, 98 E.R. 418
• Gregg v. Georgia, 428 U.S. 153, 96 S.Ct. 2909, 49 L.Ed.2d 859 (1976)
• Grout, (1834) 6 Car. & P. 629, 172 E.R. 1394
• Hall v. Brooklands Auto Racing Club, [1932] All E.R. 208, [1933] 1 K.B. 205,
101 L.J.K.B. 679, 147 L.T. 404, 48 T.L.R. 546
• Hardcastle v. Bielby, [1892] 1 Q.B. 709
• Hartson v. People, 125 Colo. 1, 240 P.2d 907 (1951)

• Hasan, [2005] U.K.H.L. 22, [2005] 4 All E.R. 685, [2005] 2 Cr. App. Rep.
314, [2006] Crim. L.R. 142, [2005] All E.R. (D) 299
• Heilman v. Commonwealth, 84 Ky. 457, 1 S.W. 731 (1886)
• Henderson v. Kibbe, 431 U.S. 145, 97 S.Ct. 1730, 52 L.Ed.2d 203 (1977)
• Henderson v. State, 11 Ala.App. 37, 65 So. 721 (1914)
• Hentzner v. State, 613 P.2d 821 (Alaska 1980)
• Hern v. Nichols, (1708) 1 Salkeld 289, 91 E.R. 256
• Hobbs v. Winchester Corporation, [1910] 2 K.B. 471
• Holbrook, (1878) 4 Q.B.D. 42
• Howard v. State, 73 Ga.App. 265, 36 S.E.2d 161 (1945)
• Huggins, (1730) 2 Strange 882, 93 E.R. 915
• Hughes v. Commonwealth, 19 Ky.L.R. 497, 41 S.W. 294 (1897)
• Hull, (1664) Kel. 40, 84 E.R. 1072
• Humphrey v. Commonwealth, 37 Va.App. 36, 553 S.E.2d 546 (2001)
• Hunt v. Nuth, 57 F.3d 1327 (4th Cir. 1995)
• Hunt v. State, 753 So.2d 609 (Fla.App.2000)
• Hunter v. State, 30 Tenn. 160, 1 Head 160 (1858)
• I.C.R. Haulage Ltd., [1944] K.B. 551, [1944] 1 All E.R. 691
• Ingram v. United States, 592 A.2d 992 (D.C.App.1991)
• Johnson v. State, 142 Ala. 70 (1904)
• Jones v. Hart, (1699) 2 Salkeld 441, 91 E.R. 382
• Jurco v. State, 825 P.2d 909 (Alaska App.1992)
• K., [2001] U.K.H.L. 41, [2002] 1 A.C. 462
• Kimoktoak v. State, 584 P.2d 25 (Alaska 1978)
• Kingston v. Booth, (1685) Skinner 228, 90 E.R. 105
• Knight, (1828) 1 L.C.C. 168, 168 E.R. 1000
• Kumar, [2004] E.W.C.A. Crim. 3207, [2005] 1 Cr. App. Rep. 566, [2005] Crim.
L.R. 470
• Laaman v. Helgemoe, 437 F.Supp. 269 (1977)
• Lambert v. California, 355 U.S. 225, 78 S.Ct. 240, 2 L.Ed.2d 228 (1957)
• Lambert v. State, 374 P.2d 783 (Okla.Crim.App.1962)
• Lane v. Commonwealth, 956 S.W.2d 874 (Ky.1997)
• Langforth Bridge, (1635) Cro. Car. 365, 79 E.R. 919
• Larsonneur, (1933) 24 Cr. App. R. 74, 97 J.P. 206, 149 L.T. 542
• Lawson, [1986] V.R. 515
• Leach, [1937] 1 All E.R. 319
• Lee v. State, 41 Tenn. 62, 1 Cold. 62 (1860)
• Leet v. State, 595 So.2d 959 (1991)
• Lennard’s Carrying Co. Ltd. v. Asiatic Petroleum Co. Ltd., [1915] A.C. 705
• In re Leroy, 285 Md. 508, 403 A.2d 1226 (1979)
• Lester v. State, 212 Tenn. 338, 370 S.W.2d 405 (1963)
• Levett, (1638) Cro. Car. 538
• Clifton (Inhabitants), (1794) 5 T.R. 498, 101 E.R. 280
• Liverpool (Mayor), (1802) 3 East 82, 102 E.R. 529
• Long v. Commonwealth, 23 Va.App. 537, 478 S.E.2d 324 (1996)

• Long v. State, 44 Del. 262, 65 A.2d 489 (1949)


• Longbottom, (1849) 3 Cox C. C. 439
• Lutwin v. State, 97 N.J.L. 67, 117 A. 164 (1922)
• Manser, (1584) 2 Co. Rep. 3, 76 E.R. 392
• Marshall, (1830) 1 Lewin 76, 168 E.R. 965
• Martin v. State, 90 Ala. 602, 8 So. 858 (1891)
• Mason v. State, 603 P.2d 1146 (Okl.Crim.App.1979)
• Matudi, [2004] E.W.C.A. Crim. 697
• Mavji, [1987] 2 All E.R. 758, [1987] 1 W.L.R. 1388, [1986] S.T.C. 508, Cr. App.
Rep. 31, [1987] Crim. L.R. 39
• Maxey v. United States, 30 App. D.C. 63, 80 (App. D.C. 1907)
• McClain v. State, 678 N.E.2d 104 (Ind.1997)
• McGrowther, (1746) 18 How. St. Tr. 394
• McMillan v. City of Jackson, 701 So.2d 1105 (Miss.1997)
• McNeil v. United States, 933 A.2d 354 (2007)
• Meade, [1909] 1 K.B. 895
• Meakin, (1836) 7 Car. & P. 297, 173 E.R. 131
• Mendez v. State, 575 S.W.2d 36 (Tex.Crim.App.1979)
• Michael, (1840) 2 Mood. 120, 169 E.R. 48
• Middleton v. Fowler, (1699) 1 Salkeld 282, 91 E.R. 247
• Mildmay, (1584) 1 Co. Rep. 175a, 76 E.R. 379
• Miller v. State, 3 Ohio St. Rep. 475 (1854)
• Minor v. State, 326 Md. 436, 605 A.2d 138 (1992)
• Mitchell v. State, 114 Nev. 1417, 971 P.2d 813 (1998)
• M’Naghten, (1843) 10 Cl. & Fin. 200, 8 E.R. 718
• Montgomery v. Commonwealth, 189 Ky. 306, 224 S.W. 878 (1920)
• Moore v. State, 25 Okl.Crim. 118, 218 P. 1102 (1923)
• Mouse, (1608) 12 Co. Rep. 63, 77 E.R. 1341
• Myers v. State, 1 Conn. 502 (1816)
• Nelson v. State, 597 P.2d 977 (Alaska 1979)
• New York & G.L.R. Co. v. State, 50 N.J.L. 303, 13 A. 1 (1888)
• New York Cent. & H.R.R. v. United States, 212 U.S. 481, 29 S.Ct. 304, 53 L.Ed.
613 (1909)
• Nutt, (1728) 1 Barn. K.B. 306, 94 E.R. 208
• O’Flaherty, [2004] E.W.C.A. Crim. 526, [2004] 2 Cr. App. Rep. 315
• Oxford, (1840) 9 Car. & P. 525, 173 E.R. 941
• Parish, (1837) 8 Car. & P. 94, 173 E.R. 413
• Parnell v. State, 912 S.W.2d 422, 424 (Ark. 1996)
• Pearson, (1835) 2 Lewin 144, 168 E.R. 1108
• Peebles v. State, 101 Ga. 585, 28 S.E. 920 (1897)
• People v. Bailey, 451 Mich. 657, 549 N.W.2d 325 (1996)
• People v. Brubaker, 53 Cal.2d 37, 346 P.2d 8 (1959)
• People v. Cabaltero, 31 Cal.App.2d 52, 87 P.2d 364 (1939)
• People v. Cherry, 307 N.Y. 308, 121 N.E.2d 238 (1954)
• People v. Clark, 8 N.Y.Cr. 169, 14 N.Y.S. 642 (1891)

• People v. Cooper, 194 Ill.2d 419, 252 Ill.Dec. 458, 743 N.E.2d 32 (2000)
• People v. Craig, 78 N.Y.2d 616, 578 N.Y.S.2d 471, 585 N.E.2d 783 (1991)
• People v. Daugherty, 40 Cal.2d 876, 256 P.2d 911 (1953)
• People v. Davis, 33 N.Y.2d 221, 351 N.Y.S.2d 663, 306 N.E.2d 787 (1973)
• People v. Decina, 2 N.Y.2d 133, 157 N.Y.S.2d 558, 138 N.E.2d 799 (1956)
• People v. Disimone, 251 Mich.App. 605, 650 N.W.2d 436 (2002)
• People v. Ferguson, 134 Cal.App. 41, 24 P.2d 965 (1933)
• People v. Freeman, 61 Cal.App.2d 110, 142 P.2d 435 (1943)
• People v. Handy, 198 Colo. 556, 603 P.2d 941 (1979)
• People v. Haney, 30 N.Y.2d 328, 333 N.Y.S.2d 403, 284 N.E.2d 564 (1972)
• People v. Harris, 29 Cal. 678 (1866)
• People v. Heitzman, 9 Cal.4th 189, 37 Cal.Rptr.2d 236, 886 P.2d 1229 (1994)
• People v. Henry, 239 Mich.App. 140, 607 N.W.2d 767 (1999)
• People v. Higgins, 5 N.Y.2d 607, 186 N.Y.S.2d 623, 159 N.E.2d 179 (1959)
• People v. Howk, 56 Cal.2d 687, 16 Cal.Rptr. 370, 365 P.2d 426 (1961)
• People v. Kemp, 150 Cal.App.2d 654, 310 P.2d 680 (1957)
• People v. Kessler, 57 Ill.2d 493, 315 N.E.2d 29 (1974)
• People v. Kirst, 168 N.Y. 19, 60 N.E. 1057 (1901)
• People v. Larkins, 2010 Mich. App. Lexis 1891 (2010)
• People v. Leonardi, 143 N.Y. 360, 38 N.E. 372 (1894)
• People v. Lisnow, 88 Cal.App.3d Supp. 21, 151 Cal.Rptr. 621 (1978)
• People v. Little, 41 Cal.App.2d 797, 107 P.2d 634 (1941)
• People v. Marshall, 362 Mich. 170, 106 N.W.2d 842 (1961)
• People v. Merhige, 212 Mich. 601, 180 N.W. 418 (1920)
• People v. Michalow, 229 N.Y. 325, 128 N.E. 228 (1920)
• People v. Minifie, 13 Cal.4th 1055, 56 Cal.Rptr.2d 133, 920 P.2d 1337 (1996)
• People v. Monks, 133 Cal. App. 440 (Cal. Dist. Ct. App. 1933)
• People v. Mutchler, 140 N.E. 820, 823 (Ill. 1923)
• People v. Newton, 8 Cal.App.3d 359, 87 Cal.Rptr. 394 (1970)
• People v. Pantano, 239 N.Y. 416, 146 N.E. 646 (1925)
• People v. Prettyman, 14 Cal.4th 248, 58 Cal.Rptr.2d 827, 926 P.2d 1013 (1996)
• People v. Richards, 269 Cal.App.2d 768, 75 Cal.Rptr. 597 (1969)
• People v. Sakow, 45 N.Y.2d 131, 408 N.Y.S.2d 27, 379 N.E.2d 1157 (1978)
• People v. Smith, 57 Cal. App. 4th 1470, 67 Cal. Rptr. 2d 604 (1997)
• People v. Sommers, 200 P.3d 1089 (2008)
• People v. Townsend, 214 Mich. 267, 183 N.W. 177 (1921)
• People v. Vogel, 46 Cal.2d 798, 299 P.2d 850 (1956)
• People v. Weiss, 256 App.Div. 162, 9 N.Y.S.2d 1 (1939)
• People v. Whipple, 100 Cal.App. 261, 279 P. 1008 (1929)
• People v. Williams, 56 Ill.App.2d 159, 205 N.E.2d 749 (1965)
• People v. Wilson, 66 Cal.2d 749, 59 Cal.Rptr. 156, 427 P.2d 820 (1967)
• People v. Young, 11 N.Y.2d 274, 229 N.Y.S.2d 1, 183 N.E.2d 319 (1962)
• Pierson v. State, 956 P.2d 1119 (Wyo.1998)
• Pigman v. State, 14 Ohio 555 (1846)
• Polston v. State, 685 P.2d 1 (Wyo.1984)

• Pope v. United States, 372 F.2d 710 (8th Cir.1970)


• Powell v. Texas, 392 U.S. 514, 88 S.Ct. 2145, 20 L.Ed.2d 1254 (1968)
• Price v. State, 50 Tex.Crim.R. 71, 94 S.W. 901 (1906)
• Proctor v. State, 15 Okl.Cr. 338, 176 P. 771 (1918)
• Provenzano v. Moore, 744 So.2d 413 (Fla. 1999)
• Provincial Motor Cab Company Ltd. v. Dunning, [1909] 2 K.B. 599
• Pugliese v. Commonwealth, 16 Va.App. 82, 428 S.E.2d 16 (1993)
• Quick, [1973] Q.B. 910, [1973] 3 All E.R. 347, [1973] 3 W.L.R. 26, 57 Cr. App.
Rep. 722, 137 J.P. 763
• R.I. Recreation Center v. Aetna Cas. & Surety Co., 177 F.2d 603 (1st Cir.1949)
• Rangel v. State, 2009 Tex.App. 1555 (2009)
• Ratzlaf v. United States, 510 U.S. 135, 114 S.Ct. 655, 126 L.Ed.2d 615 (1994)
• Read v. People, 119 Colo. 506, 205 P.2d 233 (1949)
• Redmond v. State, 36 Ark. 58 (1880)
• Reed v. State, 693 N.E.2d 988 (Ind.App.1998)
• Reniger v. Fogossa, (1551) 1 Plowd. 1, 75 E.R. 1
• Rice v. State, 8 Mo. 403 (1844)
• Richards, [2004] E.W.C.A. Crim. 192
• Richardson v. State, 697 N.E.2d 462 (Ind.1998)
• Ricketts v. State, 291 Md. 701, 436 A.2d 906 (1981)
• Roberts v. People, 19 Mich. 401 (1870)
• Robinson v. California, 370 U.S. 660, 82 S.Ct. 1417, 8 L.Ed.2d 758 (1962)
• Rollins v. State, 2009 Ark. 484, 347 S.W.3d 20 (2009)
• Roy v. United States, 652 A.2d 1098 (D.C.App.1995)
• Saik, [2006] U.K.H.L. 18, [2007] 1 A.C. 18
• Saintiff, (1705) 6 Mod. 255, 87 E.R. 1002
• Sam v. Commonwealth, 13 Va.App. 312, 411 S.E.2d 832 (1991)
• Sanders v. State, 466 N.E.2d 424 (Ind.1984)
• Scales v. United States, 367 U.S. 203, 81 S.Ct. 1469, 6 L.Ed.2d 782 (1961)
• Schmidt v. United States, 133 F. 257 (9th Cir.1904)
• Schuster v. State, 48 Ala. 199 (1872)
• Scott v. State, 71 Tex.Crim.R. 41, 158 S.W. 814 (1913)
• Seaboard Offshore Ltd. v. Secretary of State for Transport, [1994] 2 All E.R. 99,
[1994] 1 W.L.R. 541, [1994] 1 Lloyd’s Rep. 593
• Seaman v. Browning, (1589) 4 Leonard 123, 74 E.R. 771
• Severn and Wye Railway Co., (1819) 2 B. & Ald. 646, 106 E.R. 501
• Ex parte Smith, 135 Mo. 223, 36 S.W. 628 (1896)
• Smith v. California, 361 U.S. 147, 80 S.Ct. 215, 4 L.Ed.2d 205 (1959)
• Smith v. State, 83 Ala. 26, 3 So. 551 (1888)
• Spiers & Pond v. Bennett, [1896] 2 Q.B. 65
• Squire v. State, 46 Ind. 459 (1874)
• State v. Philbrick, 402 A.2d 59 (Me.1979)
• State v. Aaron, 4 N.J.L. 269 (1818)
• State v. Anderson, 141 Wash.2d 357, 5 P.3d 1247 (2000)
• State v. Anthuber, 201 Wis.2d 512, 549 N.W.2d 477 (App.1996)

• State v. Asher, 50 Ark. 427, 8 S.W. 177 (1888)


• State v. Audette, 149 Vt. 218, 543 A.2d 1315 (1988)
• State v. Ayer, 136 N.H. 191, 612 A.2d 923 (1992)
• State v. Barker, 128 W.Va. 744, 38 S.E.2d 346 (1946)
• State v. Barrett, 768 A.2d 929 (R.I.2001)
• State v. Blakely, 399 N.W.2d 317 (S.D.1987)
• State v. Bono, 128 N.J.Super. 254, 319 A.2d 762 (1974)
• State v. Bowen, 118 Kan. 31, 234 P. 46 (1925)
• State v. Brosnan, 221 Conn. 788, 608 A.2d 49 (1992)
• State v. Brown, 389 So.2d 48 (La.1980)
• State v. Bunkley, 202 Conn. 629, 522 A.2d 795 (1987)
• State v. Burrell, 135 N.H. 715, 609 A.2d 751 (1992)
• State v. Cain, 9 W. Va. 559 (1874)
• State v. Cameron, 104 N.J. 42, 514 A.2d 1302 (1986)
• State v. Campbell, 536 P.2d 105 (Alaska 1975)
• State v. Carrasco, 122 N.M. 554, 928 P.2d 939 (1996)
• State v. Case, 672 A.2d 586 (Me.1996)
• State v. Caswell, 771 A.2d 375 (Me.2001)
• State v. Champa, 494 A.2d 102 (R.I.1985)
• State v. Chicago, M. & St.P.R. Co., 130 Minn. 144, 153 N.W. 320 (1915)
• State v. Clottu, 33 Ind. 409 (1870)
• State v. Coffin, 128 N.M. 192, 991 P.2d 477 (1999)
• State v. Cram, 157 Vt. 466, 600 A.2d 733 (1991)
• State v. Crocker, 431 A.2d 1323 (Me.1981)
• State v. Crocker, 506 A.2d 209 (Me.1986)
• State v. Cude, 14 Utah 2d 287, 383 P.2d 399 (1963)
• State v. Curry, 45 Ohio St.3d 109, 543 N.E.2d 1228 (1989)
• State v. Daniels, 236 La. 998, 109 So.2d 896 (1958)
• State v. Dansinger, 521 A.2d 685 (Me.1987)
• State v. Daoud, 141 N.H. 142, 679 A.2d 577 (1996)
• State v. Dillon, 93 Idaho 698, 471 P.2d 553 (1970)
• State v. Dubina, 164 Conn. 95, 318 A.2d 95 (1972)
• State v. Ehlers, 98 N.J.L. 263, 119 A. 15 (1922)
• State v. Ellis, 232 Or. 70, 374 P.2d 461 (1962)
• State v. Elsea, 251 S.W.2d 650 (Mo.1952)
• State v. Etzweiler, 125 N.H. 57, 480 A.2d 870 (1984)
• State v. Evans, 134 N.H. 378, 594 A.2d 154 (1991)
• State v. Farley, 225 Kan. 127, 587 P.2d 337 (1978)
• State v. Fee, 126 N.H. 78, 489 A.2d 606 (1985)
• State v. Finnell, 101 N.M. 732, 688 P.2d 769 (1984)
• State v. Fletcher, 322 N.C. 415, 368 S.E.2d 633 (1988)
• State v. Follin, 263 Kan. 28, 947 P.2d 8 (1997)
• State v. Foster, 202 Conn. 520, 522 A.2d 277 (1987)
• State v. Foster, 91 Wash.2d 466, 589 P.2d 789 (1979)
• State v. Gallagher, 191 Conn. 433, 465 A.2d 323 (1983)

• State v. Gartland, 304 Mo. 87, 263 S.W. 165 (1924)


• State v. Garza, 259 Kan. 826, 916 P.2d 9 (1996)
• State v. George, 20 Del. 57, 54 A. 745 (1902)
• State v. Gish, 17 Idaho 341, 393 P.2d 342 (1964)
• State v. Goodall, 407 A.2d 268 (Me.1979)
• State v. Goodenow, 65 Me. 30 (1876)
• State v. Gray, 221 Conn. 713, 607 A.2d 391 (1992)
• State v. Great Works Mill. & Mfg. Co., 20 Me. 41, 37 Am.Dec.38 (1841)
• State v. Hadley, 65 Utah 109, 234 P. 940 (1925)
• State v. Harris, 222 N.W.2d 462 (Iowa 1974)
• State v. Hastings, 118 Idaho 854, 801 P.2d 563 (1990)
• State v. Havican, 213 Conn. 593, 569 A.2d 1089 (1990)
• State v. Herro, 120 Ariz. 604, 587 P.2d 1181 (1978)
• State v. Hinkle, 200 W.Va. 280, 489 S.E.2d 257 (1996)
• State v. Hobbs, 252 Iowa 432, 107 N.W.2d 238 (1961)
• State v. Hooker, 17 Vt. 658 (1845)
• State v. Hopkins, 147 Wash. 198, 265 P. 481 (1928)
• State v. Howley, 128 Idaho 874, 920 P.2d 391 (1996)
• State v. I. & M. Amusements, Inc., 10 Ohio App.2d 153, 226 N.E.2d 567 (1966)
• State v. J.P.S., 135 Wash.2d 34, 954 P.2d 894 (1998)
• State v. Jackson, 137 Wash.2d 712, 976 P.2d 1229 (1999)
• State v. Jackson, 346 Mo. 474, 142 S.W.2d 45 (1940)
• State v. Jacobs, 371 So.2d 801 (La.1979)
• State v. Jenner, 451 N.W.2d 710 (S.D.1990)
• State v. Johnson, 233 Wis. 668, 290 N.W. 159 (1940)
• State v. Kaiser, 260 Kan. 235, 918 P.2d 629 (1996)
• State v. Kee, 398 A.2d 384 (Me.1979)
• State v. Labato, 7 N.J. 137, 80 A.2d 617 (1951)
• State v. Lawrence, 97 N.C. 492, 2 S.E. 367 (1887)
• State v. Linscott, 520 A.2d 1067 (Me.1987)
• State v. Lockhart, 208 W.Va. 622, 542 S.E.2d 443 (2000)
• State v. Lucas, 55 Iowa 321, 7 N.W. 583 (1880)
• State v. Marley, 54 Haw. 450, 509 P.2d 1095 (1973)
• State v. Martin, 119 N.J. 2, 573 A.2d 1359 (1990)
• State v. McDowell, 312 N.W.2d 301 (N.D. 1981)
• State v. Mendoza, 709 A.2d 1030 (R.I.1998)
• State v. Mishne, 427 A.2d 450 (Me.1981)
• State v. Molin, 288 N.W.2d 232 (Minn.1979)
• State v. Moore, 158 N.J. 292, 729 A.2d 1021 (1999)
• State v. Murphy, 674 P.2d 1220 (Utah.1983)
• State v. Nargashian, 26 R.I. 299, 58 A. 953 (1904)
• State v. Nelson, 329 N.W.2d 643 (Iowa 1983)
• State v. Neuzil, 589 N.W.2d 708 (Iowa 1999)
• State v. Nickelson, 45 La.Ann. 1172, 14 So. 134 (1893)
• State v. Pereira, 72 Conn. App. 545, 805 A.2d 787 (2002)

• State v. Pincus, 41 N.J.Super. 454, 125 A.2d 420 (1956)


• State v. Reed, 205 Neb. 45, 286 N.W.2d 111 (1979)
• State v. Reese, 272 N.W.2d 863 (Iowa 1978)
• State v. Robinson, 132 Ohio App.3d 830, 726 N.E.2d 581 (1999)
• State v. Robinson, 20 W.Va. 713, 43 Am.Rep. 799 (1882)
• State v. Rocheville, 310 S.C. 20, 425 S.E.2d 32 (1993)
• State v. Rocker, 52 Haw. 336, 475 P.2d 684 (1970)
• State v. Runkles, 605 A.2d 111, 121 (Md. 1992)
• State v. Sargent, 156 Vt. 463, 594 A.2d 401 (1991)
• State v. Sasse, 6 S.D. 212, 60 N.W. 853 (1894)
• State v. Sawyer, 95 Conn. 34, 110 A. 461 (1920)
• State v. Schulz, 55 Ia. 628 (1881)
• State v. Sexton, 160 N.J. 93, 733 A.2d 1125 (1999)
• State v. Sheedy, 125 N.H. 108, 480 A.2d 887 (1984)
• State v. Silva-Baltazar, 125 Wash.2d 472, 886 P.2d 138 (1994)
• State v. Silveira, 198 Conn. 454, 503 A.2d 599 (1986)
• State v. Smith, 170 Wis.2d 701, 490 N.W.2d 40 (App.1992)
• State v. Smith, 219 N.W.2d 655 (Iowa 1974)
• State v. Smith, 260 Or. 349, 490 P.2d 1262 (1971)
• State v. Stepniewski, 105 Wis.2d 261, 314 N.W.2d 98 (1982)
• State v. Stewart, 624 N.W.2d 585 (Minn.2001)
• State v. Stoehr, 134 Wis.2d 66, 396 N.W.2d 177 (1986)
• State v. Striggles, 202 Iowa 1318, 210 N.W. 137 (1926)
• State v. Strong, 294 N.W.2d 319 (Minn.1980)
• State v. Thomas, 619 S.W.2d 513, 514 (Tenn. 1981)
• State v. Torphy, 78 Mo.App. 206 (1899)
• State v. Torres, 495 N.W.2d 678 (1993)
• State v. Totman, 80 Mo.App. 125 (1899)
• State v. VanTreese, 198 Iowa 984, 200 N.W. 570 (1924)
• State v. Warshow, 138 Vt. 22, 410 A.2d 1000 (1979)
• State v. Welsh, 8 Wash.App. 719, 508 P.2d 1041 (1973)
• State v. Wenger, 58 Ohio St.2d 336, 390 N.E.2d 801 (1979)
• State v. Whitman, 116 Fla. 196, 156 So. 705 (1934)
• State v. Whitcomb, 52 Iowa 85, 2 N.W. 970 (1879)
• State v. Wilchinski, 242 Conn. 211, 700 A.2d 1 (1997)
• State v. Wilson, 267 Kan. 550, 987 P.2d 1060 (1999)
• State v. Wyatt, 198 W.Va. 530, 482 S.E.2d 147 (1996)
• Stein v. State, 37 Ala. 123 (1861)
• Stephens, [1866] 1 Q.B. 702
• Stratford-upon-Avon Corporation, (1811) 14 East 348, 104 E.R. 636
• Studstill v. State, 7 Ga. 2 (1849)
• Sweet v. Parsley, [1970] A.C. 132, [1969] 1 All E.R. 347, [1969] 2 W.L.R. 470,
133 J.P. 188, 53 Cr. App. Rep. 221, 209 E.G. 703, [1969] E.G.D. 123
• Tate v. Commonwealth, 258 Ky. 685, 80 S.W.2d 817 (1935)
• Taylor v. State, 158 Miss. 505, 130 So. 502 (1930)

• Texaco Inc. v. Short, 454 U.S. 516, 102 S.Ct. 781, 70 L.Ed.2d 738 (1982)
• Thomas, (1837) 7 Car. & P. 817, 173 E.R. 356
• Thompson v. State, 44 S.W.3d 171 (Tex.App.2001)
• Thompson v. United States, 348 F.Supp.2d 398 (2005)
• Tift v. State, 17 Ga.App. 663, 88 S.E. 41 (1916)
• Treacy v. Director of Public Prosecutions, [1971] A.C. 537, 559, [1971] 1 All
E.R. 110, [1971] 2 W.L.R. 112, 55 Cr. App. Rep. 113, 135 J.P. 112
• Tully v. State, 730 P.2d 1206 (Okl.Crim.App.1986)
• Turberwill v. Stamp, (1697) Skinner 681, 90 E.R. 303
• In re Tyvonne, 211 Conn. 151, 558 A.2d 661 (1989)
• United States v. Alaska Packers’ Association, 1 Alaska 217 (1901)
• United States v. Albertini, 830 F.2d 985 (9th Cir.1987)
• United States v. Allegheny Bottling Company, 695 F.Supp. 856 (1988)
• United States v. Andrews, 75 F.3d 552 (9th Cir.1996)
• United States v. Arthurs, 73 F.3d 444 (1st Cir.1996)
• United States v. Bailey, 444 U.S. 394, 100 S.Ct. 624, 62 L.Ed.2d 575 (1980)
• United States v. Bakhtiari, 913 F.2d 1053 (2nd Cir.1990)
• United States v. Bryan, 483 F.2d 88, 92 (3d Cir. 1973)
• United States v. Buber, 62 M.J. 476 (2006)
• United States v. Calley, 48 C.M.R. 19, 22 U.S.C.M.A. 534 (1973)
• United States v. Campbell, 675 F.2d 815 (6th Cir.1982)
• United States v. Carter, 311 F.2d 934 (6th Cir.1963)
• United States v. Chandler, 393 F.2d 920 (4th Cir.1968)
• United States v. Contento-Pachon, 723 F.2d 691 (9th Cir.1984)
• United States v. Currens, 290 F.2d 751 (3rd Cir.1961)
• United States v. Doe, 136 F.3d 631 (9th Cir.1998)
• United States v. Dominguez-Ochoa, 386 F.3d 639 (2004)
• United States v. Dorrell, 758 F.2d 427 (9th Cir.1985)
• United States v. Dye Construction Co., 510 F.2d 78 (10th Cir.1975)
• United States v. Freeman, 25 Fed. Cas. 1208 (1827)
• United States v. Freeman, 357 F.2d 606 (2nd Cir.1966)
• United States v. Gomez, 81 F.3d 846 (9th Cir.1996)
• United States v. Greer, 467 F.2d 1064 (7th Cir.1972)
• United States v. Hanousek, 176 F.3d 1116 (9th Cir.1999)
• United States v. Heredia, 483 F.3d 913 (2006)
• United States v. Holmes, 26 F. Cas. 360, 1 Wall. Jr. 1 (1842)
• United States v. Jewell, 532 F.2d 697 (9th Cir.1976)
• United States v. John Kelso Co., 86 F. 304 (Cal.1898)
• United States v. Johnson, 956 F.2d 894 (9th Cir.1992)
• United States v. Kabat, 797 F.2d 580 (8th Cir.1986)
• United States v. Ladish Malting Co., 135 F.3d 484 (7th Cir.1998)
• United States v. LaFleur, 971 F.2d 200 (9th Cir.1991)
• United States v. Lampkins, 4 U.S.C.M.A. 31, 15 C.M.R. 31 (1954)
• United States v. Lee, 694 F.2d 649 (11th Cir.1983)
• United States v. Mancuso, 139 F.2d 90 (3rd Cir.1943)

• United States v. Maxwell, 254 F.3d 21 (1st Cir.2001)


• United States v. Meyers, 906 F. Supp. 1494 (1995)
• United States v. Moore, 486 F.2d 1139 (D.C.Cir.1973)
• United States v. Oakland Cannabis Buyers’ Cooperative, 532 U.S. 483, 121 S.Ct.
1711, 149 L.Ed.2d 722 (2001)
• United States v. Paolello, 951 F.2d 537 (3rd Cir.1991)
• United States v. Pomponio, 429 U.S. 10, 97 S.Ct. 22, 50 L.Ed.2d 12 (1976)
• United States v. Powell, 929 F.2d 724 (D.C.Cir.1991)
• United States v. Quaintance, 471 F. Supp.2d 1153 (2006)
• United States v. Ramon-Rodriguez, 492 F.3d 930 (2007)
• United States v. Randall, 104 Wash.D.C.Rep. 2249 (D.C.Super.1976)
• United States v. Randolph, 93 F.3d 656 (9th Cir.1996)
• United States v. Robertson, 33 M.J. 832 (1991)
• United States v. Ruffin, 613 F.2d 408 (2d Cir. 1979)
• United States v. Shapiro, 383 F.2d 680 (7th Cir.1967)
• United States v. Smith, 404 F.2d 720 (6th Cir.1968)
• United States v. Spinney, 65 F.3d 231 (1st Cir.1995)
• United States v. Sued-Jimenez, 275 F.3d 1 (1st Cir.2001)
• United States v. Thompson-Powell Drilling Co., 196 F.Supp. 571 (N.D.
Tex.1961)
• United States v. Tobon-Builes, 706 F.2d 1092 (11th Cir. 1983)
• United States v. Torres, 977 F.2d 321 (7th Cir.1992)
• United States v. Warner, 28 Fed. Cas. 404, 6 W.L.J. 255, 4 McLean 463 (1848)
• United States v. Wert-Ruiz, 228 F.3d 250 (3rd Cir.2000)
• United States v. Youts, 229 F.3d 1312 (10th Cir.2000)
• Vantandillo, (1815) 4 M. & S. 73, 105 E.R. 762
• Vaux, (1613) 1 Bulstrode 197, 80 E.R. 885
• Virgin Islands v. Joyce, 210 F. App. 208 (2006)
• Walter, (1799) 3 Esp. 21, 170 E.R. 524
• Webb, [2006] E.W.C.A. Crim. 2496, [2007] All E.R. (D) 406
• In re Welfare of C.R.M., 611 N.W.2d 802 (Minn.2000)
• Wheatley v. Commonwealth, 26 Ky.L.Rep. 436, 81 S.W. 687 (1904)
• Wieland v. State, 101 Md. App. 1, 643 A.2d 446 (1994)
• Wilkerson v. Utah, 99 U.S. (9 Otto) 130, 25 L.Ed. 345 (1878)
• Willet v. Commonwealth, 76 Ky. 230 (1877)
• Williams v. State, 70 Ga.App. 10, 27 S.E.2d 109 (1943)
• Williamson, (1807) 3 Car. & P. 635, 172 E.R. 579
• Wilson v. State, 24 S.W. 409 (Tex.Crim.App.1893)
• Wilson v. State, 777 S.W.2d 823 (Tex.App.1989)
• In re Winship, 397 U.S. 358, 90 S.Ct. 1068, 25 L.Ed.2d 368 (1970)
• Woodrow, (1846) 15 M. & W. 404, 153 E.R. 907
Bibliography

HARRY E. ALLEN, ERIC W. CARLSON AND EVALYN C. PARKS, CRITICAL ISSUES IN ADULT PROBATION
(1979)
Susan M. Allan, No Code Orders v. Resuscitation: The Decision to Withhold Life-Prolonging
Treatment from the Terminally Ill, 26 WAYNE L. REV. 139 (1980)
Peter Alldridge, The Doctrine of Innocent Agency, 2 CRIM. L. F. 45 (1990)
FRANCIS ALLEN, THE DECLINE OF THE REHABILITATIVE IDEAL (1981)
Marc Ancel, The System of Conditional Sentence or Sursis, 80 L. Q. REV. 334 (1964)
MARC ANCEL, SUSPENDED SENTENCE (1971)
Johannes Andenaes, The General Preventive Effects of Punishment, 114 U. PA. L. REV. 949 (1966)
Johannes Andenaes, The Morality of Deterrence, 37 U. CHI. L. REV. 649 (1970)
Susan Leigh Anderson, Asimov’s “Three Laws of Robotics” and Machine Metaethics, 22 AI SOC.
477 (2008)
Edward B. Arnolds and Norman F. Garland, The Defense of Necessity in Criminal Law: The Right
to Choose the Lesser Evil, 65 J. CRIM. L. & CRIMINOLOGY 289 (1974)
Andrew Ashworth, The Scope of Criminal Liability for Omissions, 84 L. Q. REV. 424 (1989)
Andrew Ashworth, Testing Fidelity to Legal Values: Official Involvement and Criminal Justice,
63 MOD. L. REV. 663 (2000)
ANDREW ASHWORTH, PRINCIPLES OF CRIMINAL LAW (5th ed., 2006)
Andrew Ashworth, Rehabilitation, PRINCIPLED SENTENCING: READINGS ON THEORY AND POLICY
1 (Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009)
ISAAC ASIMOV, I, ROBOT (1950)
ISAAC ASIMOV, THE REST OF THE ROBOTS (1964)
Tom Athanasiou, High-Tech Politics: The Case of Artificial Intelligence, 92 SOCIALIST REVIEW
7 (1987)
James Austin and Barry Krisberg, The Unmet Promise of Alternatives, 28 JOURNAL OF RESEARCH IN
CRIME AND DELINQUENCY (1982)
JOHN AUSTIN, THE PROVINCE OF JURISPRUDENCE DETERMINED (1832, 2000)
BERNARD BAARS, IN THE THEATRE OF CONSCIOUSNESS (1997)
John S. Baker Jr., State Police Powers and the Federalization of Local Crime, 72 TEMP. L. REV.
673 (1999)
STEPHEN BAKER, FINAL JEOPARDY: MAN VS. MACHINE AND THE QUEST TO KNOW EVERYTHING (2011)
CESARE BECCARIA, TRAITÉ DES DÉLITS ET DES PEINES (1764)
Hugo Adam Bedau, Abolishing the Death Penalty Even for the Worst Murderers, THE KILLING
STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE 40 (Austin Sarat ed., 1999)
RICHARD E. BELLMAN, AN INTRODUCTION TO ARTIFICIAL INTELLIGENCE: CAN COMPUTERS THINK? (1978)
JEREMY BENTHAM, AN INTRODUCTION TO THE PRINCIPLES OF MORALS AND LEGISLATION (1789, 1996)
Jeremy Bentham, Punishment and Deterrence, PRINCIPLED SENTENCING: READINGS ON THEORY AND
POLICY (Andrew von Hirsch, Andrew Ashworth and Julian Roberts eds., 3rd ed., 2009)
JOHN BIGGS, THE GUILTY MIND (1955)


Ned Block, What Intuitions About Homunculi Don’t Show, 3 BEHAVIORAL & BRAIN SCI. 425 (1980)
ROBERT M. BOHM, DEATHQUEST: AN INTRODUCTION TO THE THEORY AND PRACTICE OF CAPITAL PUNISH-
MENT IN THE UNITED STATES (1999)
Addison M. Bowman, Narcotic Addiction and Criminal Responsibility under Durham, 53 GEO.
L. J. 1017 (1965)
STEVEN BOX, POWER, CRIME AND MYSTIFICATION (1983)
RICHARD B. BRANDT, ETHICAL THEORY (1959)
Kathleen F. Brickey, Corporate Criminal Accountability: A Brief History and an Observation,
60 WASH. U. L. Q. 393 (1983)
Bruce Bridgeman, Brains + Programs = Minds, 3 BEHAVIORAL & BRAIN SCI. 427 (1980)
WALTER BROMBERG, FROM SHAMAN TO PSYCHOTHERAPIST: A HISTORY OF THE TREATMENT OF MENTAL
ILLNESS (1975)
Andrew G. Brooks and Ronald C. Arkin, Behavioral Overlays for Non-Verbal Communication
Expression on a Humanoid Robot, 22 AUTON. ROBOTS 55 (2007)
Timothy L. Butler, Can a Computer Be an Author – Copyright Aspects of Artificial Intelligence,
4 COMM. ENT. L.S. 707 (1982)
Kenneth L. Campbell, Psychological Blow Automatism: A Narrow Defence, 23 CRIM. L. Q.
342 (1981)
W. G. Carson, Some Sociological Aspects of Strict Liability and the Enforcement of Factory
Legislation, 33 MOD. L. REV. 396 (1970)
W. G. Carson, The Conventionalisation of Early Factory Crime, 7 INT’L J. OF SOCIOLOGY OF LAW
37 (1979)
Derrick Augustus Carter, Bifurcations of Consciousness: The Elimination of the Self-Induced
Intoxication Excuse, 64 MO. L. REV. 383 (1999)
MICHAEL CAVADINO AND JAMES DIGNAN, THE PENAL SYSTEM: AN INTRODUCTION (2002)
EUGENE CHARNIAK AND DREW MCDERMOTT, INTRODUCTION TO ARTIFICIAL INTELLIGENCE (1985)
Russell L. Christopher, Deterring Retributivism: The Injustice of “Just” Punishment, 96 NW. U. L.
REV. 843 (2002)
ARTHUR C. CLARKE, 2001: A SPACE ODYSSEY (1968)
John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry into the
Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981)
SIR EDWARD COKE, INSTITUTIONS OF THE LAWS OF ENGLAND – THIRD PART (6th ed., 1681, 1817, 2001)
Dana K. Cole, Expanding Felony-Murder in Ohio: Felony-Murder or Murder-Felony, 63 OHIO
ST. L. J. 15 (2002)
ROBERTA C. CRONIN, BOOT CAMPS FOR ADULT AND JUVENILE OFFENDERS: OVERVIEW AND UPDATE
(1994)
George R. Cross and Cary G. Debessonet, An Artificial Intelligence Application in the Law:
CCLIPS, A Computer Program that Processes Legal Information, 1 HIGH TECH. L.J. 329 (1986)
Homer D. Crotty, The History of Insanity as a Defence to Crime in English Common Law, 12 CAL.
L. REV. 105 (1924)
MICHAEL DALTON, THE COUNTREY JUSTICE (1618, 2003)
Donald Davidson, Turing’s Test, MODELLING THE MIND (1990)
Michael J. Davidson, Feminine Hormonal Defenses: Premenstrual Syndrome and Postpartum
Psychosis, 2000 ARMY LAWYER 5 (2000)
Richard Delgado, Ascription of Criminal States of Mind: Toward a Defense Theory for the
Coercively Persuaded (“Brainwashed”) Defendant, 63 MINN. L. REV. 1 (1978)
DANIEL C. DENNETT, BRAINSTORMS (1978)
DANIEL C. DENNETT, THE INTENTIONAL STANCE (1987)
Daniel C. Dennett, Evolution, Error, and Intentionality, THE FOUNDATIONS OF ARTIFICIAL INTELLI-
GENCE 190 (Derek Partridge and Yorick Wilks eds., 1990, 2006)
René Descartes, Discours de la Méthode pour Bien Conduire sa Raison et Chercher La Vérité
dans Les Sciences (1637)
Anthony M. Dillof, Unraveling Unknowing Justification, 77 NOTRE DAME L. REV. 1547 (2002)

Dolores A. Donovan and Stephanie M. Wildman, Is the Reasonable Man Obsolete? A Critical
Perspective on Self-Defense and Provocation, 14 LOY. L. A. L. REV. 435, 441 (1981)
AAGE GERHARDT DRACHMANN, THE MECHANICAL TECHNOLOGY OF GREEK AND ROMAN ANTIQUITY:
A STUDY OF THE LITERARY SOURCES (1963)
Joshua Dressler, Professor Delgado’s “Brainwashing” Defense: Courting a Determinist Legal
System, 63 MINN. L. REV. 335 (1978)
Joshua Dressler, Rethinking Heat of Passion: A Defense in Search of a Rationale, 73 J. CRIM. L. &
CRIMINOLOGY 421 (1982)
Joshua Dressler, Battered Women Who Kill Their Sleeping Tormenters: Reflections on
Maintaining Respect for Human Life while Killing Moral Monsters, CRIMINAL LAW THEORY –
DOCTRINES OF THE GENERAL PART 259 (Stephen Shute and A. P. Simester eds., 2005)
G. R. DRIVER AND JOHN C. MILES, THE BABYLONIAN LAWS, VOL. I: LEGAL COMMENTARY (1952)
ANTONY ROBIN DUFF, CRIMINAL ATTEMPTS (1996)
Fernand N. Dutile and Harold F. Moore, Mistake and Impossibility: Arranging Marriage Between
Two Difficult Partners, 74 NW. U. L. REV. 166 (1980)
Justice Ellis, Criminal Law as an Instrument of Social Control, 17 VICTORIA U. WELLINGTON
L. REV. 319 (1987)
GERTRUDE EZORSKY, PHILOSOPHICAL PERSPECTIVES ON PUNISHMENT (1972)
Judith Fabricant, Homicide in Response to a Threat of Rape: A Theoretical Examination of the
Rule of Justification, 11 GOLDEN GATE U. L. REV. 945 (1981)
DAVID P. FARRINGTON AND BRANDON C. WELSH, PREVENTING CRIME: WHAT WORKS FOR CHILDREN,
OFFENDERS, VICTIMS AND PLACES (2006)
EDWARD A. FEIGENBAUM AND PAMELA MCCORDUCK, THE FIFTH GENERATION: ARTIFICIAL INTELLIGENCE
AND JAPAN’S COMPUTER CHALLENGE TO THE WORLD (1983)
S. Z. Feller, Les Délits de Mise en Danger, 40 REV. INT. DE DROIT PÉNAL 179 (1969)
JAMIE FELLNER AND JOANNE MARINER, COLD STORAGE: SUPER-MAXIMUM SECURITY CONFINEMENT IN
INDIANA (1997)
Robert P. Fine and Gary M. Cohen, Is Criminal Negligence a Defensible Basis for Criminal
Liability?, 16 BUFF. L. REV. 749 (1966)
PAUL JOHANN ANSELM FEUERBACH, LEHRBUCH DES GEMEINEN IN DEUTSCHLAND GÜLTIGEN PEINLICHEN
RECHTS (1812, 2007)
Stuart Field and Nico Jorg, Corporate Liability and Manslaughter: Should We Be Going Dutch?,
[1991] Crim. L.R. 156 (1991)
Herbert Fingarette, Addiction and Criminal Responsibility, 84 YALE L. J. 413 (1975)
ARTHUR E. FINK, CAUSES OF CRIME: BIOLOGICAL THEORIES IN THE UNITED STATES, 1800–1915 (1938)
JOHN FINNIS, NATURAL LAW AND NATURAL RIGHTS (1980)
Brent Fisse and John Braithwaite, The Allocation of Responsibility for Corporate Crime: Individ-
ualism, Collectivism and Accountability, 11 SYDNEY L. REV. 468 (1988)
Peter Fitzpatrick, “Always More to Do”: Capital Punishment and the (De)Composition of Law,
THE KILLING STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE 117 (Austin Sarat ed.,
1999)
OWEN J. FLANAGAN, JR., THE SCIENCE OF THE MIND (2nd ed., 1991)
GEORGE P. FLETCHER, RETHINKING CRIMINAL LAW (1978, 2000)
George P. Fletcher, The Nature of Justification, ACTION AND VALUE IN CRIMINAL LAW 175 (Stephen
Shute, John Gardner and Jeremy Horder eds., 2003)
JERRY A. FODOR, MODULES, FRAMES, FRIDGEONS, SLEEPING DOGS AND THE MUSIC OF THE SPHERES, THE
ROBOT’S DILEMMA: THE FRAME PROBLEM IN ARTIFICIAL INTELLIGENCE (Zenon W. Pylyshyn ed.,
1987)
Keith Foren, Casenote: In Re Tyvonne M. Revisited: The Criminal Infancy Defense in Connecticut,
18 Q. L. REV. 733 (1999)
MICHEL FOUCAULT, MADNESS AND CIVILIZATION (1965)
MICHEL FOUCAULT, DISCIPLINE AND PUNISH: THE BIRTH OF THE PRISON (1977)
Sue Frank, Oklahoma Camp Stresses Structure and Discipline, 53 CORRECTIONS TODAY 102 (1991)

Lionel H. Frankel, Criminal Omissions: A Legal Microcosm, 11 WAYNE L. REV. 367 (1965)
Lionel H. Frankel, Narcotic Addiction, Criminal Responsibility and Civil Commitment, 1966 UTAH
L. REV. 581 (1966)
Robert M. French, Subcognition and the Limits of the Turing Test, 99 MIND 53 (1990)
K. W. M. Fulford, Value, Action, Mental Illness, and the Law, ACTION AND VALUE IN CRIMINAL LAW
279 (Stephen Shute, John Gardner and Jeremy Horder eds., 2003)
Gail S. Funke, The Economics of Prison Crowding, 478 ANNALS OF THE AMERICAN ACADEMY OF
POLITICAL AND SOCIAL SCIENCES 86 (1985)
JONATHAN M.E. GABBAI, COMPLEXITY AND THE AEROSPACE INDUSTRY: UNDERSTANDING EMERGENCE BY
RELATING STRUCTURE TO PERFORMANCE USING MULTI-AGENT SYSTEMS (Ph.D. Thesis, University of
Manchester, 2005)
Crystal A. Garcia, Using Palmer’s Global Approach to Evaluate Intensive Supervision Programs:
Implications for Practice, 4 CORRECTION MANAGEMENT QUARTERLY 60 (2000)
HOWARD GARDNER, THE MIND’S NEW SCIENCE: A HISTORY OF THE COGNITIVE REVOLUTION (1985)
DAVID GARLAND, THE CULTURE OF CONTROL: CRIME AND SOCIAL ORDER IN CONTEMPORARY SOCIETY
(2002)
Chas E. George, Limitation of Police Powers, 12 LAW. & BANKER & S. BENCH & B. REV.
740 (1919)
Jack P. Gibbs, A Very Short Step toward a General Theory of Social Control, 1985 AM. B. FOUND
RES. J. 607 (1985)
SANDER L. GILMAN, SEEING THE INSANE (1982)
P. R. Glazebrook, Criminal Omissions: The Duty Requirement in Offences Against the Person,
55 L. Q. REV. 386 (1960)
ROBERT M. GLORIOSO AND FERNANDO C. COLON OSORIO, ENGINEERING INTELLIGENT SYSTEMS: CONCEPTS
AND APPLICATIONS (1980)
Sheldon Glueck, Principles of a Rational Penal Code, 41 HARV. L. REV. 453 (1928)
SIR GERALD GORDON, THE CRIMINAL LAW OF SCOTLAND (1st ed., 1967)
GERHARDT GREBING, THE FINE IN COMPARATIVE LAW: A SURVEY OF 21 COUNTRIES (1982)
Kent Greenawalt, The Perplexing Borders of Justification and Excuse, 84 COLUM. L. REV. 1897
(1984)
Kent Greenawalt, Distinguishing Justifications from Excuses, 49 LAW & CONTEMP. PROBS.
89 (1986)
David F. Greenberg, The Corrective Effects of Corrections: A Survey of Evaluation, CORRECTIONS
AND PUNISHMENT 111 (David F. Greenberg ed., 1977)
Judith A. Greene, Structuring Criminal Fines: Making an ‘Intermediate Penalty’ More Useful and
Equitable, 13 JUSTICE SYSTEM JOURNAL 37 (1988)
Richard Gruner, To Let the Punishment Fit the Organization: Sanctioning Corporate Offenders
Through Corporate Probation, 16 AM. J. CRIM. L. 1 (1988)
JEROME HALL, GENERAL PRINCIPLES OF CRIMINAL LAW (2nd ed., 1960, 2005)
Jerome Hall, Intoxication and Criminal Responsibility, 57 HARV. L. REV. 1045 (1944)
Jerome Hall, Negligent Behaviour Should Be Excluded from Penal Liability, 63 COLUM. L. REV.
632 (1963)
Seymour L. Halleck, The Historical and Ethical Antecedents of Psychiatric Criminology, PSYCHI-
ATRIC ASPECTS OF CRIMINOLOGY 8 (Halleck and Bromberg eds., 1968)
Gabriel Hallevy, The Recidivist Wants to Be Punished – Punishment as an Incentive to Re-offend,
5 INT’L J. PUNISHMENT & SENTENCING 124 (2009)
GABRIEL HALLEVY, A MODERN TREATISE ON THE PRINCIPLE OF LEGALITY IN CRIMINAL LAW (2010)
Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to
Legal Social Control, 4 AKRON INTELL. PROP. J. 171 (2010)
Gabriel Hallevy, Unmanned Vehicles – Subordination to Criminal Law under the Modern Concept
of Criminal Liability, 21 J. L. INF. & SCI. 311 (2011)
Gabriel Hallevy, Therapeutic Victim-Offender Mediation within the Criminal Justice Process –
Sharpening the Evaluation of Personal Potential for Rehabilitation while Righting Wrongs

under the Alternative-Dispute-Resolution (ADR) Philosophy, 16 HARV. NEGOT. L. REV. 65 (2011)
GABRIEL HALLEVY, THE MATRIX OF DERIVATIVE CRIMINAL LIABILITY (2012)
GABRIEL HALLEVY, THE RIGHT TO BE PUNISHED – MODERN DOCTRINAL SENTENCING (2013)
GABRIEL HALLEVY, WHEN ROBOTS KILL – ARTIFICIAL INTELLIGENCE UNDER CRIMINAL LAW (2013)
John Harding, The Development of the Community Service, ALTERNATIVE STRATEGIES FOR COPING
WITH CRIME 164 (Norman Tutt ed., 1978)
HERBERT L. A. HART, PUNISHMENT AND RESPONSIBILITY: ESSAYS IN THE PHILOSOPHY OF LAW (1968)
Frank E. Hartung, Trends in the Use of Capital Punishment, 284(1) ANNALS OF THE AMERICAN
ACADEMY OF POLITICAL AND SOCIAL SCIENCE 8 (1952)
John Haugeland, Semantic Engines: An Introduction to Mind Design, MIND DESIGN 1 (John
Haugeland ed., 1981)
JOHN HAUGELAND, ARTIFICIAL INTELLIGENCE: THE VERY IDEA (1985)
PAMELA RAE HEATH, THE PK ZONE: A CROSS-CULTURAL REVIEW OF PSYCHOKINESIS (PK) (2003)
HERMANN VON HELMHOLTZ, THE FACTS OF PERCEPTION (1878)
John Lawrence Hill, A Utilitarian Theory of Duress, 84 IOWA L. REV. 275 (1999)
SALLY T. HILLSMAN AND SILVIA S. G. CASALE, ENFORCEMENT OF FINES AS CRIMINAL SANCTIONS: THE
ENGLISH EXPERIENCE AND ITS RELEVANCE TO AMERICAN PRACTICE (1986)
Harold L. Hirsh and Richard E. Donovan, The Right to Die: Medico-Legal Implications of In re
Quinlan, 30 RUTGERS L. REV. 267 (1977)
ANDREW VON HIRSCH, DOING JUSTICE: THE CHOICE OF PUNISHMENT (1976)
Andrew von Hirsch, Proportionate Sentences: A Desert Perspective, PRINCIPLED SENTENCING:
READINGS ON THEORY AND POLICY 115 (Andrew von Hirsch, Andrew Ashworth and Julian
Roberts eds., 3rd ed., 2009)
W. H. Hitchler, Necessity as a Defence in Criminal Cases, 33 DICK. L. REV. 138 (1929)
THOMAS HOBBES, LEVIATHAN OR THE MATTER, FORME AND POWER OF A COMMON WEALTH
ECCLESIASTICALL AND CIVIL (1651)
DOUGLAS R. HOFSTADTER, GÖDEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID (1979, 1999)
William Searle Holdsworth, English Corporation Law in the 16th and 17th Centuries, 31 YALE
L. J. 382 (1922)
WILLIAM SEARLE HOLDSWORTH, A HISTORY OF ENGLISH LAW (1923)
Winifred H. Holland, Automatism and Criminal Responsibility, 25 CRIM. L. Q. 95 (1982)
Clive R. Hollin, Treatment Programs for Offenders, 22 INT’L J. OF LAW & PSYCHIATRY 361 (1999)
OLIVER W. HOLMES, THE COMMON LAW (1881, 1923)
Oliver W. Holmes, Agency, 4 HARV. L. REV. 345 (1891)
HENRY HOLT, TELEKINESIS (2005)
Morton J. Horwitz, The Rise and Early Progressive Critique of Objective Causation, THE POLITICS
OF LAW: A PROGRESSIVE CRITIQUE 471 (David Kairys ed., 3rd ed., 1998)
JOHN HOWARD, THE STATE OF PRISONS IN ENGLAND AND WALES (1777, 1996)
FENG-HSIUNG HSU, BEHIND DEEP BLUE: BUILDING THE COMPUTER THAT DEFEATED THE WORLD CHESS
CHAMPION (2002)
BARBARA HUDSON, UNDERSTANDING JUSTICE: AN INTRODUCTION TO IDEAS, PERSPECTIVES AND
CONTROVERSIES IN MODERN PENAL THEORY (1996, 2003)
Graham Hughes, Criminal Omissions, 67 YALE L. J. 590 (1958)
BISHOP CARLETON HUNT, THE DEVELOPMENT OF THE BUSINESS CORPORATION IN ENGLAND 1800–1867
(1963)
Douglas Husak, Holistic Retribution, 88 CAL. L. REV. 991 (2000)
Douglas Husak, Retribution in Criminal Theory, 37 SAN DIEGO L. REV. 959 (2000)
Douglas Husak and Andrew von Hirsch, Culpability and Mistake of Law, ACTION AND VALUE IN
CRIMINAL LAW 157 (Stephen Shute, John Gardner and Jeremy Horder eds., 2003)
Peter Barton Hutt and Richard A. Merrill, Criminal Responsibility and the Right to Treatment for
Intoxication and Alcoholism, 57 GEO. L. J. 835 (1969)
RAY JACKENDOFF, CONSCIOUSNESS AND THE COMPUTATIONAL MIND (1987)

WILLIAM JAMES, THE PRINCIPLES OF PSYCHOLOGY (1890)


PHILLIP N. JOHNSON-LAIRD, MENTAL MODELS (1983)
Matthew Jones, Overcoming the Myth of Free Will in Criminal Law: The True Impact of the
Genetic Revolution, 52 DUKE L. J. 1031 (2003)
Sanford Kadish, Respect for Life and Regard for Rights in the Criminal Law, 64 CAL. L. REV.
871 (1976)
Sanford H. Kadish, Excusing Crime, 75 CAL. L. REV. 257 (1987)
Martin P. Kafka, Sex Offending and Sexual Appetite: The Clinical and Theoretical Relevance of
Hypersexual Desire, 47 INT’L J. OF OFFENDER THERAPY AND COMPARATIVE CRIMINOLOGY
439 (2003)
IMMANUEL KANT, OUR DUTIES TO ANIMALS (1780)
A.W.G. Kean, The History of the Criminal Liability of Children, 53 L. Q. REV. 364 (1937)
VOJISLAV KECMAN, LEARNING AND SOFT COMPUTING, SUPPORT VECTOR MACHINES, NEURAL NETWORKS
AND FUZZY LOGIC MODELS (2001)
Edwin R. Keedy, Ignorance and Mistake in the Criminal Law, 22 HARV. L. REV. 75 (1909)
Paul W. Keve, The Professional Character of the Presentence Report, 26 FEDERAL PROBATION
51 (1962)
ANTONY KENNY, WILL, FREEDOM AND POWER (1975)
ANTONY KENNY, WHAT IS FAITH? (1992)
Roy D. King, The Rise and Rise of Supermax: An American Solution in Search of a Problem?,
1 PUNISHMENT AND SOCIETY 163 (1999)
RAYMOND KURZWEIL, THE AGE OF INTELLIGENT MACHINES (1990)
NICOLA LACEY AND CELIA WELLS, RECONSTRUCTING CRIMINAL LAW – CRITICAL PERSPECTIVES ON CRIME
AND THE CRIMINAL PROCESS (2nd ed. 1998)
NICOLA LACEY, CELIA WELLS AND OLIVER QUICK, RECONSTRUCTING CRIMINAL LAW (3rd ed., 2003,
2006)
J. G. LANDELS, ENGINEERING IN THE ANCIENT WORLD (rev. ed., 2000)
William S. Laufer, Corporate Bodies and Guilty Minds, 43 EMORY L. J. 647 (1994)
GOTTFRIED WILHELM LEIBNIZ, CHARACTERISTICA UNIVERSALIS (1676)
Julie Leibrich, Burt Galaway and Yvonne Underhill, Community Sentencing in New Zealand:
A Survey of Users, 50 FEDERAL PROBATION 55 (1986)
LAWRENCE LESSIG, CODE AND OTHER LAWS OF CYBERSPACE (1999)
DAVID LEVY, ROBOTS UNLIMITED: LIFE IN A VIRTUAL AGE (2006)
DAVID LEVY, LOVE AND SEX WITH ROBOTS: THE EVOLUTION OF HUMAN-ROBOT RELATIONSHIPS (2007)
DAVID LEVY AND MONTY NEWBORN, HOW COMPUTERS PLAY CHESS (1991)
K. W. Lidstone, Social Control and the Criminal Law, 27 BRIT. J. CRIMINOLOGY 31 (1987)
DOUGLAS S. LIPTON, ROBERT MARTINSON AND JUDITH WILKS, THE EFFECTIVENESS OF CORRECTIONAL
TREATMENT: A SURVEY OF TREATMENT EVALUATION STUDIES (1975)
Frederick J. Ludwig, Rationale of Responsibility for Young Offenders, 29 NEB. L. REV. 521 (1950)
GEORGE F. LUGER, ARTIFICIAL INTELLIGENCE: STRUCTURES AND STRATEGIES FOR COMPLEX PROBLEM
SOLVING (2001)
GEORGE F. LUGER AND WILLIAM A. STUBBLEFIELD, ARTIFICIAL INTELLIGENCE: STRUCTURES AND
STRATEGIES FOR COMPLEX PROBLEM SOLVING (6th ed., 2008)
William G. Lycan, Introduction, MIND AND COGNITION 3 (William G. Lycan ed., 1990)
Gerard E. Lynch, The Role of Criminal Law in Policing Corporate Misconduct, 60 LAW &
CONTEMP. PROBS. 23 (1997)
Peter Lynch, The Origins of Computer Weather Prediction and Climate Modeling, 227 JOURNAL OF
COMPUTATIONAL PHYSICS 3431 (2008)
DAVID LYONS, FORMS AND LIMITS OF UTILITARIANISM (1965)
David Lyons, Open Texture and the Possibility of Legal Interpretation, 18 LAW PHIL. 297 (1999)
DORIS LAYTON MACKENZIE AND EUGENE E. HEBERT, CORRECTIONAL BOOT CAMPS: A TOUGH INTERME-
DIATE SANCTION (1996)
BRONISLAW MALINOWSKI, CRIME AND CUSTOM IN SAVAGE SOCIETY (1959, 1982)

DAVID MANNERS AND TSUGIO MAKIMOTO, LIVING WITH THE CHIP (1995)
Dan Markel, Are Shaming Punishments Beautifully Retributive? Retributivism and the
Implications for the Alternative Sanctions Debate, 54 VAND. L. REV. 2157 (2001)
Robert Martinson, What Works? Questions and Answers about Prison Reform, 35 PUBLIC INTEREST
22 (1974)
Thomas Mathiesen, The Viewer Society: Michel Foucault’s ‘Panopticon’ Revisited, 1 THEORETICAL
CRIMINOLOGY 215 (1997)
Peter McCandless, Liberty and Lunacy: The Victorians and Wrongful Confinement, MADHOUSES,
MAD-DOCTORS, AND MADMEN: THE SOCIAL HISTORY OF PSYCHIATRY IN THE VICTORIAN ERA (Scull
ed., 1981)
Aileen McColgan, In Defence of Battered Women who Kill, 13 OXFORD J. LEGAL STUD. 508 (1993)
Sean McConville, The Victorian Prison: England 1865–1965, THE OXFORD HISTORY OF THE PRISON
131 (Norval Morris and David J. Rothman eds., 1995)
J. R. MCDONALD, G. M. BURT, J. S. ZIELINSKI AND S. D. J. MCARTHUR, INTELLIGENT KNOWLEDGE
BASED SYSTEM IN ELECTRICAL POWER ENGINEERING (1997)
COLIN MCGINN, THE PROBLEM OF CONSCIOUSNESS: ESSAYS TOWARDS A RESOLUTION (1991)
KARL MENNINGER, MARTIN MAYMAN AND PAUL PRUYSER, THE VITAL BALANCE (1963)
Alan C. Michaels, Imposing Constitutional Limits on Strict Liability: Lessons from the American
Experience, APPRAISING STRICT LIABILITY 218 (A. P. Simester ed., 2005)
DONALD MICHIE AND RORY JOHNSTON, THE CREATIVE COMPUTER (1984)
Justine Miller, Criminal Law – An Agency for Social Control, 43 YALE L. J. 691 (1934)
MARVIN MINSKY, THE SOCIETY OF MIND (1986)
JESSICA MITFORD, KIND AND USUAL PUNISHMENT: THE PRISON BUSINESS (1974)
Patrick Montague, Self-Defense and Choosing Between Lives, 40 PHIL. STUD. 207 (1981)
MICHAEL MOORE, LAW AND PSYCHIATRY: RETHINKING THE RELATIONSHIP (1984)
George Mora, Historical and Theoretical Trends in Psychiatry, 1 COMPREHENSIVE TEXTBOOK OF
PSYCHIATRY 1 (Alfred M. Freedman, Harold Kaplan and Benjamin J. Sadock eds., 2nd ed.,
1975)
HANS MORAVEC, ROBOT: MERE MACHINE TO TRANSCENDENT MIND (1999)
Norval Morris, Somnambulistic Homicide: Ghosts, Spiders, and North Koreans, 5 RES JUDICATAE
29 (1951)
TIM MORRIS, COMPUTER VISION AND IMAGE PROCESSING (2004)
Gerhard O. W. Mueller, Mens Rea and the Corporation – A Study of the Model Penal Code
Position on Corporate Criminal Liability, 19 U. PITT. L. REV. 21 (1957)
Michael A. Musmanno, Are Subordinate Officials Penally Responsible for Obeying Superior
Orders which Direct Commission of Crime?, 67 DICK. L. REV. 221 (1963)
MONTY NEWBORN, DEEP BLUE (2002)
ALLEN NEWELL AND HERBERT A. SIMON, HUMAN PROBLEM SOLVING (1972)
EDWARD NORBECK, RELIGION IN PRIMITIVE SOCIETY (1961)
Anne Norton, After the Terror: Mortality, Equality, Fraternity, THE KILLING STATE – CAPITAL
PUNISHMENT IN LAW, POLITICS, AND CULTURE 27 (Austin Sarat ed., 1999)
Scott T. Noth, A Penny for Your Thoughts: Post-Mitchell Hate Crime Laws Confirm a Mutating
Effect upon Our First Amendment and the Government’s Role in Our Lives, 10 REGENT U. L.
REV. 167 (1998)
DAVID ORMEROD, SMITH & HOGAN CRIMINAL LAW (11th ed., 2005)
N. P. PADHY, ARTIFICIAL INTELLIGENCE AND INTELLIGENT SYSTEMS (2005, 2009)
WILLIAM PALEY, A TREATISE ON THE LAW OF PRINCIPAL AND AGENT (2nd ed., 1847)
DAN W. PATTERSON, INTRODUCTION TO ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS (1990)
Monrad G. Paulsen, Intoxication as a Defense to Crime, 1961 U. ILL. L. F. 1 (1961)
Rollin M. Perkins, Negative Acts in Criminal Law, 22 IOWA L. REV. 659 (1937)
Rollin M. Perkins, Ignorance and Mistake in Criminal Law, 88 U. PA. L. REV. 35 (1940)
Rollin M. Perkins, “Knowledge” as a Mens Rea Requirement, 29 HASTINGS L. J. 953 (1978)
Rollin M. Perkins, Impelled Perpetration Restated, 33 HASTINGS L. J. 403 (1981)
ANTHONY M. PLATT, THE CHILD SAVERS: THE INVENTION OF DELINQUENCY (2nd ed., 1969, 1977)
Anthony Platt and Bernard L. Diamond, The Origins of the “Right and Wrong” Test of Criminal
Responsibility and Its Subsequent Development in the United States: An Historical Survey,
54 CAL. L. REV. 1227 (1966)
FREDERICK POLLOCK AND FREDERICK WILLIAM MAITLAND, THE HISTORY OF ENGLISH LAW BEFORE THE
TIME OF EDWARD I (rev. 2nd ed., 1898)
Stanislaw Pomorski, On Multiculturalism, Concepts of Crime, and the “De Minimis” Defense,
1997 B.Y.U. L. REV. 51 (1997)
JAMES COWLES PRICHARD, A TREATISE ON INSANITY AND OTHER DISORDERS AFFECTING THE MIND (1835)
GUSTAV RADBRUCH, DER HANDLUNGSBEGRIFF IN SEINER BEDEUTUNG FÜR DAS STRAFRECHTSSYSTEM
(1904)
LEON RADZINOWICZ, A HISTORY OF ENGLISH CRIMINAL LAW AND ITS ADMINISTRATION FROM 1750 VOL. 1:
THE MOVEMENT FOR REFORM (1948)
LEON RADZINOWICZ AND ROGER HOOD, A HISTORY OF ENGLISH CRIMINAL LAW AND ITS ADMINISTRATION
FROM 1750 VOL. 5: THE EMERGENCE OF PENAL POLICY (1986)
Craig W. Reynolds, Flocks, Herds, and Schools: A Distributed Behavioral Model, 21 COMPUT. GRAPH.
(1987)
ELAINE RICH AND KEVIN KNIGHT, ARTIFICIAL INTELLIGENCE (2nd ed., 1991)
FIORI RINALDI, IMPRISONMENT FOR NON-PAYMENT OF FINES (1976)
Edwina L. Rissland, Artificial Intelligence and Law: Stepping Stones to a Model of Legal
Reasoning, 99 YALE L. J. 1957 (1990)
CHASE RIVELAND, SUPERMAX PRISONS: OVERVIEW AND GENERAL CONSIDERATIONS (1999)
OLIVIA F. ROBINSON, THE CRIMINAL LAW OF ANCIENT ROME (1995)
Paul H. Robinson, A Theory of Justification: Societal Harm as a Prerequisite for Criminal
Liability, 23 U.C.L.A. L. REV. 266 (1975)
Paul H. Robinson and John M. Darley, The Utility of Desert, 91 NW. U. L. REV. 453 (1997)
Paul H. Robinson, Testing Competing Theories of Justification, 76 N.C. L. REV. 1095 (1998)
P. ROGERS, LAW ON THE BATTLEFIELD (1996)
Vashon R. Rogers Jr., De Minimis Non Curat Lex, 21 ALBANY L. J. 186 (1880)
GEORGE ROSEN, MADNESS IN SOCIETY: CHAPTERS IN THE HISTORICAL SOCIOLOGY OF MENTAL ILLNESS
(1969)
Laurence H. Ross, Deterrence Regained: The Cheshire Constabulary’s “Breathalyser Blitz”,
6 J. LEGAL STUD. 241 (1977)
DAVID J. ROTHMAN, CONSCIENCE AND CONVENIENCE: THE ASYLUM AND ITS ALTERNATIVES IN
PROGRESSIVE AMERICA (1980)
David J. Rothman, For the Good of All: The Progressive Tradition in Prison Reform, HISTORY AND
CRIME 271 (James A. Inciardi and Charles E. Faupel eds., 1980)
CLAUS ROXIN, STRAFRECHT – ALLGEMEINER TEIL I (4. Aufl., 2006)
STUART J. RUSSELL AND PETER NORVIG, ARTIFICIAL INTELLIGENCE: A MODERN APPROACH (2002)
WILLIAM OLDNALL RUSSELL, A TREATISE ON CRIMES AND MISDEMEANORS (1843, 1964)
Cheyney C. Ryan, Self-Defense, Pacifism, and the Possibility of Killing, 93 ETHICS 508 (1983)
GILBERT RYLE, THE CONCEPT OF MIND (1954)
Francis Bowes Sayre, Criminal Responsibility for the Acts of Another, 43 HARV. L. REV.
689 (1930)
Francis Bowes Sayre, Mens Rea, 45 HARV. L. REV. 974 (1932)
Francis Bowes Sayre, Public Welfare Offenses, 33 COLUM. L. REV. 55 (1933)
ROBERT J. SCHALKOFF, ARTIFICIAL INTELLIGENCE: AN ENGINEERING APPROACH (1990)
Roger C. Schank, What is AI, Anyway?, THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE 3 (Derek
Partridge and Yorick Wilks eds., 1990, 2006)
Samuel Scheffler, Justice and Desert in Liberal Theory, 88 CAL. L. REV. 965 (2000)
G. Schoenfeld, In Defence of Retribution in the Law, 35 PSYCHOANALYTIC Q. 108 (1966)
FRANK SCHMALLEGER, CRIMINAL JUSTICE TODAY: AN INTRODUCTORY TEXT FOR THE 21ST CENTURY
(2003)
WILLIAM ROBERT SCOTT, THE CONSTITUTION AND FINANCE OF ENGLISH, SCOTTISH AND IRISH JOINT-STOCK
COMPANIES TO 1720 (1912)
John R. Searle, Minds, Brains & Programs, 3 BEHAVIORAL & BRAIN SCI. 417 (1980)
JOHN R. SEARLE, MINDS, BRAINS AND SCIENCE (1984)
JOHN R. SEARLE, THE REDISCOVERY OF THE MIND (1992)
LEE SECHREST, SUSAN O. WHITE AND ELIZABETH D. BROWN, THE REHABILITATION OF CRIMINAL
OFFENDERS: PROBLEMS AND PROSPECTS (1979)
Richard P. Seiter and Karen R. Kadela, Prisoner Reentry: What Works, What Does Not, and What
Is Promising, 49 CRIME AND DELINQUENCY 360 (2003)
THORSTEN J. SELLIN, SLAVERY AND THE PENAL SYSTEM (1976)
Robert N. Shapiro, Of Robots, Persons, and the Protection of Religious Beliefs, 56 S. CAL. L. REV.
1277 (1983)
ROSEMARY SHEEHAN, GILL MCIVOR AND CHRIS TROTTER, WHAT WORKS WITH WOMEN OFFENDERS
(2007)
LAWRENCE W. SHERMAN, DAVID P. FARRINGTON, DORIS LAYTON MACKENZIE AND BRANDON C. WELSH,
EVIDENCE-BASED CRIME PREVENTION (2006)
Nancy Sherman, The Place of the Emotions in Kantian Morality, IDENTITY, CHARACTER, AND
MORALITY 145 (Owen Flanagan and Amelie O. Rorty eds., 1990)
Stephen Shute, Knowledge and Belief in the Criminal Law, CRIMINAL LAW THEORY – DOCTRINES OF
THE GENERAL PART 182 (Stephen Shute and A.P. Simester eds., 2005)
R. U. Singh, History of the Defence of Drunkenness in English Criminal Law, 49 LAW Q. REV.
528 (1933)
VIEDA SKULTANS, ENGLISH MADNESS: IDEAS ON INSANITY, 1580–1890 (1979)
Aaron Sloman, Motives, Mechanisms, and Emotions, THE PHILOSOPHY OF ARTIFICIAL INTELLIGENCE
231 (Margaret A. Boden ed., 1990)
JOHN J.C. SMART AND BERNARD WILLIAMS, UTILITARIANISM – FOR AND AGAINST (1973)
RUDOLPH SOHM, THE INSTITUTES OF ROMAN LAW (3rd ed., 1907)
Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231 (1992)
MILAN SONKA, VACLAV HLAVAC AND ROGER BOYLE, IMAGE PROCESSING, ANALYSIS, AND MACHINE
VISION (2008)
WALTER W. SOROKA, ANALOG METHODS IN COMPUTATION AND SIMULATION (1954)
John R. Spencer and Antje Pedain, Approaches to Strict and Constructive Liability in Continental
Criminal Law, APPRAISING STRICT LIABILITY 237 (A. P. Simester ed., 2005)
Jane Stapleton, Law, Causation and Common Sense, 8 OXFORD J. LEGAL STUD. 111 (1988)
G.R. Sullivan, Knowledge, Belief, and Culpability, CRIMINAL LAW THEORY – DOCTRINES OF THE
GENERAL PART 207 (Stephen Shute and A.P. Simester eds., 2005)
G. R. Sullivan, Strict Liability for Criminal Offences in England and Wales Following
Incorporation into English Law of the European Convention on Human Rights, APPRAISING
STRICT LIABILITY 195 (A. P. Simester ed., 2005)
ROGER J. SULLIVAN, IMMANUEL KANT’S MORAL THEORY (1989)
Victor Tadros, Recklessness and the Duty to Take Care, CRIMINAL LAW THEORY – DOCTRINES OF THE
GENERAL PART 227 (Stephen Shute and A.P. Simester eds., 2005)
STEVEN L. TANIMOTO, ELEMENTS OF ARTIFICIAL INTELLIGENCE: AN INTRODUCTION USING LISP (1987)
Lawrence Taylor and Katharina Dalton, Premenstrual Syndrome: A New Criminal Defense?,
19 CAL. W. L. REV. 269 (1983)
JUDITH JARVIS THOMSON, RIGHTS, RESTITUTION AND RISK: ESSAYS IN MORAL THEORY (1986)
BENJAMIN THORPE, ANCIENT LAWS AND INSTITUTES OF ENGLAND (1840, 2004)
Lawrence P. Tiffany and Carl A. Anderson, Legislating the Necessity Defense in Criminal Law,
52 DENV. L. J. 839 (1975)
Janet A. Tighe, Francis Wharton and the Nineteenth Century Insanity Defense: The Origins of a
Reform Tradition, 27 AM. J. LEGAL HIST. 223 (1983)
Jackson Toby, Is Punishment Necessary?, 55 J. CRIM. L. CRIMINOLOGY & POLICE SCI. 332 (1964)
MICHAEL H. TONRY AND KATHLEEN HATLESTAD, SENTENCING REFORM IN OVERCROWDED TIMES:
A COMPARATIVE PERSPECTIVE (1997)
Richard H. S. Tur, Subjectivism and Objectivism: Towards Synthesis, ACTION AND VALUE IN
CRIMINAL LAW 213 (Stephen Shute, John Gardner and Jeremy Horder eds., 2003)
Alan Turing, Computing Machinery and Intelligence, 59 MIND 433 (1950)
AUSTIN TURK, CRIMINALITY AND LEGAL ORDER (1969)
HORSFALL J. TURNER, THE ANNALS OF THE WAKEFIELD HOUSE OF CORRECTIONS FOR THREE HUNDRED
YEARS (1904)
ALAN TYREE, EXPERT SYSTEMS IN LAW (1989)
Mark S. Umbreit, Community Service Sentencing: Jail Alternatives or Added Sanction?, 45
FEDERAL PROBATION 3 (1981)
Max L. Veech and Charles R. Moon, De Minimis non Curat Lex, 45 MICH. L. REV. 537 (1947)
RUSS VERSTEEG, EARLY MESOPOTAMIAN LAW (2000)
John Barker Waite, The Law of Arrest, 24 TEX. L. REV. 279 (1946)
NIGEL WALKER AND NICOLA PADFIELD, SENTENCING: THEORY, LAW AND PRACTICE (1996)
Andrew Walkover, The Infancy Defense in the New Juvenile Court, 31 U.C.L.A. L. REV.
503 (1984)
Steven Walt and William S. Laufer, Why Personhood Doesn’t Matter: Corporate Criminal
Liability and Sanctions, 18 AM. J. CRIM. L. 263 (1991)
Mary Anne Warren, On the Moral and Legal Status of Abortion, ETHICS IN PRACTICE (Hugh
Lafollette ed., 1997)
DONALD A. WATERMAN, A GUIDE TO EXPERT SYSTEMS (1986)
MAX WEBER, ECONOMY AND SOCIETY: AN OUTLINE OF INTERPRETIVE SOCIOLOGY (1968)
HENRY WEIHOFEN, MENTAL DISORDER AS A CRIMINAL DEFENSE (1954)
Paul Weiss, On the Impossibility of Artificial Intelligence, 44 REV. METAPHYSICS 335 (1990)
Celia Wells, Battered Woman Syndrome and Defences to Homicide: Where Now?, 14 LEGAL STUD.
266 (1994)
Yueh-Hsuan Weng, Chien-Hsun Chen and Chuen-Tsai Sun, The Legal Crisis of Next Generation
Robots: On Safety Intelligence, PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON
ARTIFICIAL INTELLIGENCE AND LAW 205 (2007)
Yueh-Hsuan Weng, Chien-Hsun Chen and Chuen-Tsai Sun, Toward the Human-Robot
Co-Existence Society: On Safety Intelligence for Next Generation Robots, 1 INT. J. SOC. ROBOT.
267 (2009)
FRANCIS ANTONY WHITLOCK, CRIMINAL RESPONSIBILITY AND MENTAL ILLNESS (1963)
GLANVILLE WILLIAMS, CRIMINAL LAW: THE GENERAL PART (2nd ed., 1961)
Glanville Williams, Oblique Intention, 46 CAMB. L. J. 417 (1987)
Glanville Williams, The Draft Code and Reliance upon Official Statements, 9 LEGAL STUD.
177 (1989)
Glanville Williams, Innocent Agency and Causation, 3 CRIM. L. F. 289 (1992)
Ashlee Willis, Community Service as an Alternative to Imprisonment: A Cautionary View,
24 PROBATION JOURNAL 120 (1977)
EDWARD O. WILSON, SOCIOBIOLOGY: THE NEW SYNTHESIS (1975)
JAMES Q. WILSON, THINKING ABOUT CRIME (2nd ed., 1985)
TERRY WINOGRAD AND FERNANDO C. FLORES, UNDERSTANDING COMPUTERS AND COGNITION: A NEW
FOUNDATION FOR DESIGN (1986, 1987)
Terry Winograd, Thinking Machines: Can There Be? Are We?, THE FOUNDATIONS OF ARTIFICIAL
INTELLIGENCE 167 (Derek Partridge and Yorick Wilks eds., 1990, 2006)
PATRICK HENRY WINSTON, ARTIFICIAL INTELLIGENCE (3rd ed., 1992)
Edward M. Wise, The Concept of Desert, 33 WAYNE L. REV. 1343 (1987)
LUDWIG WITTGENSTEIN, PHILOSOPHISCHE UNTERSUCHUNGEN (1953)
Steven J. Wolhandler, Voluntary Active Euthanasia for the Terminally Ill and the Constitutional
Right to Privacy, 69 CORNELL L. REV. 363 (1984)
Kam C. Wong, Police Powers and Control in the People’s Republic of China: The History of
Shoushen, 10 COLUM. J. ASIAN L. 367 (1996)
Ledger Wood, Responsibility and Punishment, 28 AM. INST. CRIM. L. & CRIMINOLOGY 630 (1938)
ANDREW WRIGHT, GWYNETH BOSWELL AND MARTIN DAVIES, CONTEMPORARY PROBATION PRACTICE
(1993)
Andrew J. Wu, From Video Games to Artificial Intelligence: Assigning Copyright Ownership to
Works Generated by Increasingly Sophisticated Computer Programs, 25 AIPLA Q.J. 131
(1997)
REUVEN YARON, THE LAWS OF ESHNUNNA (2nd ed., 1988)
MASOUD YAZDANI AND AJIT NARAYANAN, ARTIFICIAL INTELLIGENCE: HUMAN EFFECTS (1985)
PETER YOUNG, PUNISHMENT, MONEY AND THE LEGAL ORDER: AN ANALYSIS OF THE EMERGENCE OF
MONETARY SANCTIONS WITH SPECIAL REFERENCE TO SCOTLAND (1987)
Rachel S. Zahniser, Morally and Legally: A Parent’s Duty to Prevent the Abuse of a Child as
Defined by Lane v. Commonwealth, 86 KY. L. J. 1209 (1998)
REINHARD ZIMMERMANN, THE LAW OF OBLIGATIONS – ROMAN FOUNDATIONS OF THE CIVILIAN TRADITION
(1996)
Franklin E. Zimring, The Executioner’s Dissonant Song: On Capital Punishment and American
Legal Values, THE KILLING STATE – CAPITAL PUNISHMENT IN LAW, POLITICS, AND CULTURE
137 (Austin Sarat ed., 1999)
Index

A
Abuse of power, 26, 40
Accessory, 34, 59, 82, 114
Accessoryship, 34, 49, 58, 59, 63, 81, 82
Act, 7, 8, 10, 11, 15, 19, 20, 35, 42, 48, 59–63, 71, 74, 75, 77–79, 90, 93, 97, 98, 100, 101, 108, 113, 116, 122, 126, 140, 141, 148, 149, 153, 159, 170, 172–176, 178–181, 187, 193, 207, 208, 214, 218, 220
Actio libera in causa, 154, 170, 175, 178
Actus reus, 35, 60
Asimov, Isaac, 18–20, 171–173
Awareness, 37–39, 57, 58, 68, 69, 73–93, 96–98, 100–102, 104, 105, 109, 110, 121–125, 129, 132, 133, 140, 141, 144, 147, 152, 155, 160–165, 205

C
Capital penalty, 43, 192, 203, 206, 212, 213, 216–219, 221
Causation, 57, 65–66
Circumstances, 11, 19, 22, 26, 29, 36, 38, 44, 49–51, 57–60, 63–65, 69, 73, 74, 76, 79, 83–86, 89, 92, 93, 109, 116, 117, 123, 124, 126, 138, 139, 153, 159, 171, 173, 174, 177, 186, 196–199, 215, 219
Cognition, 14, 16, 37, 38, 68, 69, 71, 82–84, 86–93, 123, 164
Communication, 3, 9, 12, 23, 28
Commutative experience, 10
Conduct, 9–12, 31–36, 38, 47, 49–54, 57–66, 69, 74, 76, 77, 79–81, 83–86, 89, 92–98, 100–103, 107, 110, 114, 121–124, 126, 127, 138, 139, 143, 157, 165, 177, 207, 221
Conspiracy, 34, 52–54, 58, 75–77
Corporations, 4, 24–26, 39–45, 103, 105, 106, 130, 131, 133, 142, 143, 151, 152, 171–173, 183, 184, 190, 211–216, 218, 219, 222–227
Creativity, 9, 11, 12
Criminal attempt, 49–51, 61, 70–74, 189
Culpability, 31, 33, 34, 36, 37, 68, 69, 74, 75, 82, 92, 165, 166
Curiosity, 5, 12

D
Decision-making, 6, 98–101, 131, 143, 201, 212, 213
Delinquent, 14, 16, 23, 24, 35, 48, 51, 53, 54, 58, 70, 71, 75–79, 113–115, 119, 133, 144, 189, 196, 200, 206–209, 212, 213, 218, 222–224
De minimis, 149, 182–184
Deterrence, 114, 185, 188–199, 203–206, 208–210, 212, 217, 220–222, 224, 226
Duress, 149, 168, 177–180

E
Euthanasia, 30, 102, 103
Expert systems, 3–5, 10, 13, 15, 128
External knowledge, 9, 12, 27

F
Factual data, 10, 64, 84, 87–91, 93, 94, 98, 124–126, 128, 133, 162, 164, 167, 187, 191, 211
Factual mistake, 109, 147, 149, 162–166
Fine, 43–45, 186, 191–194, 212–216, 225–227
Foreseeability, 72, 79, 96–101, 116, 134, 135, 146

G
General intent, 37, 38, 42, 69–71, 73, 74, 76–121, 124, 125, 129, 130, 132–135, 137–142, 144–147, 157, 158, 160, 164, 177
Generalization, 12, 13, 127
General problem solver (GPS), 3, 7
Goal-driven conduct, 9–12

I
Imprisonment, 43–45, 187, 191, 192, 196, 200, 206, 207, 212–216, 219–221, 223, 225–227
Incapacitation, 26, 185, 188, 189, 193, 196, 198, 203–213, 218, 220–224
Incitement, 34, 49, 57, 58, 63, 79, 80, 205
Indexing, 12
Indifference, 71, 74, 85, 94, 99, 100
Industry, 4, 5, 14–16, 20, 21, 24, 27
Infancy, 149–153
Inference, 4, 10, 87, 182
In personam, 34–39, 148–150, 159, 166, 185, 198, 203, 209
In rem, 30–34, 148–151, 161, 168–185, 198, 203, 209
Insanity, 33, 37, 68, 73, 102, 109, 147, 149, 150, 153, 156–159
Internal knowledge, 9–10, 12
Intoxication, 109, 149, 153, 159–162

J
Joint-perpetration, 34, 49, 51–55, 63, 74–76, 108

L
Legality, 31, 32, 34–36, 48, 172, 182
Legal mistake, 149, 165–167
Legitimate duty, 62, 63
Loss of self-control, 60, 153–156

M
Maturity, 148–150, 190
Mens rea, 35, 42, 69, 82, 159
Mental element, 35–39, 41, 42, 47, 49, 58, 60, 61, 63, 65, 67–83, 88, 94, 99, 101, 102, 104, 106, 109, 111–115, 117, 118, 120–122, 124, 129, 131–133, 135–137, 139, 141, 142, 144, 147, 155, 163, 165–167
Motive, 59, 61, 72, 80, 82, 84, 94, 95, 102, 200

N
Necessity, 149, 152, 168, 173–177, 179, 180, 186, 187
Negligence, 37, 38, 42, 69–71, 78, 82, 106, 109, 112, 116, 117, 120–146, 152, 164, 173
Nullum crimen sine actu, 32, 35, 48
Nullum crimen sine culpa, 33, 36, 68

O
Object-offense, 50, 51, 58, 70, 73–79, 81
Omission, 38, 42, 60, 62, 63, 69, 121–123, 125

P
Perpetration-through-another, 34, 55–57, 77–79, 106, 108, 109, 111, 112, 118, 132–134, 144, 145, 167
Personal liability, 31, 34, 196
Prediction, 12, 176, 179, 205–207
Probation, 43, 44, 199, 201, 202, 213–216, 221–225
Public order, 27
Public service, 187, 216, 219, 223–227
Purpose, 8, 19, 23, 30, 37, 57–59, 68, 70–73, 75, 76, 79–82, 94–97, 104, 130, 131, 134, 138, 142, 143, 145, 148, 186–190, 194, 195, 197–205, 208–212, 218, 219, 222–224, 226, 227

R
Rape, 22, 28, 44, 50, 51, 63–65, 83–85, 149, 163, 165, 167, 170, 181
Rashness, 37, 68, 71, 74, 85, 94, 99–101, 121
Reasoning, 3, 5, 9, 27, 28
Recklessness, 33, 36–38, 68, 70, 74, 76, 79–81, 85, 86, 94, 99, 100, 120, 121
Rehabilitation, 148, 188, 189, 196, 198–204, 206, 208–213, 217, 220–222, 224
Results, 36, 38, 49–51, 57, 59, 64–66, 69, 72, 74, 76, 79, 80, 82–86, 88, 89, 92–98, 109, 122–124, 126, 127, 131, 139, 143
Retribution, 185–190, 195, 197, 198, 200–203, 208–210, 212, 217, 219, 220, 222, 224, 226

S
Self-defense, 33, 102, 147–149, 168–177, 179, 180
Societas delinquere non potest, 41
Specific intent, 70–74, 76, 79–82, 85, 94–96
Stimulations, 87, 162
Strict liability, 37, 38, 42, 69, 82, 112, 123, 135–146, 164, 166
Substantive immunity, 149, 167–168

T
Tangible robot, 17, 19, 21
Thinking machine, 2, 6, 8, 14, 16, 20–24, 92, 99
Turing test, 7–9

U
Ultra vires, 42, 108

V
Victim, 26, 27, 50, 51, 61, 64, 65, 74, 83, 85, 95, 96, 98, 123, 174, 187, 188, 199, 201, 207, 210, 225
Volition, 37, 38, 68, 69, 71, 72, 82–86, 93–101
Voluntas reputabitur pro facto, 71, 74, 77, 79

W
White collar crimes, 29
Willful blindness, 74, 76, 79, 89, 92, 93
