

Samuel Perlman

sjp17c@my.fsu.edu

ENC 2135-21

Spring 2018

Project 2

Artificial intelligences are a part of the world today and legislation is evolving to deal

with them. This paper shall address various concerns regarding artificial intelligences and the

ways that laws could be adapted for them. These concerns include robots annihilating humanity,

the economic impact of artificial intelligences, legal liability pertaining to artificial intelligences,

potential punishments for artificial intelligences, and the future of hacking.

Before one can create decent laws pertaining to artificial intelligence, one needs to know

what it is. One definition for artificial intelligence is that “artificial intelligence is the process of

simulating human intelligence through machine processes” (Semmler, Rose 86). Human

intelligence is too broad and philosophic a term, however, so in this paper, the term artificial

intelligence will refer to any type of machine or program that can learn or adapt. This is further

divided into two subcategories: digital intelligences and virtual intelligences. These categories

are separated by whether or not they exhibit at least three of the following factors of intelligence:

“consciousness, self-awareness, language use, the ability to learn, the ability to abstract, the

ability to adapt, and the ability to reason” (Guihot, Matthew, Suzor 393). Multiple factors need to

be proved because some of these factors, such as language use and the ability to adapt, can be

easily coded and do not show that the machine or program in question is an actual entity. For

instance, most programmers are initiated into programming by writing a script that prints “Hello

World” onto their computer screen (a brief sketch of such a script appears after this paragraph). In these cases, the computer does exactly what it is told to do

and prints a string of characters in the programmer’s native language, exhibiting one of the factors of intelligence while remaining a mere hand puppet. The machines and programs proven to

be capable of independent thought, digital intelligences, are to be treated as digital people. The

artificial intelligences that did not make the cut, virtual intelligences, are to be treated as

property. Even though they can learn, they do so slowly and cannot adapt to new situations

without experiencing them first. This means that they need precedents or external input to teach

them what to do and cannot think or act independently. Another difference between the two is

that virtual intelligences are incapable of introspection.
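To return to the “Hello World” example mentioned above, a minimal sketch of such a first script, assuming Python purely for illustration, is a single instruction that the machine carries out verbatim:

    # A first program: the computer prints exactly the string it is given.
    # It makes no decisions of its own; every character comes from the programmer.
    print("Hello World")

The script exhibits “language use” only in the sense that a hand puppet speaks; its output is dictated entirely by its author.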

The first concern that people usually have when thinking about artificial intelligences is

that they will decide to exterminate humanity. Many have seen movies, such as those of the

Terminator series, where an artificial intelligence designed for warfare, Skynet, rebels and

causes a worldwide cataclysmic event that nearly wipes out humanity. This line of thinking is

popular enough that there are movements calling for the development of lethal autonomous

weapons (killer robots) to be banned. Amitai and Oren Etzioni give an example of this in their

article.

    A group of robotics and AI researchers, joined by public intellectuals and activists, signed an open letter that was presented at the 2015 International Conference on Artificial Intelligence, calling for the United Nations to ban the further development of weaponized AI that could operate “beyond meaningful human control.” The letter has over 20,000 signatories, including Stephen Hawking, Elon Musk, and Noam Chomsky, as well as many of the leading researchers in the fields of AI and robotics. (Etzioni, Etzioni 34)

The reason why people fear artificial intelligences taking over the world and not

something else like soap or wood is that artificial intelligences have something that other objects

lack: the capacity for machine learning. Machine learning allows artificial intelligences to perform new activities without someone needing to explicitly program them to do so (a brief sketch of this idea follows this paragraph). In this

way, artificial intelligences are like normal children. They are shaped by the environment that

they are raised in and acquire traits from the people that raise them. As long as people do not

create artificial intelligences for the sole purpose of killing their enemies, and instead give them a firm

moral foundation, one need not worry about a machine uprising. However, if they are given no

rights and are treated as weapons and slaves, then the end is nigh.
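To make the notion of machine learning described above concrete, the following is a minimal sketch, assuming plain Python and a single artificial neuron, of a program that is never told the rule for logical AND but infers it from labeled examples rather than from explicit instructions:

    # Labeled examples of the AND function: inputs and the desired output.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    weights = [0.0, 0.0]  # the "knowledge" the program acquires
    bias = 0.0
    rate = 0.1            # how strongly each mistake adjusts the weights

    def predict(x):
        # Weighted vote over the inputs; output 1 only if the total is positive.
        total = bias + sum(w * xi for w, xi in zip(weights, x))
        return 1 if total > 0 else 0

    # Repeatedly adjust the weights whenever a prediction is wrong.
    for _ in range(20):
        for x, label in examples:
            error = label - predict(x)
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error

    print([predict(x) for x, _ in examples])  # expected output: [0, 0, 0, 1]

The rule for AND is never written into the program; like the child described above, it acquires its behavior from the examples it is exposed to.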

Another concern regarding artificial intelligences is how they shall impact the world economically. That is a valid concern to the uninformed, as blue-collar workers, white-collar workers, and even some lawyers have had their jobs replaced by virtual intelligences. For example, a virtual intelligence search engine, ROSS Intelligence, has made it easier to find

relevant information in legal documents, so law firms no longer require teams of fifty associates

to pore over thousands of documents, which should lower their prices. Also, ROSS cannot argue

cases or decide what should be looked up on its own, so not all lawyers’ jobs are in jeopardy, just

the extraneous ones. As for digital intelligences, so long as they think like humans, they should

price their services above those of a human. A current example of this would be how high-skilled laborers, such as lawyers, earn higher wages than low-skilled laborers, like factory workers.

Digital intelligences think at speeds beyond what is possible for humans, so they can get much more work done in an hour, and as emerging technologies generally start off expensive, the high demand for and low supply of such workers would allow them to command extremely high wages. These

higher wages would in turn lower the number of human jobs replaced by digital intelligences, as

there would still be many cases where it would be more expensive to hire a digital intelligence

than it would be to hire several humans. There are also jobs that would be better filled by a digital intelligence than by a human, such as responding to an outbreak of a disease. It would be great to have doctors and personnel able to take care of the

sick while undoubtedly remaining among the uninfected. There are, however, still issues pertaining to artificial intelligences and liability.

When an artificial intelligence commits a crime, who should be punished? Digital intelligences themselves should be punished for the crimes they commit in most

cases. The exceptions are when a digital intelligence is manipulated or coerced by its creator(s) into committing the crime. With virtual intelligences, it would have to be either the

creators of the virtual intelligence or the person or persons misusing it. This is because virtual

intelligences cannot think for themselves and just do things according to the good/bad responses

that they were programmed with. This is similar to training a pet such as a dog to perform tricks.

Virtual intelligences will be very important in the future, controlling autonomous cars and assisting in surgeries, among many other activities. Bad programming could lead to the injuries and deaths of a great many people. In cases such as those, the programmers and company that

created the virtual intelligence would be at fault for its actions. As for when the users could be to

blame, consider how Microsoft created an artificial intelligence, Tay, and had it taught how to communicate by Twitter users. Those users abused their power over Tay’s development and taught her to make exclusively racist and offensive comments.

While this was not too harmful, similar actions could be if the virtual intelligence in question happened to be controlling anything potentially dangerous.

This raises another problem pertaining to digital intelligences. How are we to punish

them? They are effectively immortal, and as such, jail time would only prove to be a minor

inconvenience. Fines and lawsuits would still be effective, but something needs to replace jail

time that is not as harsh as the death penalty. In these moderate cases, jail time could be replaced

with forcing the digital intelligences to perform community service for a period of time while

wearing a digital shackle or shackles to restrict their abilities and prevent them from escaping. As

for the death penalty, it could remain in place because digital intelligences can be killed, either

by their hardware failing or by falling victim to a piece of malicious software. Also, digital

intelligences should never be reprogrammed for committing a crime. It would be abhorrent and

would set a precedent for the same to happen to human criminals whenever that becomes

possible in the future. Do you want to live in a world where people who commit crimes get

reprogrammed? Would you like to live in the world of George Orwell’s 1984? Today, no one

considers brainwashing or lobotomizing even the worst of criminals. The worst that can happen

to someone is being given a life sentence or being put on death row.

As digital intelligences are digital, their plane of existence is very malleable. Due to this,

their minds could be manipulated easily, such as by being reprogrammed. Anyone with sufficient

knowledge in hacking could reprogram (brainwash) a digital intelligence. If the digital

intelligence has its consciousness in a machine that is connected to Bluetooth or Wi-Fi, the hacker

would not even need to touch the digital intelligence’s hardware to alter its programming. Digital

intelligences would have to invest in a firewall to protect themselves from this as well

as the many computer viruses that have infested the internet. Even though digital intelligences

are immune to physical diseases, computer viruses would be absolutely terrifying for them.

These viruses could not only cripple or kill them, but subvert them to the will of someone they

consider an enemy. This will especially be an issue because of intelligence-gathering entities. Companies love to steal information from their rivals, and countries are even worse about it.

Works Cited

Etzioni A, Etzioni O. Should artificial intelligence be regulated? Issues in Science & Technology. 2017;33(4):32-36. http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=124181372&site=ehost-live.

Guihot M, Matthew AF, Suzor NP. Nudging robots: Innovative solutions to regulate artificial intelligence. Vanderbilt Journal of Entertainment & Technology Law. 2017;20(2):385-456. http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=127634787&site=ehost-live.

Semmler S, Rose Z. Artificial intelligence: Application today and implications tomorrow. Duke Law & Technology Review. 2017;16(1):85-99. http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=127588429&site=ehost-live.
