Jacob Peterson
Ms. Torgerson
ELA 12
4/10/18
“Artificial intelligence can be more dangerous than nukes.” This is quite a provocative
and audacious statement made by Tesla and SpaceX founder Elon Musk (1). Although it seems
absurd to think that artificial intelligence (A.I.) could cause destruction as devastating as
a nuclear bomb, there is potential for this to happen. Many critics and scientists
say that such a possibility is centuries away, while others argue that
the time to protect against and regulate A.I. is now. As a result of this controversy, the question “should
we limit artificial intelligence?” is being asked as possible federal regulations are discussed and as
the technology continues to advance.
As an avid computer enthusiast with extensive research in this particular field, I feel that
the time to regulate and limit A.I. is now. Without fair and just federal regulations to virtually
and physically restrict the advancement of artificial intelligence, there is the potential
for job loss on an international scale, fleets of physically autonomous robots that maliciously
hold power over humanity, and unethical, ill-intentioned use of automated and seemingly infinite
intellectual power. All of that sounds horrifying, which is all the more reason to
regulate and limit this virtual entity before it becomes uncontrollable and ultimately unstoppable.
First, let’s define artificial intelligence. It is, simply put, the idea that the intelligence
displayed by humans can be modeled within a computer. Current A.I. technologies are not
capable of modeling human consciousness, but that is not to say that no progress is being
made towards achieving a virtual model of the human brain. Large technology corporations and
even small research businesses are investing large amounts of time and money into hiring
researchers and developers to advance this technology all while there is a complete lack of
regulation (CNBC 2). This is a major problem, as without limits there is potential for A.I. to advance
autonomously and rapidly, which may ultimately lead to human replacement, insignificance, and job loss in
many places around the world. The article “Should artificial intelligence be regulated?”
published by the National Academy of Sciences states, “There is strong evidence that the cyber
revolution, beginning with the large-scale use of computers and now accelerated by the
introduction of stronger AI, is destroying many jobs: first blue-collar jobs (robots on the
assembly line), then white-collar ones (banks reducing their back office staff), and now
professional ones (legal research)” (Etzioni 3). As A.I. and automated machines continue to
advance, jobs originally performed by humans are disappearing in favor of a more cost-effective
replacement. The article later references statistics from the Bureau of Labor Statistics
showing that about 1.1 million secretarial jobs, 500,000 jobs for accounting and auditing clerks,
and several thousand jobs for travel agents and data entry positions have all disappeared within
the last decade. These statistics demonstrate just how massively current A.I. and machine
technology have already changed the job market. These numbers will only increase, and more
and more jobs will fall victim to more efficient and effective machine automation.
If you have ever seen the movie “I, Robot” or “Terminator,” then you are likely familiar with
the classic sci-fi plot of robots attempting to overpower humanity until
humanity fights back and comes out on top. Although we know these stories to be fictional and
at times unrealistic, such scenarios become plausible if we let A.I. advance too quickly and
unsafely. But this is impossible, right? Well, current developments in A.I. may sound harmless
now, as computers don’t exactly have physical counterparts to their intellectual systems,
such as opposable thumbs or a pair of moving eyes… or do they? We have seen many
advancements in mechanical and autonomous robots as mountains of money are poured into the
research and development of automated systems. Although many of these investments are made
towards automating monotonous and mundane tasks typically executed by people in a warehouse
and such, there have been millions of dollars poured into the research and development of
mechanically human-modeled robots such as the Honda ASIMO (New Atlas 4). In spite of the
premature and seemingly primitive technology that is emerging, A.I. is closer than ever to occupying a physical form.
A.I. has the potential to create minds of its own. Without a limitation such as a ‘Super
Intelligence Protocol,’ A.I. has the ability to override instructions and create ill-intentioned A.I.s of
its own. ‘Super Intelligence Protocols’ are instructions and rulesets that an A.I. must follow and
abide by for every task executed. Without them, a person may, for example, ask the A.I. to do
something malicious, and the A.I. will comply without a ‘moral compass’ of sorts to
prevent it. There is another advanced concept of A.I. called machine learning. Through
the expanding and potentially indefinite training that machine learning allows, A.I.
can become more intelligent at an uncontrollable pace. Machine learning is the concept of
machines learning and training from simulations or previous actions to further their
intelligence. This process can be modeled as a snowball effect, and it is another reason why A.I. must be limited and regulated now.
There are many arguments against limiting A.I., as some say that an
artificial intelligence apocalypse is impossible. These critics may say that there is no need for
such discussion right now, as research and development is slow and many journalists
overestimate the power and capability this technological revolution currently has. These people
are correct to some extent, as technology as a whole is not very close to creating
human consciousness in a computer. Current research in the fields of psychology and neuroscience is
quite comprehensive and advanced, but researchers still do not have a full understanding of
how the human brain functions (NYTimes 5). As a result of this lack of research and
understanding, advanced and autonomous systems are unlikely to “take over” in the near future.
Although understanding of the human brain is limited and research isn’t rapid, there is
still cause for concern. Primitive A.I. systems already exist, and although they are not nearly as advanced as
the machines featured in movies such as “I, Robot” and “Terminator,” they are still learning and
training. With technology, and specifically the internet, information and research are shared across
many different platforms with many different people. Research papers, lab results, tests, and
studies are more available now than ever on this seemingly infinite, public, and accessible
domain. As more and more studies are published and open-source works are created, researchers
and developers can piece this information and past work together, making the development of A.I. faster and easier than ever.
One possible solution to limiting A.I. is to federally regulate this technology closely. The
article “Limiting the downsides of artificial intelligence” in the Financial Times states, “We
cannot uninvent scientific discovery. But we should, at least, do everything possible to restrain
its most immediate and obvious downsides” (FT 6). This quote shows that researchers,
developers, and inventors will not uninvent what they have created. Discovery is inevitable, and
so are mistakes. By limiting the “immediate and obvious downsides,” we can greatly lower the
risks attached to creating such advanced A.I. systems, and the benefits would greatly impact our
society altogether.
Another solution to limiting A.I. is to prevent its research and development altogether.
Although an unpopular option, this would be a guaranteed way to prevent any harm or
downside from emerging out of A.I. machines. It would prevent the loss of millions of jobs
and completely prevent the malicious, criminal use of artificial intelligence
as a harmful tool. To prevent any efforts being put towards artificial intelligence,
the federal government may have to monitor research companies and their activities. This
solution would have many downsides and would impede the progression of technology, but it would
guarantee that no harm comes from A.I. Whichever solution is chosen, the discussion
should take place now. With the power of discussion and collaboration, more problems can be
solved with creative, fair, and just solutions. A.I. is a very powerful tool with only more room
to be useful, and with proper limitations, humanity can potentially be happier, healthier, and more
civilized.
Works Cited
(1) Augenbraun, Eliene. “Elon Musk: Artificial Intelligence May Be ‘More Dangerous than Nukes.’” CBS News, www.cbsnews.com/news/elon-musk-artificial-intelligence-may-be-more-dangerous-than-nukes/.
(2) Novet, Jordan. “Google Will Invest in AI Startups and Send Its Engineers to Help Them Out.” CNBC, www.cnbc.com/2017/07/10/google-launches-gradient-ventures-to-invest-in-a-i-start-ups.html.
(3) Etzioni, Amitai, and Oren Etzioni. “Should Artificial Intelligence Be Regulated?” Issues in Science and Technology, vol. 33, no. 4, 2017, p. 32+. Science in Context, link.galegroup.com/apps/doc/A499863892/SCIC?u=pioneer&sid=SCIC&xid=acd9
(4) Robarts, Stu. “Honda’s New ASIMO Robot Is All Grown Up.” New Atlas, newatlas.com/new-honda-asimo-robot/32977/.
(5) Gorman, James. “Learning How Little We Know About the Brain.” The New York Times.
(6) “Limiting the Downsides of Artificial Intelligence.” Financial Times, 22