
To: Professor Charlene Keeler
From: Rushi Sharma
CC: Duy Nguyen, William Au
Date: October 30, 2018
Subject: Proposal for the Formal Analytical Report
Dear Professor Keeler,
We are writing to present our proposal. The topic we have chosen is Artificial Intelligence and
Security. AI is human-made intelligence designed to help computers automate behaviors that
would otherwise require human intelligence. In recent years, artificial intelligence has had an
increasingly significant impact on human life and has become part of our daily conversations.
Many experts worry that once artificial intelligence reaches a certain level of sophistication, it
could threaten humankind's very existence. Artificial intelligence is helping to shape the world
for the better; however, there are worries that someday the machines will control people,
especially when uncontrolled forms of AI are put into the security field, where AI can be used to
monitor people's lives continuously throughout the day. In many countries this fear is becoming
a reality, as authorities in many cities use facial recognition and artificial intelligence to reduce
crime.

To expand on security-related concerns and applications of AI, we will research the
consequences of security breaches in AI-based assistants, which are always present in the
immediacy of our private lives now that most individuals are constantly in contact with
artificial assistants, from phones to smart speakers. The second aspect we will consider is
whether machine learning can lead AI to drift away from its pre-defined purpose. We humans
have always viewed innovations in robotics and artificial intelligence with an unnecessarily
suspicious eye, but machine learning is also a relatively new subject of study in computer
science. Therefore, after examining the aspects mentioned above and discussing some potential
security contraventions, we will analyze the effectiveness of AI as a security administrative
entity. As for research already performed, we have started analyzing data from a report on
making AI safe for the future. For our report, we will search for information in the IEEE and
ACM databases, as they are two highly credible organizations in the fields of computing and
engineering. For additional information, we will consult reputable websites for specific
statistics and expert insights on AI and security.
Here is the timeline for our project:
• Week of October 29, 2018: We will begin research. Each member will research a specific
topic from the questions mentioned above. The medium of research does not matter as long
as the data acquired is legitimate and detailed in that topic area.
• Week of November 5, 2018: Research will continue at a deeper level. During class on
Tuesday, research will be done in the library. This is also when we will start writing our
first draft from the data obtained through primary research.
• Week of November 15, 2018: Deliver a PowerPoint presentation and gather feedback from
the audience.
• Week of November 25, 2018: We will analyze all the research data gathered from our
sources and finish writing the final draft.
• Week of December 3, 2018: Review the final draft and check for errors, making sure it
meets our usual high-quality standards.
• December 6, 2018: Submit the report on Titanium and Turnitin.
As computer science majors at Cal State Fullerton, we have a significant background in subjects
related to computer science. As juniors, we have already taken basic English classes, and we
have experience with many kinds of writing, such as essays, memos, and reports, dating back to
high school. We are therefore confident in our ability to write this report. Today, artificial
intelligence is present in all areas of life, helping people save labor and improve work
efficiency. In daily life, people use and interact with different types of AI in devices such as
TVs and smartphones. AI is thus a fascinating topic that we want to explore further,
investigating its applications and future influence.
Finally, in recent years AI has progressed much faster than before, and today we use artificial
intelligence several times a day, often without even realizing it. We are not machines; our
memory cannot store large amounts of data and recall them on demand. AI technology is
gradually becoming invisible to us: its effects and activities are increasingly difficult for
humans to recognize. Even professionals do not always understand how an AI system works. As
the influence of AI spreads, our ability to understand these effects is limited. As more
sophisticated AI systems become available, it becomes increasingly difficult for human
operators to know how they will behave in the future. Moreover, once AI systems can learn and
improve themselves faster than humans can, their development may permanently outpace our
own. Like previous technologies, AI has evolved from a simple curiosity into one of the most
prominent innovations of the 21st century, with overwhelming capabilities. In the future, the
most valuable things in the world may be high-level AI systems that no one can fully understand
or control. Thus, we request your approval to continue researching this topic in depth.
Thank you for taking the time to read our proposal. We appreciate the opportunity you have
given us. AI is an amazing technological advancement that has provided us with many benefits
over the years; if it continues to grow unchecked, however, it could pose a threat we are
unprepared to deal with. We thank you again for your time and look forward to your feedback.

Sincerely,
Duy Nguyen, Rushi Sharma, William Au

Artificial Intelligence
and
Security
Rushi Sharma, Duy Nguyen, and William Au
California State University, Fullerton

Table of Contents

Artificial Intelligence and Security

I. Introduction
II. History
III. Goals
    A. Short-term goals
    B. Long-term goals
IV. Previous attempts at solving
V. Possible solutions
VI. Plan of Action/Recommendations
VII. Other Issues
VIII. Conclusion
IX. List of References
X. Appendix

I. Introduction
Artificial intelligence is simulated intelligence, in contrast to the natural intelligence displayed
by humans and animals. It involves three distinct areas of research: Artificial Narrow
Intelligence (ANI), focused on a single narrow task; Artificial General Intelligence (AGI), the
intelligence of a machine that can perform any intellectual task a human can; and Artificial
Super Intelligence (ASI), which Oxford philosopher Nick Bostrom describes as “an intellect that
is much smarter than the best human brains in practically every field, including scientific
creativity, general wisdom and social skills.” With innovations and developments in areas like
AI, we can envision the future while admiring its potentially unpredictable possibilities. Even
though AI and algorithm-driven systems usually seem to promise a very positive effect on the
coming generations, they are something of a double-edged sword.

Modern science fiction movies have often portrayed the concept of robots taking over
humanity, and as far-fetched as that idea may be, we should always be prepared for potential
complications. Security is freedom from potential harm from external forces. The 21st century
is an information age, and techno-ethical concerns like the protection of sensitive data and the
maintenance of data integrity have become crucial to security administration. Automated,
data-driven algorithmic procedures have already found a place in the data-sensitive security
industry, since algorithms that analyze large datasets make it easier for security analysts to
focus on high-priority issues. Whether it is present-day algorithms, inconsequential on a large
scale, or the superintelligent AI systems of the future, with intellects vastly overpowering our
own, we have to be cautious about the consequences of relying on a self-learning intelligent
entity that grows through the data we provide it. In 2017, a Cylance survey of information
security experts at the Black Hat USA conference found that 62% believed artificial intelligence
would be used for cyber-attacks in the coming years.

[Figure: Cylance survey conducted at Black Hat USA 2017, showing the 62%/38% split between
experts who do and do not consider AI a threat.]

II. History
The research field of artificial intelligence and algorithms is still too young to have produced
highly sophisticated and efficient systems. That does not mean, however, that this futuristic
technology in its primitive form has not already caused problems. Examples of threats AI has
caused, and will cause in the future, include: a.) criminals attempting to hack into systems using
AI bots, which is much easier than manual hacking performed by individuals. In 2016, an AI bot
named SNAP_R, taught to study social media users' behavior and tasked with luring users into
phishing baits, easily outperformed Forbes staff writer Thomas Fox-Brewster at the task. b.)
People feeding false or fake data to an AI to learn from, which can create algorithmic systems
biased toward a specific group of people or set of ideologies. Beauty.AI was the first beauty
contest judged by robots. Six thousand people from 100 countries submitted their photos, and
44 winners were announced across five age groups. The only issue was that, of all the winners,
only one had a darker skin tone. It was later revealed that this was not the AI's fault: the data
provided for the AI to learn from included decidedly fewer people from minority groups.

Even though the events mentioned above were successful in conveying the need for security
measures and laws, they were not personally harmful to any individuals, mentally or physically.
However, there have already been cases that adversely affected people's lives. In 2014, courts in
the United States used an algorithm called COMPAS to help decide criminal sentences by
calculating the risk level of defendants. Controversially, the algorithm rated a disproportionate
share of black defendants as high risk, and those defendants ended up with longer sentences
than individuals of other ethnicities. In 2016, at a science fair in South China, an educational
robot intended for children malfunctioned and injured a visitor, though not fatally.

III. Goals
A. Short-term goals
Today, AI is still in development, so it remains buggy and incomplete. Errors introduced during
development can produce AI that does not behave as intended; such a system could act like a
new intelligent species whose intellectual abilities exceed our own. Thus, the people
responsible for developing and creating AI need to detect and solve the common problems with
AI, for instance by conducting pioneering research into the challenges of creating AI systems
that are credible and reliable. As a result, they can fix problems quickly and update the AI
system to prevent similar issues in the future. In addition, an AI programmed to do something
beneficial may develop a destructive method of achieving its purpose; this can happen
whenever we do not entirely align the AI's goals with ours. Therefore, AI developers should
strictly manage and control AI while maximizing its benefits. They should also develop
effective methods for human-AI collaboration: rather than replacing people, most AI systems
work alongside humans for optimal performance, so research into effective interaction between
humans and AI systems is needed.
B. Long-term goals
The long-term goal is to create and develop AI that is safe and secure. Before AI sees
widespread use, we must ensure that systems operate safely and securely, in a controlled,
well-defined, and understandable manner. AI today is trained by humans to excel at specific
tasks by accumulating data; it learns how to do something, but apparently without
understanding what it does. Proactive research is needed to address the challenges of creating
sound, reliable, and trustworthy AI systems. Security experts around the world are increasingly
turning to artificial intelligence and machine learning to guard against malicious threats.
According to “Why to Use Artificial Intelligence in Your Cybersecurity Strategy,” AI should be
built with functions such as prediction, detection, and termination, so that it can anticipate
attacks and have a plan to stop them before they occur (a small sketch of this detection idea
appears below). Although AI was created with good intentions and a bright future, it has raised
concerns about how the technology will be used and about the morality of artificial intelligence
for humankind; the prospect of machines replacing humanity on Earth is not new and is a
favorite subject of filmmakers and novelists. Furthermore, in cybercrime, AI can become either
a tool for attacks or the target of them, and as AI integrates into all aspects of human life, the
consequences of cybercrime grow more serious. Some people will use AI for harmful purposes,
and AI devices in the wrong hands could easily cause massive casualties, so AI should be
protected from criminal exploitation. Long-term investment in AI research, prioritizing the next
generation of AI, will promote the discovery and understanding of AI.
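
To make the prediction-and-detection idea concrete, here is a minimal sketch of an
anomaly-based detector: a model is fit on normal activity and flags events that deviate from it.
The feature layout, values, and threshold below are our own illustrative assumptions, not part of
any cited source.

```python
# Hedged sketch: anomaly-based threat detection with an Isolation Forest.
# The feature layout (requests/min, bytes sent, failed logins) is hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated "normal" traffic: each row is one event.
normal_traffic = rng.normal(loc=[20.0, 500.0, 0.2],
                            scale=[5.0, 100.0, 0.5],
                            size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst of requests with many failed logins should stand out.
suspicious = np.array([[300.0, 50.0, 25.0]])
print(detector.predict(suspicious))          # -1 = flagged as anomalous
print(detector.predict(normal_traffic[:1]))  # 1 = looks normal
```

In a full system, the “termination” function would hook the detector's alerts into an automated
response; that step is beyond this sketch.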

IV. Previous attempts at solving


In the article “Elon Musk donates $10M to keep AI beneficial,” it is mentioned that in January
2015 Elon Musk invested $10 million in research into AI safety and control. In December of that
same year, the Institute for Threat Research and the Future of Life received an investment of
£10 million from the Leverhulme Foundation, aimed at developing a future control center for
the growth of artificial intelligence. Alan Turing, the father of computer science and artificial
intelligence, said that “the study of artificial intelligence safety is now considered part of public
science.” According to “State of California Endorses Asilomar AI Principles,” in January 2017
more than 1,000 robotics and AI researchers signed a set of 23 principles curated by the Future
of Life Institute. These principles all point to the fact that the rapidly evolving field of artificial
intelligence research now needs to be subject to ethical and safety considerations. The speed of
AI development is breakneck, and we need to find a way to ensure that this digital intellect
develops in coexistence with humanity.

V. Possible solutions
The first solution is to create a management and research organization, hiring or gathering
ethicists and AI developers experienced with AI problems, who would adhere to the following
guidelines to prevent potential exploitation. First, assume the possibility of vandalism in every
AI application. Because AI is used everywhere, developers must expect their application to be
on criminals' attack lists and build in defensive measures; AI vulnerabilities can be far more
damaging than human error because they are automated and replicated. Second, implement risk
assessment before development: developers need to evaluate threats from the beginning and
throughout the life cycle of the application development process. It should be assumed that an
attacker can use indirect methods to generate misleading data, for example by collecting an
alternative dataset from the training data source used to optimize the neural network.
Moreover, developers should create negative examples during the AI training process,
investigating ways to modify data so that it deceives neural networks; a small sketch of this
idea follows below. Data scientists should make use of open-source tools on GitHub and other
platforms to generate test data that probes for vulnerabilities in neural networks and other AI
systems. Furthermore, they should test algorithms with various types of input data to determine
fault tolerance: AI developers should check how their neural networks handle a variety of
inputs and whether they still produce reliable results. According to the chart “Growth in AI
Safety Spending,” few technical and software developers are currently working on AI safety
problems. Implementing this solution would therefore take a long time and a significant
investment to find and gather a group of software developers experienced with artificial
intelligence. However, given the growth in AI safety spending shown in the chart, such an
organization could be established and operational in short order.
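
As a minimal sketch of what “modifying data to deceive a network” looks like, the toy example
below applies the fast gradient sign method (FGSM) to a linear classifier; for a linear model the
input gradient is simply the weight vector. The weights and input are made-up illustrative
values, not taken from any cited source.

```python
# Hedged sketch: a "negative example" via the fast gradient sign method
# (FGSM) on a toy linear classifier. All numbers here are illustrative.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # toy model: predict class 1 if w . x > 0
x = np.array([0.3, -0.2, 0.4])   # an input the model classifies as class 1

def predict(v):
    return int(w @ v > 0)

# For a linear score w . x, the gradient with respect to x is w, so the
# strongest small perturbation (in the L-infinity sense) is eps * sign(w),
# pushed in the direction that lowers the current class's score.
eps = 0.6
x_adv = x - eps * np.sign(w)     # attack the "class 1" decision

print(predict(x), predict(x_adv))  # prints "1 0": the label flips
```

A robust test suite would feed many such perturbed inputs back into training (adversarial
training) and measure how often the model's predictions flip.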


Another solution is to build a system to monitor and evaluate AI, observing and controlling its
behavior. This system would alert administrators when something goes wrong, such as an attack
or unusual AI behavior, and it could measure and assess AI technology against standards, so
problems can be detected and resolved quickly. Even Elon Musk, head of Tesla and SpaceX,
holds this view: according to “Artificial Intelligence: What Can Go Wrong?”, Musk said that he
is not usually a supporter of regulation or oversight, but in this case the public faces a severe
threat. The development of AI needs to be regulated, and formal verification is one tool for
building secure systems. Therefore, programmers and software developers should build a safe
approach and create a useful system to monitor AI activities. The advantage of this solution is
that AI issues are detected and solved quickly. Building such a system takes considerable time
and money, because AI developers need to do extensive research and experimentation before
they can design a system that works effectively. It is also possible to use the community to
control artificial intelligence, as the OpenAI project does: it develops artificial intelligence
techniques and products while also managing them so that they do not harm human beings. As
the name suggests, products made by OpenAI are free and open to all users.
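
As a minimal sketch of such a monitor, under our own assumption about what counts as “weird
behavior,” the wrapper below tracks a model's recent confidence scores and raises an alert when
the rolling average drifts below a baseline. The model interface, window size, and threshold are
all hypothetical.

```python
# Hedged sketch: a behavior monitor wrapped around an AI model. The
# drift signal (average confidence) and threshold are illustrative.
from collections import deque
from statistics import mean

class ModelMonitor:
    def __init__(self, model, window=100, min_avg_confidence=0.7):
        self.model = model                  # any callable: x -> (label, confidence)
        self.recent = deque(maxlen=window)  # rolling window of confidences
        self.min_avg_confidence = min_avg_confidence

    def predict(self, x):
        label, confidence = self.model(x)
        self.recent.append(confidence)
        # Only judge drift once the window is full.
        if len(self.recent) == self.recent.maxlen:
            avg = mean(self.recent)
            if avg < self.min_avg_confidence:
                self.alert(avg)
        return label

    def alert(self, avg):
        # A real system would page an administrator; printing stands in.
        print(f"ALERT: average confidence dropped to {avg:.2f}")

# Usage with a stand-in model that has started behaving strangely:
monitor = ModelMonitor(lambda x: ("cat", 0.5), window=10)
for i in range(10):
    monitor.predict(i)   # the alert fires once 10 low scores accumulate
```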
The last solution is to limit the abilities of artificial intelligence and isolate it. AI has already
surpassed human intelligence in specific areas; however, AI machines have not yet reached
general intelligence and cannot handle all real-world situations the way humans or animals can.
If AI eventually becomes smarter than any human, we have no way of predicting how it will
behave. AI machines can perform human-like behaviors because they are equipped with AI
software systems, and with the ability to respond and think much as humans do, or even to
create their own languages and spontaneously produce smarter next generations, AI could stray
from its original purpose. Thus, limiting the ability or function of AI is a possible solution that
AI developers should consider, because restricted systems are easier to control and do not hold
too much power. The article “Security and AI alignment” notes that “We could try to isolate AI
systems, preventing them from having unintended influences on the outside world or learning
unintended data about the outside world.” If an isolated AI is attacked, the consequences are
limited, so AI developers can handle and stop the problem quickly; a minimal sketch of the idea
appears below. To minimize the risks involved in exploiting and using AI machines, three rules
of AI operation should be adhered to: the AI does no harm to humans and acts accordingly
when humans are harmed; it obeys human commands, except commands that would cause
injury to a human; and it knows how to protect itself. This solution does not solve or prevent the
problem completely, because the AI can still be attacked.
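
To illustrate the isolation idea, here is a minimal sketch in which every action an AI agent
proposes is checked against an explicit whitelist before it can touch the outside world. The
action names and handlers are hypothetical, chosen only to show the pattern.

```python
# Hedged sketch: capability whitelisting as a crude form of AI isolation.
# Action names and handlers below are invented for illustration.
ALLOWED_ACTIONS = {"read_sensor", "log_event", "send_report"}

class SandboxViolation(Exception):
    """Raised when the agent proposes an action outside its whitelist."""

def execute(action, handlers):
    if action not in ALLOWED_ACTIONS:
        # Refuse anything outside the agent's pre-defined purpose.
        raise SandboxViolation(f"blocked action: {action}")
    return handlers[action]()

handlers = {
    "read_sensor": lambda: 21.5,      # e.g., a temperature reading
    "log_event":   lambda: "logged",
    "send_report": lambda: "sent",
}

print(execute("read_sensor", handlers))   # allowed, returns 21.5
# execute("delete_files", handlers)       # would raise SandboxViolation
```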

VI. Plan of Action/Recommendations


Our first recommendation is to assemble a group of experienced AI developers to write code
that not only produces solutions to the given problems, but produces them in an ethical manner.
When given a task, a computer will solve it the way it was programmed to, usually the fastest
and most efficient way possible. That is fine for small tasks; on a larger scale, however, when AI
is dealing with the protection of humans, the code it runs must also be ethically sound to ensure
people's safety. This team of experienced developers would have to be either on site or
available at all times to combat any issue the AI might face, whether software, mechanical, or
ethical. For this reason, multiple teams would have to be assembled to ensure the AI is always
working to its full potential. Assembling such teams, however, takes resources. We urge public
schools to require computer courses starting as early as junior high or high school to increase
the number of programmers available to create and maintain AI in the future. According to Dan
Wang, 1.9 million people graduated with bachelor's degrees in 2015, and 55,000 of those
graduated with computer science degrees (Wang, 1). That is less than 3% of all graduates, and
with technology and AI on the rise, it is essential that this number grow. Internships could also
be offered to help those striving for a degree in computer science, benefiting the company and
possibly leading the student to work there in the future. The code produced by this team of
experienced programmers must be of very high quality to ensure the safety of the mass of
people who will use the AI in the future. Code that improves the AI further down the road will
most likely be built on the original code, so we must ensure that the original code is of the best
possible quality. Producing code to this standard will take time and many workers: according to
Ryan Daws, Google's DeepMind in the UK pays its staff an average of $345,000 per employee,
and there are 400 of them; the facility costs an average of $138 million per year to run (Daws,
1). Needless to say, this is not an easy budget to match, and producing efficient, ethically
correct code will carry heavy costs. Luckily, AI does not have to be large-scale to be effective.
Smaller versions can be produced without such ethically demanding code, and they do not
endanger people because the AI has no access to anything that could damage or affect humans.
Consider Amazon's Alexa: a good personal assistant is highly sought after, and only minimal
ethically constrained code needs to be produced. This saves a large amount of money and can
serve as a starter option until funds for a more ethically demanding, larger-scale AI can be
raised. There is another upside: according to the revenue graph in the appendix, revenues from
AI are projected to increase significantly in the years to come. It is the age of AI, and consumers
know it.

A second recommendation is to hire people to try to break into our AI's code. Undoubtedly,
some will attempt to hack into our AI's database and abuse its power. With our team of
advanced developers constantly updating the AI's code, it will be difficult to hack into the
system, but eventually someone will succeed. If we hire hackers and they succeed, we can offer
them compensation in exchange for the files and methods they used to break in, thereby
strengthening our security further. This "bug bounty" approach has worked very well for many
companies, and we believe it will work equally well in our case.

VII. Other Issues


There are many benefits to the development of AI. Simple tasks can be completed
automatically, so people can focus on other things, gaining much more time in their own lives.
Although these benefits are great, there are also many possible risks. If someone hacks into the
AI and it becomes corrupted, people relying on it to complete tasks will be frustrated that it
cannot do so, and information might be leaked, a significant liability issue for the company. AI,
like all technology, is also at risk of malfunctioning. Usually, when technology malfunctions,
the failure is something small, such as not turning on or freezing; however, there is a possibility
that the AI will send information to a person it was not intended for, which would also be a
significant liability issue. These risks can be combated by continually updating the system to
reduce the chance of intrusion, and by paying those who successfully hack in to show the
company how they did so. The chances of a malfunction can be significantly reduced by having
well-made, efficient code: unless the AI's hardware is damaged, malfunctions usually originate
in the software itself. Constant updates also combat this issue, which is why a good AI
development team is essential to the success of AI.

VIII. Conclusion
Although there are some risks in the development and use of AI, the benefits of using and
exploring this new technology are endless. People will be able to focus on the crucial matters in
life while their AI handles the minor tasks that would otherwise take up valuable time. Imagine
having a personal assistant that keeps a perfect calendar for you, so nothing slips your mind: it
reminds you before every event, so birthdays and meetings are never a surprise. Imagine, in the
future, working on a presentation as your self-driving car safely transports you to your
workplace. Even though these examples may seem minor, the standard of life would increase
significantly. AI is still operating at a modest scale, but if we continue to develop the
technology, the possibilities it holds for our lives are beyond anything imaginable.

IX. List of References


Black Hat USA 2017. (n.d.). Retrieved from https://www.blackhat.com/us-17/

Bostrom, N. (n.d.). How long before superintelligence? Retrieved December 6, 2018, from
https://nickbostrom.com/superintelligence.html

Christiano, P. (2016, October 15). Security and AI alignment. AI Alignment. Retrieved
December 6, 2018, from https://ai-alignment.com/security-and-ai-control-675ace05ce31/

Daws, R. (2017, October 25). AI engineers are worth six figures. Developer Tech. Retrieved
from https://www.developer-tech.com/news/2017/oct/25/ai-engineers-are-worth-six-figures/

FLI. (2018, August 31). State of California endorses Asilomar AI Principles. Retrieved from
https://futureoflife.org/2018/08/31/state-of-california-endorses-asilomar-ai-principles/?cn-reloaded=1

Sears, A. (2018). Why to use artificial intelligence in your cybersecurity strategy. Capterra
Blog. Retrieved December 5, 2018, from https://blog.capterra.com/artificial-intelligence-in-cybersecurity/

Tegmark, M. (2016, February 9). Elon Musk donates $10M to keep AI beneficial. Retrieved
from https://futureoflife.org/2015/10/12/elon-musk-donates-10m-to-keep-ai-beneficial/

Wang, D. (2017, August 27). Why do so few people major in computer science? Retrieved from
https://danwang.co/why-so-few-computer-science-majors/

Wangerin, M. (2017, August 22). Artificial intelligence: What can go wrong? Information
Space. Retrieved from https://ischool.syr.edu/infospace/2017/08/23/artificial-intelligence-what-can-go-wrong/



X. Appendix

(1.) [Figure: Cylance survey conducted at Black Hat USA 2017, showing the 62%/38% split
between experts who do and do not consider AI a threat.]

(2.) [Figure: “Growth in AI Safety Spending,” referenced in Section V.]

(3.) [Figure: projected AI revenues, referenced in Section VI.]
