
Open Letter on Artificial Intelligence

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial
intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from
artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the
potential to eradicate disease and poverty, but researchers must not create something which cannot be controlled.[1] The four-
paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed
research priorities in an accompanying twelve-page document.


Background
By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman
artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously (see Existential
risk from advanced artificial intelligence). Hawking and Musk both sit on the scientific advisory board for the Future of Life Institute, an organization working to "mitigate existential risks facing humanity". The institute drafted an open letter directed to the broader AI research community,[2] and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015.[3] The letter was made public on January 12.[4]

Purpose
The letter highlights both the positive and negative effects of artificial intelligence.[5] According to Bloomberg Business, Professor
Max Tegmark of MIT circulated the letter in order to find common ground between signatories who consider superintelligent AI a
significant existential risk, and signatories such as Professor Oren Etzioni, who believe the AI field was being "impugned" by a one-
sided media focus on the alleged risks.[4] The letter contends that:

The potential benefits (of AI) are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.[6]

One of the signatories, Professor Bart Selman of Cornell University, said the purpose is to get AI researchers and developers to pay
more attention to AI safety. In addition, for policymakers and the general public, the letter is meant to be informative but not
alarmist.[2] Another signatory, Professor Francesca Rossi, stated that "I think it's very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues".[7]

Concerns raised by the letter
The signatories ask: How can engineers create AI systems that are beneficial to society, and that are robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do".[1] The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?").[8]

Short-term concerns
Some near-term concerns relate to autonomous vehicles, from civilian drones to self-driving cars. For example, a self-driving car
may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident. Other
concerns relate to lethal intelligent autonomous weapons: Should they be banned? If so, how should 'autonomy' be precisely defined?
If not, how should culpability for any misuse or malfunction be apportioned?

Other issues include privacy concerns as AI becomes increasingly able to interpret large surveillance datasets, and how best to manage the economic impact of jobs displaced by AI.[2]

Long-term concerns
The document closes by echoing Microsoft research director Eric Horvitz's concerns that:

we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with
human wishes – and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If
so, how might these situations arise? ...What kind of investments in research should be made to better understand and
to address the possibility of the rise of a dangerous superintelligence or the occurrence of an "intelligence explosion"?

Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this; therefore more research is necessary to find and validate a robust solution to the "control problem".[8]

Signatories
Signatories include physicist Stephen Hawking, business magnate Elon Musk, the co-founders of DeepMind and Vicarious, Google's director of research Peter Norvig,[1] Professor Stuart J. Russell of the University of California, Berkeley,[9] and other AI experts, robot makers, programmers, and ethicists.[10] The original signatory count was over 150 people,[11] including academics from Cambridge, Oxford, Stanford, Harvard, and MIT.[12]

Notes
1. Sparkes, Matthew (13 January 2015). "Top scientists call for caution over artificial intelligence" (https://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html). The Telegraph (UK). Retrieved 24 April 2015.
2. Chung, Emily (13 January 2015). "AI must turn focus to safety, Stephen Hawking and other researchers say" (http://www.cbc.ca/m/touch/news/story/1.2899067). Canadian Broadcasting Corporation. Retrieved 24 April 2015.
3. McMillan, Robert (16 January 2015). "AI Has Arrived, and That Really Worries the World's Brightest Minds" (https://www.wired.com/2015/01/ai-arrived-really-worries-worlds-brightest-minds/). Wired. Retrieved 24 April 2015.
4. Dina Bass; Jack Clark (4 February 2015). "Is Elon Musk Right About AI? Researchers Don't Think So" (https://www.bloomberg.com/news/articles/2015-02-04/is-elon-musk-right-about-ai-researchers-don-t-think-so). Bloomberg Business. Retrieved 24 April 2015.
5. Bradshaw, Tim (12 January 2015). "Scientists and investors warn on AI" (http://www.ft.com/intl/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html). The Financial Times. Retrieved 24 April 2015. "Rather than fear-mongering, the letter is careful to highlight both the positive and negative effects of artificial intelligence."
6. "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter" (http://futureoflife.org/misc/open_letter). Future of Life Institute. Retrieved 24 April 2015.
7. "Big science names sign open letter detailing AI danger" (https://www.newscientist.com/article/mg22530044.000-big-science-names-sign-open-letter-detailing-ai-danger.html). New Scientist. 14 January 2015. Retrieved 24 April 2015.
8. "Research priorities for robust and beneficial artificial intelligence" (http://futureoflife.org/static/data/documents/research_priorities.pdf) (PDF). Future of Life Institute. 23 January 2015. Retrieved 24 April 2015.
9. Wolchover, Natalie (21 April 2015). "Concerns of an Artificial Intelligence Pioneer" (https://www.quantamagazine.org/20150421-concerns-of-an-artificial-intelligence-pioneer/). Quanta Magazine. Retrieved 24 April 2015.
10. "Experts pledge to rein in AI research" (https://www.bbc.com/news/technology-30777834). BBC News. 12 January 2015. Retrieved 24 April 2015.
11. Hern, Alex (12 January 2015). "Experts including Elon Musk call for research to avoid AI 'pitfalls'" (https://www.theguardian.com/technology/2015/jan/12/elon-musk-ai-artificial-intelligence-pitfalls). The Guardian. Retrieved 24 April 2015.
12. Griffin, Andrew (12 January 2015). "Stephen Hawking, Elon Musk and others call for research to avoid dangers of artificial intelligence" (https://www.independent.co.uk/life-style/gadgets-and-tech/news/stephen-hawking-elon-musk-and-others-call-for-research-to-avoid-dangers-of-artificial-intelligence-9972660.html). The Independent. Retrieved 24 April 2015.

External links
Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter

Retrieved from "https://en.wikipedia.org/w/index.php?title=Open_Letter_on_Artificial_Intelligence&oldid=846854777". This page was last edited on 21 June 2018, at 08:38 (UTC).

Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.
