
Can AI eliminate jobs in the workplace?

Experts weigh in on the Philippine setting



Euden Valdez (philstar.com) - December 28, 2018 - 2:52pm


MANILA, Philippines — With the way it is sweeping industries that invest in and rely on technology, two experts have likened Artificial Intelligence to a tsunami that brings forth waves of change. And when industries are involved, so is the workforce, which is predicted to be the most affected by AI in the near future.
“Technology and advancements have always insistently eliminated jobs for many countries,” said Ambe Tierro, who leads the Advanced Technology Center and serves as Global Artificial Intelligence Capability and Delivery Lead at Accenture Philippines.
Together with Ayala Corp. Education Chief Executive Officer Alfredo Ayala, Tierro tackled how AI will reshape—rather than replace—jobs of the future during the recently held forum “Towards a Digital Future.”
Understanding AI
From Bachelor of Science degree holder to leader of AI innovations and technologies at one of the country’s leading BPO companies, Tierro has seen the continuous advancement of AI, which she believes is “one of the most profound revolutions that we will see in our lifetime.”
To share her understanding, she described AI as systems that mimic human intelligence by exhibiting four characteristics: sense, comprehend, act and learn.
By sensing, AI perceives the world around it through images, sounds and speech. It then understands and gives meaning to these data through comprehension. AI then acts based on what it has comprehended, and its actions can be manifested in the physical world.
Most important of all, AI continues to learn and improve its performance over time from its experiences and the data it digests, she said. The perfect example of AI based on these characteristics is the self-driving car, Tierro added.
“They sense the world around them. They can see signs, traffic lights, other vehicles. They can comprehend, so they know what traffic lights mean. They also act based on this information. It would stop at red, and go at green. Every day it would learn from hundreds of thousands of data from vehicles and obstacles, different images of signs and so on,” she explained.
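To make those four characteristics concrete, here is a minimal sketch of a sense-comprehend-act-learn loop in Python. The class, its methods and the toy "camera frames" are hypothetical, written only to mirror the loop described above; no real autonomous-driving stack is this simple.

```python
# A minimal, illustrative sense-comprehend-act-learn loop.
# All names here are hypothetical demo choices, not a real system's API.

class TrafficLightAgent:
    def __init__(self):
        self.experience = []  # accumulated (percept, decision) pairs

    def sense(self, camera_frame):
        # Perceive the world: extract a raw signal from the input.
        return {"light_color": camera_frame["light_color"]}

    def comprehend(self, percept):
        # Give meaning to the data: map a color to a traffic rule.
        return "stop" if percept["light_color"] == "red" else "go"

    def act(self, decision):
        # Manifest the decision in the physical world.
        print(f"Action: {decision}")

    def learn(self, percept, decision):
        # Record each experience so behavior can improve over time.
        self.experience.append((percept, decision))

agent = TrafficLightAgent()
for frame in [{"light_color": "red"}, {"light_color": "green"}]:
    percept = agent.sense(frame)
    decision = agent.comprehend(percept)
    agent.act(decision)
    agent.learn(percept, decision)
```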
Autonomous driving first came into the limelight in 2007, when the United States’ Defense Advanced Research Projects Agency held a competition for driverless cars. Today, leading motoring companies have jumped on the bandwagon, all trying to develop the safest and most advanced AI for their vehicles. In Dallas, Texas, a company has just launched its AI-driven ride-hailing service.
AI will continue to expand its reach in society. In its annual report, Accenture makes bold predictions about AI.
"By 2028, AI will show its creativity and produce a full-length movie, a blockbuster film,” Tierro said.
Earlier AIs have been reported to write fan fiction. One produced a new chapter for the boy who lived, titled “Harry Potter and the Portrait of what Looked like a Large Pile of Ash.” It’s hilarious.
AI in the workplace
From the same Accenture research, it is predicted that 85 percent of customer interactions will be managed without humans by 2020, with AI-led customer service running round-the-clock. This is only the tip of the iceberg for BPO- and IT-related industries.
As such, the same research found that 80 percent of interviewed executives were open to integrating AI into their workplaces.
“And close to 50 percent of execs we’ve talked to locally and globally believed that their job descriptions are obsolete. [But] in the same survey, only 3 percent of these executives are investing in skills,” Tierro revealed.
What can cause alarm is the fact that low-skilled workers are the most susceptible to automation. The
world has seen industries opting to use machinery over humans for scale and efficiency.
“Unfortunately, these workers need to have training but they don’t have access to the training, which is compounding their disadvantage,” the AI expert said.
How, then, can companies keep up? Tierro proposed three solutions, namely speeding up experiential learning, empowering the vulnerable and shifting focus to creativity.
“Speed up experiential learning. Learn by immersions. There is an opportunity to do this in much more
speed using technology, virtual reality, and other forms of AI itself,” she said.
“Part of what we want to achieve is to encourage individuals and groups to empower these groups of
learners, [who are] mostly adults, older people, or those who work in small companies. Their companies
don't have the capacity to invest in these trainings,” Tierro said.
And these trainings should focus not just on technical skills but also on creative skills.
“We should guide them. It’s easy for us to say upskill but in the real world, they need to be guided. Let’s use more modular courses where there are more flexible learning options. In our company, we rolled out learning boards, bite-size trainings for our employees to take on the go,” Tierro said.
Facing the AI tsunami
The younger generation, who will fill the new workforce, must also be prepared to face the AI tsunami.
While education is still key, it need not be in the classroom anymore.
Citing a commentary from CNBC, Ayala said, “We now see the death of the diplomas or degrees. More and more companies are saying that we don’t need college degrees in [hiring] people.”
Companies will be looking for job skills.
College students may take inspiration from freelancers, who are among the first to recognize the trend. In the survey Freelancing in America 2018, released just last October, freelancers now put more value on skills training.
From the 6,000 surveyed, 93 percent with a four-year college degree said skills training was useful versus
only 79 percent who said their college education was useful to the work they do now. Moreover, 70
percent of full-time freelancers participated in skills training in the past six months compared to only 49
percent of full-time non-freelancers.
On the other hand, a 2016 World Economic Forum report discovered that "in many industries and
countries, the most in-demand occupations or specialties did not exist 10 or even five years ago, and the
pace of change is set to accelerate."
AI may take over, but humans can too, especially if they go beyond traditional education. It’s not just
about knowledge but actual skills.
“Critical thinking skill is no longer prioritized for humans because computers can do it better. AI can tell what a cancer cell is or not. What is hard for computers is to design a new approach to cancer because it requires imagination,” Ayala said.
Decision making is another skill to master. “Computers are not so good with that. This is an example. AI is made to decide how to tackle crime in San Francisco. AI’s answer was to lock up all the African-Americans in San Francisco. [It was] lacking in context.
“I think these are all examples we educators need to really understand,” said Ayala, who is also the CEO of LiveIt Investments, a BPO investment company.
Still, for his fellow educators, Ayala pushed for blended classrooms where online and offline learning are integrated. Online learning offers access to the best teachers in the world, while offline learning provides personal interaction and guidance.
Online education also tends to be more competitively priced, especially when compared to a college education. An example is Coursera, which allows its students to take as many courses as they want: 10 industry-certified certifications for only $200 a year.
In the country, the IT and Business Process Association of the Philippines is pushing for the government
to fund the industry’s National Skills Upgrading program, which according to Ayala may cost P5 billion a
year.
He said that via DepEd, CHED, DICT and DOF, the fund will be used to retrain teachers, enable access to e-books and online courses, and provide educational vouchers.
“These vouchers will be given to citizens to go out and take responsibility for their life-long learning,”
Ayala said.
“Hope we can work together to create a workforce that is hungry to learn. And that we will raise not just
change but constant change,” Tierro ended.
At the “Jobs of the Future” session, Tierro and Ayala then went on a panel discussion with Sameer
Khatiwada, economist from the Asian Development Bank; Teng Alday, Mercer Philippines CEO; Justo
Ortiz, Blockchain Association of the Philippines chairman; and moderator Ramon Dimacali, president
and CEO of FPG Insurance.
Held last November at the Shangri-La at The Fort in Taguig City, Towards a Digital Future was
organized by The Romulo Foundation and supported by PLDT Enterprise.

Read more at https://www.philstar.com/business/technology/2018/12/28/1880286/can-ai-eliminate-jobs-workplace-experts-weigh-philippine-setting


Robot Priest Will Officiate Your Funeral


September 10, 2018 | YellRobot.com
photo credit: Kim Kyung-Roon/Reuters
In Japan, robots are seemingly everywhere, from hanging out with your grandparents to making your ice
cream. SoftBank’s Pepper is a humanoid robot that can perform a number of tasks and has found work in
places like banks, nursing homes, and restaurants.
Robot Priest Sending You to Afterlife
Well, thanks to molding company Nissei Eco and its programming, Pepper is looking to find work in funeral homes as a Buddhist robot priest. Yes, you read that correctly: this robot priest wants to officiate your funeral. During the ceremony, he will chant, recite prayers, and tap a drum as you are sent off into the next life. For full authenticity, Pepper can even be properly dressed in a full Buddhist robe.
We know what you’re thinking: why? Well, dying in Japan is actually really expensive, as funerals can cost over $25,000. Plus, with the aging and shrinking population, priests are sometimes hard to find, and when you do find one, they cost on average $2,100. Nissei Eco is looking to offer the Pepper robot priest at just $450 per funeral.



Pepper the Robot Priest Will Live Stream Your Funeral
The four-foot-tall bot was originally designed to be the first robot capable of perceiving and responding
to our emotions. Thus it can be programmed to show empathy during difficult events like a funeral.
In case your loved ones can’t make the ceremony, Pepper can even live stream it, so perhaps they can play Fortnite and pay their respects at the same time. Unfortunately (or fortunately), Pepper has not yet been hired to officiate a funeral.
Going too Far?
We are pretty open-minded here at Yell Robot, and admittedly some of the tech stuff we report on is a bit out there. But in our opinion, something like a robot priest is going a bit too far. Funerals are a somber occasion, meant for people to pray, mourn and honor the person who has recently passed. Having a robotic priest there could easily be a distraction and take the seriousness out of the occasion. We get that funerals can get pricey, but couldn’t a friend or relative perform the ceremony instead of a robot? Then again, maybe the person who died was a massive tech geek and would get a kick out of a robot officiating his or her send-off.

Source: https://yellrobot.com/robot-priest-will-officiate-your-funeral/

Why we need both science and humanities for a Fourth Industrial Revolution education
The potential automation of many jobs raises some big and tricky questions, but one of these hasn’t
received sufficient attention: what is the true purpose of education at a time when machines are getting
smarter and smarter?
I’ve spent my career working with some of the brightest engineers in technology and greatest
humanitarians at the UN thinking about how we could bring the benefits of innovation to our customers
and society worldwide. The latest and most powerful of these is the impending launch of the fifth
generation (5G) wireless network, which can handle 1,000 times more data volume than the systems in
place today.
As technology evolves, it’s become increasingly clear to me that our education systems are not preparing
people for the opportunities that 5G and other Fourth Industrial Revolution breakthroughs will present.
Educators, policy-makers, non-profits and the business community need to confront this fact – even if
(especially if) this means questioning long-standing practices and trendy assumptions.
As more computers equal or surpass human cognitive capacities, I see three broad purposes for
education:

 Most obviously, to instil the quality STEM skills needed to adequately meet the needs of our ever-more-technological society;
 Just as importantly, to instil the civic and ethical understanding that will allow human beings to wield these powerful technologies with wisdom, perspective and due regard for the wellbeing of others;
 To find much more creative and compelling ways to meet these first two needs across a far wider range of ages and life situations than has traditionally been the case in our education systems.
Quite understandably, the education-for-the-future discussion has focused on STEM (science, tech,
engineering, math). Indeed, STEM education is a major priority for our own company; our Verizon
Innovative Learning programme provides free connectivity, state-of-the-art equipment, a STEM
curriculum and practical training to help low-income kids bridge the digital divide.
The logic for this is straightforward: as noted above, the value of these subjects in a tech-driven era is
indisputable. If anything, our society must significantly improve its STEM education across all income
levels and age groups and among both genders.

Yet there’s a case to be made that our society’s growing focus on STEM – while both laudable and
necessary – has spawned an either/or mentality that undervalues the very subjects that might help us
become the best stewards of technology. Those subjects include such core humanities as history,
philosophy, literature and the arts.
The idea here is not to privilege some subjects over others; rather, it’s to yank us out of the increasingly
pointless dichotomy between sciences and humanities. To master this new epoch, we need both – and
we need to integrate them as never before.
What we really need, in short, are genetic engineers who have deeply absorbed Brave New World and
historians who are capable of sophisticated data analysis. The sciences have ever more to give to the
humanities and vice versa.
The case for such integration springs directly from the headlines. Time and again in the past few years,
we’ve seen tech-savvy executives commit unforced errors as they interact with broader society on
complex and sensitive issues like consumer privacy and the integrity of the political system.
The lesson is clear: for technology to deliver on its promise of human betterment, it needs a cultural and
moral compass. For too long now, the disciplines that instil such a compass – the humanities – have been
dismissed as an anachronism; whereas, on the contrary, they may be precisely what enables us to make
the best use of increasingly potent technologies.
There’s something else we need from a Fourth-Industrial-Revolution education system: the full embrace
of the concept of lifelong learning.
I realize I’m hardly the first to espouse the lifelong-learning ideal but we need to be more emphatic about
making it a reality. Rather than a nice add-on to our current formal education system, it should be the
concept around which the entire system is understood and organized.
The idea that our formal education should end at 22 or 25 (much less 18) is now completely outdated. As
technology changes more rapidly – and as humans live longer lives, with more people working well past
traditional retirement ages – the need for flexible, responsive schooling and training models is acute.
For example, we can and should stop reflexively associating “college years” with one’s late teens and early
20s. The universities of the future will increasingly see students in their 40s or 60s pursuing new degrees
– and probably also a few precocious adolescents who have already demonstrated subject-matter mastery
through online courses.
This is just a tiny sample of the changes that school systems – from primary to graduate-school level –
will need to absorb in order to ensure that people are prepared for the Fourth Industrial Revolution.
As an eternal tech optimist, I strongly believe we can remain captains of our own destiny, even as
technology becomes more powerful, if we educate our people to meet the challenge.

Source: https://www.weforum.org/agenda/2018/09/why-we-need-both-science-and-humanities-for-a-
fourth-industrial-revolution-education

Should we be afraid of AI?


Luciano Floridi is professor of philosophy and ethics of information at the University of Oxford, and a Distinguished Research Fellow at the Uehiro Centre for Practical Ethics. His latest book is The Fourth Revolution: How the Infosphere Is Reshaping Human Reality (2014).
Edited by Nigel Warburton
Suppose you enter a dark room in an unknown building. You might panic about monsters that could be
lurking in the dark. Or you could just turn on the light, to avoid bumping into furniture. The dark room is
the future of artificial intelligence (AI). Unfortunately, many people believe that, as we step into the
room, we might run into some evil, ultra-intelligent machines. This is an old fear. It dates to the 1960s,
when Irving John Good, a British mathematician who worked as a cryptologist at Bletchley Park with
Alan Turing, made the following observation:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities
of any man however clever. Since the design of machines is one of these intellectual activities, an
ultraintelligent machine could design even better machines; there would then unquestionably be an
‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultra-
intelligent machine is the last invention that man need ever make, provided that the machine is docile
enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of
science fiction. It is sometimes worthwhile to take science fiction seriously.
Once ultraintelligent machines become a reality, they might not be docile at all but behave like
Terminator: enslave humanity as a sub-species, ignore its rights, and pursue their own ends, regardless of
the effects on human lives.
If this sounds incredible, you might wish to reconsider. Fast-forward half a century to now, and the
amazing developments in our digital technologies have led many people to believe that Good’s
‘intelligence explosion’ is a serious risk, and the end of our species might be near, if we’re not careful. This
is Stephen Hawking in 2014:
The development of full artificial intelligence could spell the end of the human race.
Last year, Bill Gates was of the same view:
I am in the camp that is concerned about superintelligence. First the machines will do a lot of jobs for us
and not be superintelligent. That should be positive if we manage it well. A few decades after that,
though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on
this, and don’t understand why some people are not concerned.
And what had Musk, Tesla’s CEO, said?
We should be very careful about artificial intelligence. If I were to guess what our biggest existential
threat is, it’s probably that… Increasingly, scientists think there should be some regulatory oversight
maybe at the national and international level, just to make sure that we don’t do something very foolish.
With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with
the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out.
The reality is more trivial. This March, Microsoft introduced Tay – an AI-based chat robot – to Twitter.
They had to remove it only 16 hours later. It was supposed to become increasingly smarter as it interacted
with humans. Instead, it quickly became an evil Hitler-loving, Holocaust-denying, incestual-sex-
promoting, ‘Bush did 9/11’-proclaiming chatterbox. Why? Because it worked no better than kitchen
paper, absorbing and being shaped by the nasty messages sent to it. Microsoft apologised.
This is the state of AI today. After so much talking about the risks of ultraintelligent machines, it is time
to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual challenges, in
order to avoid making painful and costly mistakes in the design and use of our smart technologies.
Let me be more specific. Philosophy doesn’t do nuances well. It might fancy itself a model of precision
and finely honed distinctions, but what it really loves are polarisations and dichotomies. Internalism or
externalism, foundationalism or coherentism, trolley left or right, zombies or not zombies, observer-
relative or observer-independent, possible or impossible worlds, grounded or ungrounded … Philosophy
might preach the inclusive vel (‘girls or boys may play’) but too often indulges in the exclusive aut
aut (‘either you like it or you don’t’).
The current debate about AI is a case in point. Here, the dichotomy is between those who
believe in true AI and those who do not. Yes, the real thing, not Siri in your iPhone, Roomba in your living
room, or Nest in your kitchen (I am the happy owner of all three). Think instead of the false Maria
in Metropolis (1927); Hal 9000 in 2001: A Space Odyssey (1968), on which Good was one of the consultants;
C3PO in Star Wars (1977); Rachael in Blade Runner (1982); Data in Star Trek: The Next Generation (1987); Agent
Smith in The Matrix (1999) or the disembodied Samantha in Her (2013). You’ve got the picture. Believers in
true AI and in Good’s ‘intelligence explosion’ belong to the Church of Singularitarians. For lack of a
better term, I shall refer to the disbelievers as members of the Church of AItheists. Let’s have a look at
both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always
in the boring middle.
Singularitarians believe in three dogmas. First, that the creation of some form of artificial
ultraintelligence is likely in the foreseeable future. This turning point is known as a technological
singularity, hence the name. Both the nature of such a superintelligence and the exact timeframe of its
arrival are left unspecified, although Singularitarians tend to prefer futures that are conveniently close-
enough-to-worry-about but far-enough-not-to-be-around-to-be-proved-wrong.
Second, humanity runs a major risk of being dominated by such ultraintelligence. Third, a primary
responsibility of the current generation is to ensure that the Singularity either does not happen or, if it
does, that it is benign and will benefit humanity. This has all the elements of a Manichean view of the
world: Good fighting Evil, apocalyptic overtones, the urgency of ‘we must do something now or it will be
too late’, an eschatological perspective of human salvation, and an appeal to fears and ignorance.
Put all this in a context where people are rightly worried about the impact of idiotic digital technologies
on their lives, especially in the job market and in cyberwars, and where mass media daily report new
gizmos and unprecedented computer-driven disasters, and you have a recipe for mass distraction: a
digital opiate for the masses.
Like all faith-based views, Singularitarianism is irrefutable because, in the end, it is unconstrained by
reason and evidence. It is also implausible, since there is no reason to believe that anything resembling
intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable
understanding of computer science and digital technologies. Let me explain.
Sometimes, Singularitarianism is presented conditionally. This is shrewd, because the then does follow
from the if, and not merely in an ex falso quodlibet sense: if some kind of ultraintelligence were to
appear, then we would be in deep trouble (not merely ‘could’, as stated above by Hawking). Correct.
Absolutely. But this also holds true for the following conditional: if the Four Horsemen of the Apocalypse
were to appear, then we would be in even deeper trouble.
At other times, Singularitarianism relies on a very weak sense of possibility: some form of artificial
ultraintelligence could develop, couldn’t it? Yes it could. But this ‘could’ is mere logical possibility – as far
as we know, there is no contradiction in assuming the development of artificial ultraintelligence. Yet this
is a trick, blurring the immense difference between ‘I could be sick tomorrow’ when I am already feeling
unwell, and ‘I could be a butterfly that dreams it’s a human being.’
There is no contradiction in assuming that a dead relative you’ve never heard of has left you $10 million.
That could happen. So? Contradictions, like happily married bachelors, aren’t possible states of affairs, but
non-contradictions, like extra-terrestrial agents living among us so well-hidden that we never discovered
them, can still be dismissed as utterly crazy. In other words, the ‘could’ is not the ‘could happen’ of an
earthquake, but the ‘it isn’t true that it couldn’t happen’ of thinking that you are the first immortal
human. Correct, but not a reason to start acting as if you will live forever. Unless, that is, someone
provides evidence to the contrary, and shows that there is something in our current and foreseeable
understanding of computer science that should lead us to suspect that the emergence of artificial
ultraintelligence is truly plausible.
Here Singularitarians mix faith and facts, often moved, I believe, by a sincere sense of apocalyptic
urgency. They start talking about job losses, digital systems at risk, unmanned drones gone awry and
other real and worrisome issues about computational technologies that are coming to dominate human
life, from education to employment, from entertainment to conflicts. From this, they jump to being
seriously worried about their inability to control their next Honda Civic because it will have a mind of its
own. How some nasty ultraintelligent AI will ever evolve autonomously from the computational skills
required to park in a tight spot remains unclear. The truth is that climbing on top of a tree is not a small
step towards the Moon; it is the end of the journey. What we are going to see are increasingly smart
machines able to perform more tasks that we currently perform ourselves.
If all other arguments fail, Singularitarians are fond of throwing in some maths. A favourite reference is
Moore’s Law. This is the empirical claim that, in the development of digital computers, the number of
transistors on integrated circuits doubles approximately every two years. The outcome has so far been
more computational power for less. But things are changing. Technical difficulties in nanotechnology
present serious manufacturing challenges. There is, after all, a limit to how small things can get before
they simply melt. Moore’s Law no longer holds. Just because something grows exponentially for some
time, does not mean that it will continue to do so forever, as The Economist put it in 2014:
Throughout recorded history, humans have reigned unchallenged as Earth’s dominant species. Might
that soon change? Turkeys, heretofore harmless creatures, have been exploding in size, swelling from an
average 13.2lb (6kg) in 1929 to over 30lb today. On the rock-solid scientific assumption that present
trends will persist, The Economist calculates that turkeys will be as big as humans in just 150 years. Within
6,000 years, turkeys will dwarf the entire planet. Scientists claim that the rapid growth of turkeys is the
result of innovations in poultry farming, such as selective breeding and artificial insemination. The
artificial nature of their growth, and the fact that most have lost the ability to fly, suggest that not all is
lost. Still, with nearly 250m turkeys gobbling and parading in America alone, there is cause for concern.
This Thanksgiving, there is but one prudent course of action: eat them before they eat you.
From Turkzilla to AIzilla, the step is small, if it weren’t for the fact that a growth curve can easily be
sigmoid, with an initial stage of growth that is approximately exponential, followed by saturation,
slower growth, maturity and, finally, no further growth. But I suspect that the representation of sigmoid
curves might be blasphemous for Singularitarianists.
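To see why the distinction matters, here is a brief numerical sketch, with purely illustrative parameters (not a model of any real technology curve), of how a logistic (sigmoid) curve tracks an exponential one early on and then saturates.

```python
import math

# Illustrative comparison of exponential vs logistic (sigmoid) growth.
# K is the carrying capacity at which logistic growth saturates;
# r is the growth rate. Both are arbitrary demo values.
K, r = 100.0, 0.5

def exponential(t):
    return math.exp(r * t)

def logistic(t):
    # Standard logistic curve starting near 1 and saturating at K.
    return K / (1 + (K - 1) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):5.1f}")

# The two curves nearly coincide at first, then diverge sharply:
# the exponential explodes past 22,000 by t=20 while the logistic
# flattens just under its ceiling of 100. An exponential-looking
# start tells you nothing about which curve you are on.
```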
Singularitarianism is irresponsibly distracting. It is a rich-world preoccupation, likely to worry people in
leisured societies, who seem to forget about real evils oppressing humanity and our planet. One example
will suffice: almost 700 million people have no access to safe water. This is a major threat to humanity.
Oh, and just in case you thought predictions by experts were a reliable guide, think twice. There are
many staggeringly wrong technological predictions by experts (see some hilarious ones from David
Pogue and on Cracked.com). In 2004 Gates stated: ‘Two years from now, spam will be solved.’ And in
2011 Hawking declared that ‘philosophy is dead’ (so what’s this you are reading?).
The prediction of which I am most fond is by Robert Metcalfe, co-inventor of Ethernet and founder of the
digital electronics manufacturer 3Com. In 1995 he promised to ‘eat his words’ if proved wrong that ‘the
internet will soon go supernova and in 1996 will catastrophically collapse’. A man of his word, in 1997 he
publicly liquefied his article in a food processor and drank it. I wish Singularitarians were as bold and
coherent as him.
Deeply irritated by those who worship the wrong digital gods, and by their unfulfilled Singularitarian
prophecies, disbelievers – AItheists – make it their mission to prove once and for all that any kind of faith
in true AI is totally wrong. AI is just computers, computers are just Turing Machines, Turing Machines
are merely syntactic engines, and syntactic engines cannot think, cannot know, cannot be conscious. End
of story.
This is why there is so much that computers (still) cannot do, loosely the title of several publications
– Ira Wilson (1970); Hubert Dreyfus (1972; 1979); Dreyfus (1992); David Harel (2000); John Searle
(2014) – though what precisely they can’t do is a conveniently movable target. It is also why they are
unable to process semantics (of any language, Chinese included, no matter what Google translation
achieves). This proves that there is absolutely nothing to discuss, let alone worry about. There is no
genuine AI, so a fortiori there are no problems caused by it. Relax and enjoy all these wonderful electric
gadgets.
AItheists’ faith is as misplaced as the Singularitarians’. Both Churches have plenty of followers in
California, where Hollywood sci-fi films, wonderful research universities such as Berkeley, and some of
the world’s most important digital companies flourish side by side. This might not be accidental. When
there is big money involved, people easily get confused. For example, Google has been buying AI tech
companies as if there were no tomorrow (disclaimer: I am a member of Google’s Advisory Council on the
right to be forgotten), so surely Google must know something about the real chances of developing a
computer that can think, that we, outside ‘The Circle’, are missing? Eric Schmidt, Google’s executive
chairman, fuelled this view, when he told the Aspen Institute in 2013: ‘Many people in AI believe that
we’re close to [a computer passing the Turing Test] within the next five years.’
The Turing test is a way to check whether AI is getting any closer. You ask questions of two agents in
another room; one is human, the other artificial; if you cannot tell the difference between the two from
their answers, then the robot passes the test. It is a crude test. Think of the driving test: if Alice does not
pass it, she is not a safe driver; but even if she does, she might still be an unsafe driver. The Turing test
provides a necessary but insufficient condition for a form of intelligence. This is a really low bar. And yet,
no AI has ever got over it. More importantly, all programs keep failing in the same way, using tricks
developed in the 1960s. Let me offer a bet. I hate aubergine (eggplant), but I shall eat a plate of it if a
software program passes the Turing Test and wins the Loebner Prize gold medal before 16 July 2018. It is
a safe bet.
Both Singularitarians and AItheists are mistaken. As Turing clearly stated in the 1950 article that
introduced his test, the question ‘Can a machine think?’ is ‘too meaningless to deserve discussion’.
(Ironically, or perhaps presciently, that question is engraved on the Loebner Prize medal.) This holds
true, no matter which of the two Churches you belong to. Yet both Churches continue this pointless
debate, suffocating any dissenting voice of reason.
True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to
engineer it, not least because we have very little understanding of how our own brains and intelligence
work. This means that we should not lose sleep over the possible appearance of some ultraintelligence.
What really matters is that the increasing presence of ever-smarter technologies is having huge effects on
how we conceive of ourselves, the world, and our interactions. The point is not that our machines are
conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-
known results that indicate the limits of computation, so-called undecidable problems for which it can
be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.
We know, for example, that our computational machines satisfy the Curry-Howard correspondence,
which indicates that proof systems in logic on the one hand and the models of computation on the other,
are in fact structurally the same kind of objects, and so any logical limit applies to computers as well.
Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz
show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that
sets the limits of what can be done by a computer through its mathematical logic.
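The halting problem is the textbook example of such an undecidable problem. The sketch below renders the standard diagonal argument as hypothetical Python; the functions are illustrative and, as the argument itself shows, `halts` cannot actually be implemented.

```python
# Sketch of the classic diagonal argument for the halting problem.
# Suppose, for contradiction, that a perfect oracle existed:

def halts(program, argument):
    """Hypothetical oracle: would return True iff program(argument)
    terminates. The argument below shows no such total, always-correct
    function can exist."""
    raise NotImplementedError("provably impossible in general")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about the
    # program applied to its own source.
    if halts(program, program):
        while True:       # loop forever if predicted to halt
            pass
    return "halted"       # halt if predicted to loop

# Feeding paradox to itself defeats any answer: if halts(paradox, paradox)
# returned True, paradox would loop forever; if it returned False, paradox
# would halt. Either way the oracle is wrong, so it cannot exist.
```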
Quantum computers are constrained by the same limits, the limits of what can be computed (so-called
computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The
point is that our smart technologies – also thanks to the enormous amount of available data and some
very sophisticated programming – are increasingly able to deal with more tasks better than we do,
including predicting our behaviours. So we are not the only agents able to perform tasks successfully.
This is what I have defined as the Fourth Revolution in our self-understanding. We are not at the centre
of the Universe (Copernicus), of the biological kingdom (Charles Darwin), or of rationality (Sigmund
Freud). And after Turing, we are no longer at the centre of the infosphere, the world of information
processing and smart agency, either. We share the infosphere with digital technologies. These are
ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their
abilities are humbling and make us reevaluate human exceptionality and our special role in the Universe,
which remains unique. We thought we were smart because we could play chess. Now a phone plays
better than a Grandmaster. We thought we were free because we could buy whatever we wished. Now
our spending patterns are predicted by devices as thick as a plank.
The success of our technologies depends largely on the fact that, while we were speculating about the
possibility of ultraintelligence, we increasingly enveloped the world in so many devices, sensors,
applications and data that it became an IT-friendly environment, where technologies can replace us
without having any understanding, mental states, intentions, interpretations, emotional states, semantic
skills, consciousness, self-awareness or flexible intelligence. Memory (as in algorithms and immense
datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the
office, or discovering the best price for your next fridge.
Digital technologies can do more and more things better than us, by processing increasing amounts of data
and improving their performance by analysing their own output as input for the next operations.
AlphaGo, the computer program developed by Google DeepMind, beat the world’s best player at the board game Go because it could use a database of around 30 million moves and play thousands of games against itself, ‘learning’ how to improve its performance. It is like a two-knife system that can
sharpen itself. What’s the difference? The same as between you and the dishwasher when washing the
dishes. What’s the consequence? That any apocalyptic vision of AI can be disregarded. We are and shall
remain, for any foreseeable future, the problem, not our technology. So we should concentrate on the real
challenges. By way of conclusion, let me list five of them, all equally important.
We should make AI environment-friendly. We need the smartest technologies we can build to tackle the
concrete evils oppressing humanity and our planet, from environmental disasters to financial crises, from
crime, terrorism and war, to famine, poverty, ignorance, inequality and appalling living standards.
We should make AI human-friendly. It should be used to treat people always as ends, never as mere
means, to paraphrase Immanuel Kant.
We should make AI’s stupidity work for human intelligence. Millions of jobs will be disrupted,
eliminated and created; the benefits of this should be shared by all, and the costs borne by society.
We should make AI’s predictive power work for freedom and autonomy. Marketing products,
influencing behaviours, nudging people or fighting crime and terrorism should never undermine human
dignity.
And finally, we should make AI make us more human. The serious risk is that we might misuse our smart
technologies, to the detriment of most of humanity and the whole planet. Winston Churchill said that
‘we shape our buildings and afterwards our buildings shape us’. This applies to the infosphere and its
smart technologies as well.
Singularitarians and AItheists will continue their diatribes about the possibility or impossibility of true
AI. We need to be tolerant. But we do not have to engage. As Virgil suggests in Dante’s Inferno: ‘Speak not
of them, but look, and pass them by.’ For the world needs some good philosophy, and we need to take
care of more pressing problems.
Source: https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible
