

AI (Deep Learning) explained simply


Published on June 19, 2017

Fabio Ciucci


Founder, CEO at Anfy srl

Sci-fi-level Artificial Intelligence (AI) like HAL 9000 has been promised since the 1960s, but PCs and robots stayed dumb until recently. Now, tech giants and startups are announcing the AI revolution: self-driving cars, robo doctors, robo investors, etc. "AI" is the 2017 buzzword, like "dot com" was in 1999. Don't be confused by the AI hype. Is this a marketing bubble or real? What's new compared with the older AI flops?

Machine learning (ML), a subset of AI, makes machines learn from experience, from examples of the real world: the more data, the more it learns. A machine is said to learn from experience with respect to a task if its performance at the task improves with experience.

Artificial Neural Networks (ANNs) are only one approach to ML; others include decision trees, regression, etc. Deep learning is an ANN with many levels of abstraction. Despite the "deep" hype, many popular ML methods are still "shallow".
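For the curious, here is what a small "deep" network looks like in code, using Keras (a library this article mentions later). Everything here is just an illustration: the layer sizes, the 784-value input, and the placeholder arrays in the commented training line are my assumptions, not a recipe.

```python
# A minimal sketch of a "deep" network in Keras: several stacked layers,
# each learning a more abstract representation of the previous one.
# X_train / y_train are hypothetical arrays of examples and labels.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(784,)),  # low-level features
    keras.layers.Dense(64, activation="relu"),                       # mid-level abstractions
    keras.layers.Dense(10, activation="softmax"),                    # final decision (10 classes)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=5)  # learning from examples, not from rules
```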

The old AI was not learning. It was rules-based: many "if this then that" statements written by humans. This can be called AI, since it solves problems, but not ML, since it does not learn from data. Most current AI and automation systems are still rule-based code. ML has been known since the 1960s, but, like the human brain, it needs billions of computations over lots of data. Training an ML on 1980s PCs took months, and digital data was rare. Handcrafted rule-based code solved most problems fast enough, so ML was forgotten. But with today's hardware (NVIDIA GPUs, Google TPUs, etc.) you can train an ML in minutes, good parameter settings are known, and far more digital data is available. So after 2010, one AI field after another (vision, speech, language translation, game playing, etc.) was mastered by MLs, beating rule-based AIs, and often humans too. For tasks that can be fully described with explicit rules, like beating Kasparov at chess, it's enough (and best) to write rule-based code the old way. ML is meant only for intuitive tasks humans can't solve with "if this then that" logic.

(Update:) LinkedIn uses a basic rule-based AI to decide which accounts to block.


Just two days after I published this article, I suddenly could no longer log in, and my profile, messages and articles were gone from the internet. Google "LinkedIn account restricted" to learn that this happens to many people simply for too much activity, even without messaging. I had only opened, though quickly, the profiles of many people who clicked "like" on this article (thanks to you all) and who viewed my profile (both of us curious), plus a few old contacts to talk with. Then a rule-based AI was so sure I was an AI too, of the "web bot" kind, that it blocked me without any "Slow down, or you'll be banned" warning.

Faulty automation increases (rather than kills) human jobs. A rule counting "x pages opened within time y" catches many bots, but also "false positives": honest users in an activity peak. This frustrates the LinkedIn staff too: who should they trust, the AI or the user? After days, I also had to send a photo of my ID. The rules can be improved by adding parameters, like the account's age, its past activity, which pages it opened, etc. But it will still be rule-based, never as accurate as a human expert or a well-trained ML. Microsoft acquired LinkedIn for $26 billion a few months ago, and may upgrade it to real ML. In the meantime, a warning: don't browse too fast! (end of update)

ML automates automation, as long as you prepare the training data correctly. That's unlike manual automation, where humans come up with the rules to automate a task: a lot of "if this then that" describing, for example, which e-mails are likely to be spam, or whether a medical photo shows a cancer. In ML, instead, we only feed in data samples of the problem to solve: lots of spam and non-spam emails, cancer and non-cancer photos, etc., all first sorted, polished, and labeled by humans. The ML then figures out (learns) the rules by itself, as if by magic, but it does not explain those rules. You show a photo of a cat, the ML says it's a cat, but gives no indication why.
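As a sketch of the difference, here is the ML version of the spam problem in a few lines of Python. Using scikit-learn is my choice for illustration, not something this article prescribes, and the four emails and their labels are invented; in reality you need thousands, all labeled by humans.

```python
# Minimal sketch: learn "spam or not" from labeled examples instead of writing rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["win a free prize now", "meeting at 10am tomorrow",
          "cheap pills click here", "project report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam, assigned by humans

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)                        # the ML "figures out the rules" itself
print(model.predict(["free prize click here"]))  # -> [1] (spam), with no explanation why
```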

Most ML is Supervised Learning, where the training examples are given to the ML along with labels: a description or transcription of each example. You first need a human to separate the photos of cats from those of dogs, or spam from legitimate emails, etc. If you label the data wrongly, the ML will learn wrongly. Throwing unlabeled data at an ML is Unsupervised Learning, where the ML discovers patterns and clusters in the data, but often in ways that are not useful for solving your problem.
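And here is the unsupervised flavour, again as a rough sketch with made-up data: the ML finds clusters, but nothing guarantees they are the clusters you actually care about.

```python
# Sketch of unsupervised learning: no labels, the ML just groups similar points.
# The data is random, purely for illustration.
import numpy as np
from sklearn.cluster import KMeans

points = np.random.rand(200, 2)          # 200 unlabeled 2-D examples
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(points)
print(clusters[:10])                     # a cluster id per point; whether the groups
                                         # mean anything useful is up to you to judge
```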

In Anomaly Detection you identify unusual things that differ from the norm, for example frauds or cyber intrusions. An ML trained only on old frauds would miss the ever-new fraud ideas. Instead, you can teach it the normal activity and ask the ML to warn about any suspicious deviation. Governments already rely on ML to detect tax evasion.
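A sketch of that idea: train only on normal activity (the numbers below are invented), then ask the model to flag anything that deviates from it.

```python
# Sketch of anomaly detection: learn "normal", then warn about deviations.
import numpy as np
from sklearn.ensemble import IsolationForest

normal_activity = np.random.normal(loc=100, scale=10, size=(1000, 1))  # e.g. daily amounts
detector = IsolationForest(contamination=0.01).fit(normal_activity)

new_events = np.array([[105.0], [98.0], [400.0]])
print(detector.predict(new_events))   # 1 = looks normal, -1 = suspicious (the 400 here)
```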

Reinforcement Learning is shown in the 1983 movie War Games, where a computer decides not to start World War III by playing out every scenario at light speed and finding out that all of them would cause world destruction. The AI discovers, through millions of trials and errors within the rules of a game or an environment, which actions yield the greatest rewards. This is how an ML mastered the game of Go, reaching super-human skills. It made surprising moves, never seen before, that humans would have considered mistakes; later, those moves were proven to be brilliantly innovative tactics. The ML became more creative than humans at Go. At poker or other games with hidden cards, MLs learn to bluff and deceive too: they do whatever wins.
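A toy sketch of that trial and error: tabular Q-learning on a made-up five-state corridor. It is nothing like the scale of a real game-playing ML, but it is the same principle of learning which action yields the greatest reward.

```python
# Toy reinforcement learning: states 0..4, actions 0 = left, 1 = right,
# reward only for reaching state 4. The ML learns "go right" by trial and error.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(2000):
    s = 0
    while s != 4:
        if random.random() < epsilon:
            a = random.randrange(n_actions)                    # explore
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])   # exploit what's learned
        s_next = max(0, s - 1) if a == 0 else s + 1
        reward = 1.0 if s_next == 4 else 0.0
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])  # learn from the trial
        s = s_next

print([max(range(n_actions), key=lambda i: Q[s][i]) for s in range(4)])  # learned policy: all "right"
```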

The "AI effect" is when people argue that an AI it is not real intelligence. A lot of AI
is included in many apps, but once it becomes widely used, it's not labelled AI anymore.
Humans subconsciously need to believe to have a magical spirit and unique role in the
universe. Every time a machine outperforms humans on a new piece of intelligence,
such as playing chess, recognising images, translating etc., always people say: "That's
just brute force computation, not intelligence".

ML can now "see" and master vision jobs: for example, identifying cancer and other sicknesses in medical images, statistically better than human radiologists, ophthalmologists, etc. And driving cars, reading lips, etc. ML can paint in any style learnt from samples (for example Picasso's, or yours) and apply that style to photos. And the inverse: guess a realistic photo from a painting, hallucinating the missing details. MLs that look at screenshots of web pages or apps can write code producing similar pages or apps.

(Style transfer: learn from a photo, apply to another. Credits: Andrej Karpathy)
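Before moving to hearing, here is the "ML can see" part in code: a pretrained network can label a photo in a handful of lines. Keras again, purely as an illustration; "cat.jpg" is a placeholder file name.

```python
# Sketch: using a pretrained vision ML to "see" what's in a photo.
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image
import numpy as np

model = MobileNetV2(weights="imagenet")                        # already trained on millions of photos
img = image.load_img("cat.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
print(decode_predictions(model.predict(x), top=3)[0])          # e.g. "tabby", "tiger cat", ...
```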

ML can now "hear": it can compose music in the style of the Beatles (or yours), imitate the voice of any person it listens to for a while, and so on. The average person can't tell which painting or piece of music was composed by a human and which by a machine, or which voice is the human and which the ML impersonator.

An ML trained to win at poker learned to bluff, handling missing and potentially fake, misleading information. Bots trained to negotiate and find compromises learned to deceive, guessing when you're not telling the truth, and lying like a sneaky supervillain.

The idea that computers can't be creative, or lie, or act human-like comes from old rule-based AI, which was indeed predictable; that seems to have changed with ML. AI is getting as mysterious as humans: the ammunition for dismissing each new capability mastered by AI as "not real intelligence" is running out.

To judge whether an ML is correct, you can only test its results (errors) on unseen new data. Unlike in some other sciences, you can't verify that an ML is correct using a logical theory. The ML is not exactly a black box: you can see the "if this then that" list it produces and runs, but it's too big and complex for any human to follow. ML is a practical science that tries to reproduce the real world's chaos and human intuition, without giving a simple or theoretical explanation: it just gives you the too-big-to-understand linear algebra that produces the results. It's like when you have an idea that works, but you can't explain exactly how you came up with it: in the brain, that's called intuition.
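In code, that judgment looks like this: hold out data the ML has never seen and measure its errors there. A scikit-learn sketch on a toy dataset, purely to show the idea:

```python
# Sketch: the only real test of an ML is its error rate on data it has never seen.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)                 # any labeled dataset would do
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier().fit(X_train, y_train)           # train on one part...
print("accuracy on unseen data:", model.score(X_test, y_test))   # ...judge on the part it never saw
```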


Everyone can intuitively imagine (and some can even draw) the face of a person, in its original look and in Picasso style. Or imagine (and some can even play) sounds and music styles. But no one can describe, with a complete and working formula, the face, the sound, or the style change. Humans can visualize only up to 3 dimensions: even Einstein could not consciously conceive ML-like math with, say, 500 dimensions. Yet such 500-dimensional math can be handled: our brain solves problems like these all the time, but intuitively, like magic. Why? Imagine if, for each idea, the brain also gave us the formulas it used, with thousands of variables. That extra information would confuse and slow us down a lot, and for what? No human could use pages-long math; we did not evolve with a USB cable on our heads.

(Bidirectional AI transforms: horse to zebra, zebra to horse, summer from/to winter, photo from/to Monet, etc. Credits: Jun-Yan Zhu, Taesung Park et al.)

ML is not a brain simulator: real neurons are very different. ML is an alternative way to reach brain-like results; it is similar to a brain the way a horse is similar to a car. What matters is that both the car and the horse can transport you from point A to point B: the car does it faster, while lacking most of the horse's features. Both the brain and ML use statistics (probability) to approximate complex functions: they give results that are only a bit wrong, but usable.

If no human can predict something, often the ML can't either. Many people have trained MLs on years of market price changes, but those MLs fail to predict the market. The ML will guess how things will go if the factors and trends learned from the past stay the same. But in the markets everything changes every hour or minute, almost at random. MLs fail when the old data becomes irrelevant or wrong quickly and often. The task or the rules learned must stay the same, or at most change rarely, so that you can re-train. For example, learning to drive, play poker, paint in a style, predict a sickness from health data, or translate between languages are jobs for MLs: old examples will stay valid for the near future.

MLs can predict what humans can't, in some cases. For example Deep Patient, trained on data from 700,000 patients by Mount Sinai Hospital in New York, can anticipate the onset of schizophrenia: no one knows how! Only the ML can; humans can't learn to do the same by studying the ML. This is an issue: for an investment, medical, judicial or military decision, you may want to know how the AI reached its conclusions, but you can't. The ML's computations are visible, but there are too many of them to make a human-readable summary. The ML speaks like a prophet: "you humans can't understand (even if I show you the math), so have faith, or don't". The faith depends on its past exact predictions.

Are humans better at explaining their decisions? We give reasonable-sounding but incomplete, over-simplified explanations. For example: "We invaded Iraq because of its weapons of mass destruction." It sounded right, but the real reasons were dozens. This sounds wrong, even when the ML is right: "We bombed that village because a reputable ML said they were terrorists." It only lacks an explanation. People using MLs will probably start making up fake explanations just to get the public to accept the MLs' predictions.

To do ML there is little software to write: the frameworks are provided by Google (Keras, TensorFlow), Microsoft, etc. ML is an unpredictable science defined by experimentation, not by theory. You spend most of the time preparing the data to train on and studying the results, then making lots of changes, mostly by guessing, and retrying. An ML fed too little or inaccurate data will give wrong results. Google Images incorrectly classified African Americans as gorillas, while Microsoft's Tay bot learned racist hate speech after only hours of training on Twitter. The issue was the data, not the software.

Undesirable biases are implicit in human-generated data: an ML trained on Google News associates "father is to doctor as mother is to nurse", reflecting gender bias. Used as-is, such an ML might prioritize male job applicants over female ones. A law-enforcement ML could discriminate by skin color. You can't simply copy data from the internet into your ML and expect it to come out balanced. Training a wise ML is expensive: humans must review and "de-bias" what is wrong or evil, but naturally present in the media.
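You can probe such biases directly. A sketch using pretrained word vectors through gensim (a tooling assumption on my part; the article names no library). The analogy "father is to doctor as mother is to ...?" is answered from whatever co-occurrences the training text contained:

```python
# Sketch: probing a word-embedding ML for the biases it absorbed from its training text.
# The model name is one of gensim's published pretrained vector sets (downloads on first run).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")          # vectors trained on web/news text
# "father is to doctor as mother is to ...?"
print(vectors.most_similar(positive=["doctor", "mother"], negative=["father"], topn=3))
# On text with gender bias, answers like "nurse" tend to rank near the top.
```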

ML is limited because it lacks general intelligence and prior common sense. Even merging together all the specialized MLs, or training one ML on everything, it will still fail at general AI tasks, for example at truly understanding language. You can't talk about anything with Siri or Cortana the way you can with real people; they are just assistants. ML can produce useful summaries of long texts, including sentiment analysis (identifying opinions and mood), but not as reliably as human work. ML beats humans only at narrow, specific tasks that don't require general AI. But narrow AI tasks include creative or supposedly human-only ones too: painting, composing, guessing, deceiving, etc.

No one knows how to build a general AI. This is great: we get super-human specialized (narrow-AI) workers, but no Terminator or Matrix will decide on its own to kill us anytime soon. Unfortunately, humans can train machines to kill us right now, for example a terrorist teaching self-driving trucks to hit pedestrians. An AI with general intelligence would probably self-destruct rather than obey terrorist orders.

Teaching a human is easy: you give a dozen examples and let them try a few times. An ML instead requires a thousand times more data and trials. An ML also often overfits: it memorizes overly specific details of the training data rather than the general patterns, and then does badly on real tasks with never-before-seen data, even data only a little different from the training set. The ML lacks the general intelligence with which humans model each situation and relate it to prior experience, which lets us learn from very few examples, memorizing just what is general and ignoring what is not relevant.
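Overfitting is easy to demonstrate. A sketch on invented noisy data: an unconstrained model memorizes the training set perfectly, then stumbles on data it has never seen.

```python
# Sketch of overfitting: a model that memorizes training details scores perfectly
# on what it has seen, then drops on new data that differs only slightly.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.rand(300, 5)
y = (X[:, 0] > 0.5).astype(int)
y[rng.rand(300) < 0.2] ^= 1                        # 20% label noise, like messy real data

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier().fit(X_tr, y_tr)    # unlimited depth -> memorizes the noise
print("seen data:", tree.score(X_tr, y_tr))        # ~1.0
print("unseen data:", tree.score(X_te, y_te))      # noticeably lower
```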

After learning from a million examples, an ML can make fewer mistakes than humans in percentage terms, but its errors can be of a different kind, errors humans would never make, such as classifying a toothbrush as a baseball bat. This difference from humans can be exploited as malicious AI hacking, for example by painting small changes onto street signs, unnoticeable to humans but dramatically confusing to self-driving cars.
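A sketch of how such an attack is built in practice, using the well-known fast gradient sign method against a pretrained classifier. This is my illustration of the general technique, not the street-sign research itself; "sign.jpg" is a placeholder file name.

```python
# Sketch of the "small change, big confusion" idea: nudge every pixel slightly
# in the direction that most increases the model's error (fast gradient sign method).
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input, decode_predictions

model = MobileNetV2(weights="imagenet")
img = tf.image.resize(tf.io.decode_image(tf.io.read_file("sign.jpg"), channels=3), (224, 224))
x = preprocess_input(tf.expand_dims(tf.cast(img, tf.float32), 0))

with tf.GradientTape() as tape:
    tape.watch(x)
    preds = model(x)
    label = tf.argmax(preds[0])                    # the model's current (correct) answer
    loss = tf.keras.losses.sparse_categorical_crossentropy(label[None], preds)

nudge = 0.05 * tf.sign(tape.gradient(loss, x))     # tiny pixel change, invisible to humans
print(decode_predictions(model(x + nudge).numpy(), top=1)[0])  # often a different label now
```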


AI will kill old jobs, but create new ML-trainer jobs, closer to puppy trainers than to engineers. An ML is harder to train than a puppy, since (unlike the puppy) it lacks general intelligence, so it learns everything it spots in the data, without any selection or common sense. A puppy would think twice before learning evil things, such as killing friends. For an ML, instead, it makes no difference whether it serves terrorists or hospitals.

Practical ML training. If you train with photos of objects held by a hand, the ML will include the hand as part of the objects, and will fail to recognize the objects alone. A dog knows how to eat from a hand; the dumb ML eats your hand too. To fix this, train on hands alone, then on objects alone, and finally on objects held by hands, labeled as "object x held by hand". If you train an ML with photos taken only on sunny days and test it on other sunny-day photos, it will seem to work. But it may not work on photos of the same things taken on cloudy days. The ML may be classifying photos by sunny or cloudy weather, not by the objects in them. A dog knows it's the same object in sunny or cloudy light, but an ML doesn't, unless you teach it explicitly.
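One common way to "teach it explicitly" is to vary the training photos artificially, for example their brightness, so the ML can't use the weather as a shortcut. A small sketch of that approach (my suggestion for illustration, not the article's own recipe):

```python
# Sketch: randomly vary lighting in training photos so the ML can't key on the weather.
import tensorflow as tf

def augment(photo):
    photo = tf.image.random_brightness(photo, max_delta=0.3)   # sunny vs. cloudy
    photo = tf.image.random_contrast(photo, 0.7, 1.3)
    photo = tf.image.random_flip_left_right(photo)
    return photo

dummy = tf.zeros((224, 224, 3))          # stand-in for a real training photo
print(augment(dummy).shape)              # same shape, randomly altered lighting
```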

For most tasks in small companies, it will stay cheaper to train humans than MLs. It's easy to teach a human to drive, but epic to teach the same to an ML: you need to show it, and let it crash through, millions of handcrafted examples of every road situation. Afterwards, perhaps, the ML will be safer than any human driver, but such expensive training is viable only for big companies. A trained ML can be copied in no time, unlike a brain's experience, which can't be transferred to another brain. Bigger providers will sell pre-trained MLs for reusable common tasks. The cost of training an ML will decrease every year as more people learn how to do it. Humans will keep doing the general-AI tasks (out of ML's reach), and the tasks too uncommon to be worth teaching to an ML.

For questions or consulting, message me on LinkedIn, or email me at fabci@anfyteam.com
