
Artificial Intelligence

Technological innovations are the foundation of economic growth. Of course, some innovations
advance society farther and faster than others. Economists refer to these revolutionary innovations
as general-purpose technologies – a category that includes the steam engine, electricity, and the
internal combustion engine. Each of these technologies catalyzed waves of complementary
innovation and opportunities for scientific and human advancement. For example, the internal
combustion engine gave rise to modern transportation (automobiles, trucks, buses, airplanes) and,
by extension, to supply chain advancements and even the suburbs. Economists have identified AI –
and more specifically machine learning (ML) – as the most important general-purpose technology
of our era. This is significant for two reasons: (1) humans know more than we can articulate – we
are capable of doing many things, but we cannot always explain how we do them; (2) AI systems do
not need humans to hand-code their behavior – machine learning allows these systems to learn by
recognizing patterns in data.
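
The claim that ML systems learn by recognizing patterns in data, rather than by following hand-written rules, can be sketched with a toy example (entirely hypothetical, not from the article): a one-neuron perceptron recovers the logical-OR pattern purely from labeled examples – no rule for OR appears anywhere in the code.

```python
# Toy sketch (hypothetical): a one-neuron perceptron learns the
# logical-OR pattern from labeled examples alone -- no OR rule is coded.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights from (inputs, label) pairs using the perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The only "knowledge" supplied is examples of the pattern:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

Swapping in a different set of labeled examples would make the same code learn a different pattern – the behavior comes from the data, not from rules written by a programmer.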

From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction
often portrays AI as robots with human-like characteristics, this is only one representation of AI
capabilities. AI technology can include anything from Google’s search algorithms to IBM’s Watson
computer. The AI that we are familiar with in today’s world is more formally referred to as ‘narrow
AI’ as most AI technology is deployed to perform a specific task, such as facial recognition,
translation, or object detection. However, narrow AI is just the beginning. A major thrust of current
research is to move beyond this narrow focus toward general AI – technology that could be applied
to a far wider variety of tasks. While many narrow AI systems already outperform humans on
specific tasks, the idea behind general AI is that it could potentially outperform humans at nearly
every cognitive task.

Current AI systems are making significant advances in two main areas: perception and cognition.
Practical advances in perception are largely related to speech recognition software such as SIRI
and Alexa. While voice recognition is still far from perfect, millions of people around the world are
actively using some form of this technology. Image recognition has also advanced rapidly, and has
positively contributed to many industries such as healthcare (radiology), automotive (self-driving
vehicles), and security (biometric scanners). Advances in cognition and problem solving are the
second major area of AI application. Machines have demonstrated superior cognitive ability by
beating the finest human players at chess – an achievement that many believed would take a
decade or more. Companies around the world are now applying machine learning to develop
adaptive algorithms that optimize everything from targeted marketing to supply chain inventory.

As with any new technology, there are risks and limitations to the widespread adoption of AI. In
particular, machine learning systems often have low "interpretability": because of the complex
neural networks involved in their decision making, humans have a difficult time tracing how the
systems reached their results. Unlike humans, these machines are not good storytellers. They often
cannot give a specific rationale for why a particular decision was made – which is ironic, considering
AI was designed to demystify the complexities of human cognition. This inherent trait of machine
learning creates three main risks for AI deployment.

First, much like the unconscious biases in humans, machines can have hidden biases. These biases
are not intentional; they are a product of the data used to train the system. Most machine learning
systems are designed to learn from human experts or from past processes. If a machine is given
biased data to learn from, it may inadvertently learn to perpetuate those biases in its
decision-making process. Moreover, these biases will not necessarily appear as an explicit rule;
they will be embedded in subtle interactions among the thousands of factors considered in the
neural network.
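
How bias seeps in from training data can be sketched with a deliberately tiny, hypothetical example: the naive "learner" below contains no rule about any group, yet it absorbs the skew present in its historical records through a single correlated feature.

```python
# Hypothetical toy data: a model trained on historically biased decisions
# reproduces the bias. No rule mentions a group explicitly -- the skew is
# absorbed through a feature (school attended) correlated with that group.
from collections import defaultdict

# Past hiring records: (years_experience, attended_school_x, hired)
history = [
    (5, True, 1), (3, True, 1), (2, True, 1), (1, True, 0),
    (5, False, 0), (4, False, 0), (3, False, 1), (2, False, 0),
]

# A naive "learner": record the observed hire rate per feature value.
outcomes = defaultdict(list)
for years, school_x, hired in history:
    outcomes[school_x].append(hired)

def predict_hire(school_x):
    """Predict 'hire' when the historical rate for that value exceeds 0.5."""
    votes = outcomes[school_x]
    return sum(votes) / len(votes) > 0.5

print(predict_hire(True), predict_hire(False))  # True False -- the skew carries over
```

A real neural network buries the same effect across thousands of interacting weights rather than one visible rate, which is exactly what makes such biases hard to spot.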

Second, unlike traditional systems built on explicit logic, AI systems are built on neural networks
that operate on statistical truths rather than literal truths. This can make it difficult to demonstrate
that an AI system will function as intended in all cases – especially cases that were not represented
in the training data. This lack of verifiability is a concern for mission-critical applications, such as
those involving life-or-death decisions.
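
The gap between statistical and literal truth can be illustrated with another hypothetical toy model: a nearest-neighbor classifier always returns a confident-looking answer, even for an input far outside anything in its training data, with no built-in signal that it is extrapolating.

```python
# Hypothetical toy model: a 1-nearest-neighbor classifier on one feature.
# It answers every query, including ones far outside the training data,
# with no indication that the answer rests on extrapolation.

def nearest_neighbor(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

train = [(1.0, "cat"), (2.0, "cat"), (8.0, "dog"), (9.0, "dog")]
print(nearest_neighbor(train, 1.5))     # "cat" -- well inside the data
print(nearest_neighbor(train, 1000.0))  # "dog" -- pure extrapolation, same confidence
```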

Third, when a machine learning system makes an error (and it will), diagnosing and correcting the
error may be very difficult. Machine learning systems rely on intricate relationships among variables
in complex neural networks. The underlying system can be so complicated that its output may be
far from optimal when the conditions of a decision differ significantly from the training conditions.

Ultimately, AI technology is not without flaws – but neither are humans. The advantage of machine
learning and AI is the ability of a system to improve over time and to produce consistent results
when presented with the same data. Does this mean that AI will take over the world? Probably not;
like any technology, AI has its limits. A good place to start when thinking about those limits is
Picasso's observation about computers: "They are useless. They only give you answers." While
computers are far from useless, Picasso still offers insight. Computers are designed to provide
answers. They can analyze data, highlight problems, and raise concerns, but they do not ask
questions – and neither will AI. For that, we will continue to need people: entrepreneurs, innovators,
scientists, and others who figure out what problem to tackle next and what new opportunity to
explore.