
SPECIAL REPORT

Managing AI and ML in the enterprise

COPYRIGHT ©2019 CBS INTERACTIVE INC. ALL RIGHTS RESERVED.



TABLE OF CONTENTS

Enterprise AI in 2019: What you need to know
Survey: Tech leaders cautiously approach artificial intelligence and machine learning projects
The true costs and ROI of implementing AI in the enterprise
Machine learning and information architecture: Success factors
CIO Jury: 92 percent of tech leaders have no policy for ethically using AI
What is AI? Everything you need to know
What is machine learning? Everything you need to know
What is artificial general intelligence? Everything you need to know
What is deep learning? Everything you need to know
How machines are beating cardiologists in north central Pennsylvania
Enterprise AI and machine learning: Comparing the companies and applications


Enterprise AI in 2019: What you need to know


BY NICK HEATH
It’s quite possible you’re sick of hearing about artificial intelligence (AI) and how it could transform your
business. But away from the marketing hype, there are sound reasons to start investigating how AI could
benefit your company.

The first step to understanding what the fuss is about and separating the signal from the noise is jettisoning
the term AI. While ‘AI’ describes an academic field devoted to studying how to build intelligent machines, it’s
a loosely defined term, leaving room for unscrupulous vendors to rebrand legacy software by throwing AI into
the sales pitch.

“With people using AI to describe pretty much everything, this is where the hype comes in,” says Dr Panos
Constantinides, associate professor at Warwick Business School. “The hype revolves around the lack of clarity
as to what we mean by AI,” he adds.

To sidestep that confusion, it’s better to be more specific: what most tech vendors mean today when talking
about AI is machine learning (ML).

Machine learning is a subset of AI that describes the process of computers learning how to carry out a wide
range of tasks by analysing large volumes of data, rather than by following instructions laid out by a
programmer in a piece of software.

Interest in machine learning has exploded thanks to recent breakthroughs in areas like computer vision, speech
recognition, and natural language understanding. Fuelling these advances are new ways of carrying out machine
learning, such as deep learning, which in turn has been made possible by the power of modern processors and
the large quantities of data that organisations can now collect.

In theory, machine learning holds the promise of automating large areas of work that until recently were
manual processes. Handling customer contact centre queries, back-office administration roles—even eventually
driving vehicles, at least on simple stretches of road like highways.

The reality is, however, that many businesses are a long way from implementing machine-learning powered
systems in production. A survey for the O’Reilly AI Adoption in the Enterprise report found that just under 75
percent of respondents said their business was either evaluating ‘AI’ or not yet using ‘AI’, although the stage of
use did vary by industry:


IMAGE: O’REILLY

WHAT ARE COMPANIES DOING WITH MACHINE LEARNING?


There are plenty of eye-catching examples of companies using machine learning: Rolls-Royce analysing data
from internet-connected sensors to spot telltale signs of wear in its plane engines and carry out predictive
maintenance; Google using DeepMind’s AI to reduce the energy used to cool its datacenters by about 40
percent; Amazon using image recognition to recognise what shoppers buy in the cashierless Amazon Go store;
and retail technology firm Ocado coordinating the movement of its robots in its automated warehouses.

Then there are the more prosaic uses of machine learning that have been in place for years: in the
recommendation systems used by Amazon to get you to buy more products, and by Netflix to get you to
watch more shows; and in the global security systems run by the likes of Microsoft to flag online threats as they
emerge. More recently, financial investment firms like Citigroup have also started using machine learning to
spot fraudulent transactions and errors in payments.

It’s quite possible your firm already uses a service that at least partially relies on machine learning, particularly
as vendors augment existing services to include new features powered by ML’s prodigious pattern-matching
capabilities. Examples might be the use of ML in natural language processing and speech recognition for
chatbots and other automated response systems in customer contact centres, or to spot spam and autocomplete
sentences in mail services. Indeed, respondents to the O’Reilly report named customer service and IT as two
of the most common areas where their firm was using ‘AI’.


Other companies are experimenting with using machine learning to model repetitive tasks carried out by
employees, in an attempt to automate those tasks using software. There are already companies that specialise
in this area, which is known as Robotic Process Automation (RPA). In the report Automation, AI, And Robotics
Aren’t Quick Wins, J P Gownder, VP at analyst firm Forrester, gives the example of a German pharmaceutical
company that uses RPA to automate the procurement process.

RPA doesn’t always involve machine learning, and has historically been carried out by developers spelling out
the rules for automating the process in software, rather than those rules being learned by the system. So while
automation shouldn’t be confused with machine learning (as the steps to automate a process could have been
coded by a developer), Forrester predicts a greater role for ML in RPA in future.

“Firms are already combining AI building block technologies such as ML and text analytics with RPA features
to drive greater value for digital workers,” states Forrester’s Predictions 2019: Artificial Intelligence report.
The analyst firm predicts a role for chatbots in controlling RPA software, machine learning models that spot
patterns in Internet of Things (IoT) data to trigger ‘digital workers’, and the use of text analytics to increase
RPA’s capabilities.

But, at present, companies using machine learning in production systems appear to be the outliers, with
the majority of firms only trialling ML systems or simply using services like Gmail that include some
ML-powered features.

“Companies are doing Robotic Process Automation—there’s a reasonable uptake, 20-30 percent of company
processes are in my estimation being automated that way, but the uptake of machine learning is in very small
areas,” says Mark Skilton, professor of practice at Warwick Business School.

That said, companies seem to be aware that there’s potential for machine-learning systems to open up new
efficiencies, services and products in the coming years: the O’Reilly report found that just under two thirds of
respondents say their company plans to invest at least 5 percent of their IT budgets in ‘AI projects’ over the
next year.


Companies told Forrester that their main priority for investing in automation was cost savings, as you can
see in this extract from the Automation, AI, And Robotics Aren’t Quick Wins report. Companies were asked:
‘What are or could be the biggest benefits of adopting automation technologies for your organization?’

IMAGE: FORRESTER

HOW TO GET STARTED?


Of course, it would be foolish to adopt machine learning without being clear on why you’re doing it. So what
exactly can you do with machine learning?

Machine learning is typically tasked with spotting patterns in large volumes of data. In practice, this pattern-
recognition ability has resulted in systems that can pick out words from audio, identify people in photographs,
and understand a word’s meaning in a sentence—to give just a few examples.


You’ll need a mix of domain expertise and in-house data science skills to get started, with the first steps
being to decide what you want to achieve, whether machine learning is a good fit and, if you’re not using an
on-demand service, which category of ML to use—supervised, unsupervised or reinforcement learning.

There are a number of considerations before starting on a project: ‘What data are you collecting?’; ‘How can
that data be transformed to make it suitable for training a machine-learning model?’; and ‘What features of that
data are going to be of interest for training your machine-learning model?’.
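
To make those considerations concrete, here is a minimal sketch of a data-preparation step in Python using the pandas library. The file, column names, and cleaning rules are invented for illustration, not drawn from any particular project:

import pandas as pd

# Load raw customer-interaction records (hypothetical file and schema).
df = pd.read_csv("interactions.csv")

# Cleaning: drop rows missing the fields the model will depend on.
df = df.dropna(subset=["call_duration_secs", "queries_last_30d", "churned"])

# Feature engineering: derive the inputs the model will learn from, plus
# the label column it will learn to predict.
df["avg_minutes_per_query"] = (
    df["call_duration_secs"] / 60.0 / df["queries_last_30d"].clip(lower=1)
)
features = df[["queries_last_30d", "avg_minutes_per_query"]]
labels = df["churned"]  # the category the algorithm will look for

print(features.describe())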

“You can’t expect that data are ready made,” said Constantinides. “It’s data scientists who will create the
categories that the machine-learning algorithm will be looking for. If you can’t get data right, you don’t have a
successful application of machine learning.”

There’s also the question of whether using existing data to train a model would require you to seek further
permissions, or impose additional protections to comply with privacy regulations such as the EU’s General
Data Protection Regulation (GDPR).

“The way data are aggregated makes it very difficult to know exactly where the data came from or how
decisions are made in particular cases,” said Constantinides, adding that gaining proper consent with GDPR
could be particularly challenging when training deep neural networks. Just one area where GDPR throws up
additional barriers to the use of machine-learning based technologies is the use of facial recognition by retailers
in stores.

IMAGE: ISTOCK/MONSITJ


When it comes to the technical choices, you’ll need to decide whether to rent hardware in the cloud or build
your own deep-learning rig. The major cloud providers—Amazon, Microsoft and Google—offer a range of
on-demand, pay-per-use machine-learning services.

These services cover speech recognition, computer vision (such as object, face, and emotion recognition),
natural language processing (the ability to interpret human language), sentiment analysis, data forecasting and
translation. Sometimes these services are bundled up into higher-level, more sophisticated offerings, such as
chatbot creation kits and recommendation engines for retailers.

Beyond the on-demand services, each of the major cloud platforms, including AWS, Google Cloud, and
Microsoft Azure, also offer services that allow firms to train and run machine learning models using their
cloud infrastructure. These models can be applied to any data the firm requires, although doing so will require
in-house data scientists to work with domain experts and IT ops staff to decide where machine learning could
be used most effectively, and to design a process for preparing data, training and deploying the machine-
learning model.

The cloud platform providers have even started offering services that partially automate the training of
machine-learning models, although these are aimed more at augmenting the skills of data scientists than
replacing them. These offerings streamline the process of training a machine-learning model via drag-and-drop
tools and other simplifications, with services including Microsoft’s Machine Learning Studio, Google’s Cloud
AutoML and AWS SageMaker. Meanwhile, preparing data for training a machine-learning model—labelling
images in a computer vision task, for example—is often contracted out to freelancers via crowdworking
websites such as Amazon Mechanical Turk.

If you do decide to build your own machine-learning system in-house, it won’t be cheap but may be more
affordable than using a cloud service if you anticipate the training process will require more than a couple of
months of intensive work. 

You’ll need to invest in a decent GPU to train anything more than very simple neural networks—the brain-
inspired mathematical models that underpin machine learning. GPUs are typically necessary to train neural
networks thanks to their ability to carry out a very large number of matrix multiplications in parallel, which
helps to accelerate a crucial step during training.
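
To see the workload in question, this small PyTorch sketch times a large matrix multiplication on the CPU and, where one is available, on a GPU; the matrix size is arbitrary and the timings are only indicative:

import time
import torch

def time_matmul(device, n=2048):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup before timing
    start = time.time()
    a @ b  # the operation at the heart of neural-network training
    if device == "cuda":
        torch.cuda.synchronize()  # GPU work is asynchronous; wait for it
    return time.time() - start

print(f"CPU: {time_matmul('cpu'):.4f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f}s")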


If you’re not planning on training a neural network with a large number of layers, you can opt for consumer-
grade graphics cards, such as the Nvidia GeForce RTX 2060, which typically sells for about £320, while still
offering 1,920 CUDA cores.

More heavy-duty training, however, will require specialist equipment. One of the most powerful GPUs for
machine learning is the Nvidia Tesla V100, which packs 640 AI-tailored Tensor cores and 5,120 general
high-performance computing CUDA cores. These cards cost considerably more than consumer alternatives,
with prices for the PCI Express version starting at £7,500.

Building AI-specific workstations and servers costs an order of magnitude more: for example, Nvidia’s
deep-learning focused DGX-2 packs 16 Tesla V100 cards and sells for $399,000.

There is a wide range of deep-learning software frameworks that allow users to design, train and validate
deep neural networks using a variety of programming languages.

A popular choice is Google’s TensorFlow software library, which allows users to write in the Python, Java,
C++, and Swift programming languages, can be used for a wide range of deep-learning tasks such as image
and speech recognition, and executes on a wide range of CPUs, GPUs, and other processors. It is well
documented, and many tutorials and implemented models are available.
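
As a flavour of what that looks like in practice, here is a minimal TensorFlow sketch that trains a tiny classifier on synthetic data; the layer sizes and the invented dataset are placeholders rather than a recipe:

import numpy as np
import tensorflow as tf

# Synthetic data: 1,000 examples with 20 features each, two classes.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = (x_train.sum(axis=1) > 10).astype("int32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)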

Another commonly used framework, especially for beginners, is PyTorch, which offers the imperative
programming model familiar to developers and allows programmers to use standard Python statements.
PyTorch works with various types of deep neural networks, ranging from convolutional neural networks
(CNNs) to recurrent neural networks (RNNs), and runs efficiently on GPUs.
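
For comparison, the same kind of tiny classifier written imperatively in PyTorch, where the training loop is ordinary Python; again, the data and layer sizes are invented:

import torch
import torch.nn as nn

# Synthetic data: 1,000 examples, 20 features, two classes.
x = torch.rand(1000, 20)
y = (x.sum(dim=1) > 10).long()

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()        # reset gradients from the last step
    loss = loss_fn(model(x), y)  # forward pass and loss, plain Python
    loss.backward()              # backpropagate
    optimizer.step()             # update the weights
    print(f"epoch {epoch}: loss {loss.item():.4f}")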

Among the many other options are Microsoft’s Cognitive Toolkit, MATLAB, MXNet, Chainer, and Keras.

Advances in technology, both in the machine-learning frameworks and computer hardware, mean it’s now
feasible to deploy trained machine learning models to cheap, low-power computers at the edge of a corporate
network, making it easier to use these models to spot patterns or trigger actions based on data collected by
IoT sensors.
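
As one illustration of that edge-deployment path, a trained Keras model can be converted to the compact TensorFlow Lite format. This sketch assumes TensorFlow 2.x and a previously saved model; the file names are hypothetical:

import tensorflow as tf

# Hypothetical path to a model trained and saved earlier.
model = tf.keras.models.load_model("my_model.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink for edge use
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # this compact file ships to the edge device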

A SIMPLE FIRST STEP


What types of project could firms use to experiment with machine learning? Constantinides recommends
starting simple, focusing on a non-critical area of the business, and then scaling up from there.

While the nature of the project will depend heavily on the industry sector, Constantinides cites a contact centre
chatbot as an example of a simple project for many businesses.


This chatbot could handle straightforward, repeated customer queries and hand the customer off to a human
operator if the query gets too complex. It would use natural-language processing to enable it to deal with more
complex interactions than the old rules-based chatbots.

“Most companies consider call centre support as a secondary function that’s outside the core competencies of
the organisation,” Constantinides said. “As such, it’s considered a low-risk use case.”

From that starting point, companies could move to another ML-powered service—recommendation
engines—using data from customer interactions to push customers towards other products and services,
Constantinides added.

“From there they can scale up. Once you have all this data about your customers through these customer
interactions, then you can start making different kinds of predictions. You can start asking different kinds of
questions. Leading questions, like ‘Would you consider buying this other product?’ or ‘If you’re already satisfied
with this service, then why don’t you consider this?’. So, from customer support it changes to dynamic
marketing. You’re building on top of this initial use case.”

In a similar vein, Forrester’s Gownder also highlights the importance of narrowing the focus of any starter
projects to a specific task. In the report Automation, AI, And Robotics Aren’t Quick Wins, he gives the example
of a healthcare tech firm focusing on analysing medical scans for radiologists, rather than setting the broader
and less manageable goal of tackling cancer as a whole.

THE CHALLENGES

Being realistic about what a machine-learning project can achieve is important. In Automation, AI, And
Robotics Aren’t Quick Wins, Forrester’s Gownder says that over-ambition is a common misstep, as illustrated
by M.D. Anderson Cancer Center abandoning its project to use IBM Watson to help identify treatment for
patients after spending $62m on the project.

In general, it’s important to keep your expectations in check when using machine learning-powered
technologies and to realise they will rarely deliver perfect results: speech recognition makes transcription errors
and facial-recognition systems often misidentify people outside of strictly controlled conditions.

These shortcomings are why many of these systems are often talked about as augmenting human judgement,
as narrowing down the choices a person has to make, rather than replacing the person outright. There may be
fewer humans in the loop, but full automation of many roles isn’t feasible, at least for now.

The perils of automating too much, too fast are also flagged by Gownder in the Forrester report, which cites
carmaker Tesla’s move to restore humans to its production line after robots were found to be unsuitable for
certain tasks.


IMAGE: O’REILLY

“Since restoring humans to the production line, however, Tesla’s Model 3 became one of the best-selling cars
in America, growing from only 1,825 cars produced in January 2018 to 14,250 in July,” Gownder writes.

A further complication facing firms may be finding the data science expertise necessary to implement machine
learning projects. Over half of the respondents to the O’Reilly survey said their organizations were in need
of machine learning experts and data scientists, for example. In a separate O’Reilly report, Evolving Data
Infrastructure, data science and data engineering were again named as the two areas where companies suffered
the biggest analytics-related skills gap.

“The technology and the promise is there—the problem is really the tagging of the data and having the
knowledge and the skills within the company to understand ‘How do I prepare my data so I can start learning
from it?’,” said Warwick Business School’s Skilton.  

Despite these wrinkles, companies are increasingly experimenting with machine-learning technologies.
According to Skilton, 2019 is a good year for companies to get stuck into the challenge of machine learning
“so they can move the dial from human knowledge to machine knowledge, to augmenting people and making
them more productive”.


Survey: Tech leaders cautiously approach artificial intelligence and machine learning projects

BY MELANIE WACHSMAN
Enthusiasm for artificial intelligence (AI) and machine learning (ML) remains steady for 2019. However, tech
leaders admit some trepidation in terms of AI/ML project management and support. How companies manage
their AI/ML projects was the topic of a recent survey by ZDNet’s premium sister site, Tech Pro Research.

IMAGE: ERIK UNDERWOOD/TECHREPUBLIC

Overall, survey respondents said that their AI/ML projects will be more difficult than previous IT projects.
Respondents cited a lack of staff readiness for implementing and supporting an AI/ML system as a cause for
concern. More specifically, 38% of respondents said that their company employs an insufficient number of
technical personnel who can develop applications for, and support, an AI/ML environment, while 22% said
that business analysts could use more experience defining system requirements and working with end users.
Another worry (for 14%


of respondents) was system programmers and architects lacking experience in integrating AI/ML applications
with existing infrastructure. Training end users and modifying business processes caused unease for 13% of
respondents. Only 8% of respondents felt that their IT staff were up to the task of managing AI/ML projects.

Staff readiness wasn’t the lone concern respondents noted about imminent AI/ML projects. More than half of
respondents (53%) remain uncertain about the business value of AI/ML. Echoing the above sentiments,
47% of respondents worry that IT lacks the AI/ML skills necessary for implementation and support. Further,
33% of respondents expressed apprehension about whether upper management will stay committed to AI/ML
projects. Time and cost overruns and insufficient vendor support rounded out the list of respondent concerns.
This makes sense, since AI and ML are emerging technologies. Identification and implementation of business
opportunities for AI/ML will increase once organizations gain more confidence in their AI/ML management
and support skills.

Interestingly, 6% of respondents said that their AI/ML projects will be less difficult than previous projects and
expressed no hesitation about AI/ML project execution or support.

IT leadership is driving AI/ML projects. Respondents cited project requests originating from the offices of
the CEO or other C-suite executives (33%), IT management (25%), and end business management (24%).
While C-level, end-business managers, and IT managers will promote AI/ML, IT will lead the deployment and
support of AI/ML projects.

To combat impending difficulties with AI/ML projects, more than half of the survey respondents are
performing small pilot projects and proofs of concept before full implementation. This approach lets
organizations try out a solution before fully enabling it, which ultimately protects the investment. The more
comfortable organizations get with AI/ML initiatives, the more likely they are to pursue additional projects.

The infographic contains selected details from the research. To read more findings, plus analysis, download
the full report: Managing AI and ML projects in the enterprise 2019: Tech leaders expect more difficulty than
previous IT projects (Tech Pro Research subscription required).


The true costs and ROI of implementing AI in the enterprise
BY MARY SHACKLETT

IMAGE: ISTOCK/METAMORWORKS

A recent analysis of web topic popularity by web content evaluator MarketMuse revealed that 80% of IT and
corporate business leaders want to learn more about the cost of implementing existing AI technology in an
enterprise; 74% are interested in how much more it would cost over present expenditure levels to implement
AI in their enterprises; and 69% want more information about how to measure return on investment (ROI) for
a new AI solution.

Concerns about AI and machine learning (ML) costs and payoffs correlated with data I recently evaluated for
Tech Pro Research. That research showed that a majority of organizations don’t have a clear understanding of
how AI/ML is going to help their businesses. Unsure of results, 64% of the TechRepublic survey respondents
said they’re using pilot projects to test AI/ML concepts before proceeding into full implementations.


The takeaways are clear. In fact, they are all too familiar whenever companies deal with emerging technologies.
Just as cloud technology presented its share of uncertainties when it was first being deployed, AI and ML are
generating similar heartburn as companies cautiously move forward.

The causes for this anxiety are easy to understand. Many organizational influencers still don’t know enough
about AI/ML and how these newer technologies can specifically pay off for their businesses. They are
uncertain when they step into strategic meetings and budget discussions. They are asking themselves, “How far
can I push for these promising new technologies when I lack empirical, firsthand knowledge about their pros
and cons—and about the investment paybacks management will surely ask me for?”

In short, AI champions, whether they come from IT or the end business, want affirmation in two
principal areas:

• How can I present an impactful business case for AI/ML?


• How can I ensure that there will be an acceptable return on investment and understanding of ongoing
costs for any recommendation I might make?

DEVELOPING THE BUSINESS CASE


According to the TechRepublic research, 53% of companies interviewed reported that they don’t have a clear
understanding of how AI or ML could benefit their businesses.

This is a red flag area, where vendors and industry consultants with experience both in AI/ML and in specific
industry verticals can help.

Consultants can help by working alongside corporate IT and business managers, helping them identify sound
business use cases where AI and ML can be put to work and pay off.

AI and ML vendors can help by prepackaging AI/ML use cases that are purposed toward specific industry
verticals. One example is IBM Watson for healthcare, which is now a “tried” and prepackaged solution that
hospitals and medical clinics can use to assist in medical diagnoses.

Even with prepackaged and tried solutions, however, it is currently company best practice to trial these systems
with a preliminary pilot project that can 1) show that the solution will deliver what the company thinks it will
and 2) show promise that it will deliver a return on the money and effort invested.

An AI/ML pilot project is important as a technology proof of concept that could justify increased spending. It
is equally important as a vehicle that can build confidence and experience with AI in both IT and the
end business.


JUSTIFYING THE INVESTMENT


Once business use cases are identified and trialed, the task of identifying an ROI and funding the costs of a
broader implementation of AI/ML begins.

A common method IT departments employ for calculating ROI for an IT project is assessing how much time
and money a system improvement will subtract from a business process. For example, if you’re investing in
virtual servers to replace physical servers in the data center, as most companies did 10 years ago, it’s relatively
straightforward to calculate your upfront costs in new virtualization software and equipment and then compare
this against the floorspace, energy, and physical server investments you’re saving.
With AI and ML, determining an ROI isn’t that simple.

Most commonly, AI and ML can be used to achieve manpower savings because they can automate portions of
operational and decision-making processes, but they seldom automate or economize all parts of an end-to-end
business workflow.

Why is this important?


Because promoters of AI and ML will be expected to provide an ROI that their companies will see on the
bottom line. This means that the entire business workflow, not just part of it, must deliver tangible
bottom-line value.

For instance, if you automate packaging on an assembly line, reducing time and waste, but all of your other
end-to-end processes are unaffected and continue to throttle the workflow, the ROI visibility of your AI/ML
insertion and its ROI delivery will be lost.

Takeaway
If you’re piloting AI/ML for a single process in an entire chain of end-to-end business processes, ensure that
the AI/ML you’re using can also be leveraged for value to these other business processes so you can make a
total impact on the business without bottlenecks. And as part of this effort, if you are first documenting an
ROI gain for a single business process, be sure to structure that ROI around that single business process only,
so company expectations are properly set.


UNDERSTANDING (AND FACTORING IN) THE COSTS FOR A TRUE ROI
At its simplest, an ROI formula benchmarks a current process against a revised process that uses AI and/or
ML. So, if you’re using AI/ML for purposes of medical diagnosis, suddenly you have compute power and
predictive algorithms that enable the digestion of thousands of pages of medical data in seconds, resulting in a
rapid diagnosis of a patient’s medical condition that a medical specialist then reviews and assesses. The desired
outcomes you measure for are speed to diagnosis, reduction in man-hours, and improved accuracy of results. If
these business metrics are achieved, ROI is well on its way because the AI/ML has reduced time to diagnosis,
saved man-hours, and hopefully reduced margins for error.

Unfortunately, this initial ROI doesn’t factor in the cost of obtaining more compute power, storage etc., to
support the new solution. Nor does it include time for restructuring business processes, revising surrounding
systems, integrating these disparate systems with the new AI platform, training IT and end business users,
consumption of energy and data center costs, initial implementation costs, licensing, and so on. These setup
and ongoing support costs must also be factored into the ROI equation to ensure that you are still achieving
positive ROI results over time.
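
To make the arithmetic concrete, here is a small worked example in Python. Every figure is invented for illustration; the point is that setup and ongoing support costs must be set against the gross savings:

def multi_year_roi(annual_savings, setup_cost, annual_running_cost, years):
    """Cumulative ROI as a fraction of total cost over the period."""
    total_benefit = annual_savings * years
    total_cost = setup_cost + annual_running_cost * years
    return (total_benefit - total_cost) / total_cost

# Year one, counting only setup costs: still in the red.
print(multi_year_roi(250_000, 400_000, 0, 1))       # about -0.38
# Three years, with integration, training, and cloud costs included:
print(multi_year_roi(250_000, 400_000, 90_000, 3))  # about 0.12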

Takeaway
Achieving an initially attractive ROI in a pilot project by reducing time of operations or improving revenue
potential is not a strong enough ROI result to move forward with. The champion of an AI/ML project should
get together with finance and determine longer term ROI projections over a period of several years. These
long term projections should take into account every corporate asset that is required to run the AI/ML, such as
new equipment/software, cloud costs, energy and data center costs, training costs, system and business process
revision and integration costs, and even extra manpower that might be needed to run the new technology. The
goal should be achieving an ROI that remains in the black over time and that builds on its value by continuing
to enrich business processes and results.

AVOIDING THE ROI SAND TRAPS


Another key to producing a credible ROI formula that can operate over the long term for AI/ML is to
recognize the potential cost sand traps that can threaten your ROI. Here are several typical ones:

System integration
AI/ML systems don’t operate in a vacuum. Vendors know this, and many will tell you that their systems have
a complete set of APIs that interoperate with all systems. This works until the AI must work with a highly


customized or legacy system you have in-house. When this happens, it is usually IT that must hand-code
system interfaces.

This costs time and money, and both can destroy your ROI.

AI gone wrong
Because AI depends upon computers emulating the human mind, and because ML is a subset of AI that
strives to continue learning from repetitive pattern recognition in the same way that the human mind learns,
computers—like the human mind—can misinterpret.

One case in point: Symrise, a major global fragrance company in Germany, used AI to produce new perfumes
for Brazil’s Millennial market. These perfumes boosted revenues and global reach. But Symrise executive Anton
Daub said it took almost two years to get to this point. Those two years were spent in intensive training of the
AI system by Symrise’s perfumers and included costly IT upgrades to connect the company’s disparate data to
the AI.

Because AI systems need to be continually recalibrated and trained, there will always be a “venture” element
in any AI project—because the human mind (and emulating it) can be unpredictable. This uncertainty must be
planned for in any AI ROI formula. One step AI promoters can take is to educate upper management about the
risks so that these risks can be planned for and managed. A second step is to factor risk into the ROI formula
by adding a 20% cushion to your AI project cost projections as a margin for the unknown and the unexpected.

New system and business processes


Introducing AI in your company is going to impact systems and business processes. Minimally, systems
that need to communicate and exchange information with your AI must be integrated with the AI. System
processes will by necessity be modified. As the AI is rolled out to your business operations, processes that
formerly were performed by humans will be undertaken by the AI—causing the displaced humans to enter
into new and/or revised job roles. This business process revision will need to be planned for, trained for, and
accounted for in your ROI formula.


Machine learning and information architecture: Success factors
BY JAMES SANDERS
The enterprise has long since eclipsed the days of manually analyzing data, as doing so is both expensive
and impractical considering the sheer amount of data organizations generate. For years, this task has been
delegated to programmers, who often were tasked with creating custom scripts requiring frequent revision
and fine tuning.

Those days are quickly coming to an end, as both the quantity of data and variety of sources from which that
data is collected have increased beyond the practicality of this strategy. Now, organizations are rapidly adopting
machine learning to generate insights from data. However, this transition is not a completely seamless one.
Understanding how to efficiently utilize machine learning and the data regulations for information processed by
machine learning, as well as contending with how computers inherit bias from human decision making, are vital
to a successful adoption in your organization.

IMAGE: ISTOCK/RICK_JO


HOW TO PREPARE UNSTRUCTURED DATA FOR PROCESSING WITH AI AND ML
The preparations for unstructured data depend on what type of data it is and how you define unstructured.
“‘Unstructured’ is often a misnomer, as lots of data types associated with ‘big data,’ such as JSON files
(associated with mobile and social feeds), log files, text documents, email messages, and more have structure,”
Doug Henschen, principal analyst on data-driven decision making at Constellation Research, told ZDNet. “In
the case of this semi-structured data, parsing, filtering and transformation steps can be applied in ETL and
ETL-like processes. When this happens at scale, [Apache] Spark is often used instead of old-school commercial
integration servers. This processing can bring more structure and consistency to the data.”
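
A minimal sketch of that parse/filter/transform pattern using PySpark might look like the following; the file paths and field names are illustrative assumptions, not a reference implementation:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_timestamp

spark = SparkSession.builder.appName("semi-structured-etl").getOrCreate()

# Parse: Spark infers a schema from semi-structured JSON log records.
logs = spark.read.json("s3://example-bucket/app-logs/*.json")

# Filter: keep only the events of interest.
errors = logs.filter(col("level") == "ERROR")

# Transform: bring structure and consistency to the data before analysis.
clean = (errors
         .withColumn("ts", to_timestamp(col("timestamp")))
         .select("ts", "service", "message"))

clean.write.parquet("s3://example-bucket/curated/errors/")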

WHAT IS NECESSARY TO GENERATE ACTIONABLE INTELLIGENCE FROM AI/ML-POWERED ANALYSIS?
Think back to an introductory statistics class you may have taken as a student: Without a sufficiently large
sample size—or in this case, data set—no meaningful conclusions can be drawn. According to Henschen, some
machine learning systems “require at least 10,000 rows of data before you can achieve adequate accuracy.”

Repeatability and scale are key to success for utilizing machine learning effectively. “If you can find decisions
that happen at scale and that can be made by humans in seconds, they’re probably good candidates for
automation,” Henschen said. “If they’re more complex, but still high-scale and consistent, then they may be
good candidates for recommendations.”

The role that machine learning plays in your organization is also worth reconsidering—using it as a drop-in
replacement for analytics scripts undercuts the benefits that machine learning offers. “AI has a job to do. You’re
defining the model to automate and scale your decisions and actions, to take care of a job,” Forrester enterprise
architecture analyst Michele Goetz told ZDNet. “What you’re doing is training a system to be a co-bot with the

20
COPYRIGHT ©2019 CBS INTERACTIVE INC. ALL RIGHTS RESERVED.
MANAGING AI AND ML IN THE ENTERPRISE

rest of your organization, not to just say, ‘Oh, look how great our performance was,’ or, ‘This is just where your
forecast is going.’”

According to Goetz, “The way that you imagine, or envision, design, develop, roll out, and then run an AI
capability, is not in a traditional IT technology fashion. It’s not even in a traditional product fashion. It changes
the organization, more than the organization changes its technology.” These deployments require a mindfulness
about how your organization operates, not how your organization uses a specific technology.

WHAT’S THE DIFFERENCE BETWEEN DATA ARCHITECTURE AND INFORMATION ARCHITECTURE?
Considering the quantity and quality of data is not quite enough to take full advantage of machine learning.
The structures built around your data—and the way your data is structured—influence the extent to which
you can effectively use machine learning. Data architecture applies “specifically to structured data,” Goetz said.
“Information architecture tends to look at things more holistically, regardless of the structure of the data.
When thinking about information architecture, it’s how do you bring together disparate data—structured versus
unstructured and semi-structured, and [harmonize] them… you want to take advantage of all possibilities that
create appropriate and representative views of the world that AI is going to operate in.”

UNDERSTANDING DATA REGULATION AND COMPLIANCE REQUIREMENTS
Naturally, the extent to which data is regulated depends on the applicable jurisdictions in play—most of these
regulations are not unique to machine learning; likewise, machine learning is not a shield, workaround, or
otherwise a pass enabling organizations to flout data regulations. “Applicable jurisdictions” is also intentionally
broad—regulations like GDPR apply to American firms under specific (though broad) circumstances.

Bias is the primary issue when using machine learning. “In regulated industries such as banking and insurance,
they’ve long faced regulatory oversight to ensure that decisions on loans, claims, policy issuance, etc., are
explainable and unbiased,” Henschen told ZDNet. “As the use of ML and AI spread, I think we’ll see more
general interest and demand for explainability and transparency. Bias is not just a matter of the models; it’s also
a matter of the data.”


CIO Jury: 92 percent of tech leaders have no policy for ethically using AI
BY ALISON DENISCO RAYOME
As more organizations explore artificial intelligence (AI) and machine learning tools, some are beginning to
grapple with ethical questions that may arise around bias, interpretability, robustness, security, and governance.
However, very few have policies in place to ensure that AI is used ethically, according to a TechRepublic CIO
Jury poll.

When asked, “Does your company have a policy for ethically using AI or machine learning?” 11 out of 12 tech
leaders said no, while just one said yes.

However, most of the ‘Nos’ are not expected to stay that way for long.

“’Not yet’ would be more accurate long-term,” said John C. Gracyalny, vice president of digital member
services at Coast Central Credit Union.

Dan Gallivan, director of information technology at Payette, agreed. “Something tells me we should be adding
it to our current IT policies!” he said.

For some, including Michael Hanken, vice president of IT, Multiquip Inc., it’s simply “too early in the game,”
but policies will likely come in the future.

While Power Home Remodeling does not have an ethical AI policy, it does have policies for the ethical use of
technology in general, which extends to AI, said CIO Timothy Wenhold.

“That said, after embarking on multiple machine learning [projects], I believe that there will be a need
for companies to monitor the application of AI within their organizations and processes,” Wenhold said.
“Through this practice we will gain the necessary knowledge to craft updated policies that will allow our
organizations to govern the use of AI and remain good corporate citizens.”

The topic will continue to be relevant in the coming years, said Greg Carter, CTO of GlobalTranz.

“In logistics, AI and machine learning are becoming increasingly important to how logistics services providers
manage the flow of goods and materials through the supply chain,” Carter said. “For example, one area we
are exploring now is using AI to model the behavior of specific elements of the supply chain -- including
drivers. We are essentially creating a digital persona of drivers in an effort to understand their preferred routes
and load types. This will allow us to book the ideal driver for multiple loads in advance. Knowing this much


about a driver and targeting using AI requires a governance, policy, and security framework to make sure this
information is not misused.”

AI and its related technologies are already impacting how users interact with the internet, said Kris Seeburn, an
independent IT consultant, evangelist, and researcher.

“AI for us brings greatly the potential to vastly change the way that humans/stakeholders and staff interact,
not only with the digital world, but also with each other, through their work and through other socioeconomic
institutions -- for better or for worse,” Seeburn said. “We want to ensure that the impact of artificial
intelligence will be in a positive way, and that we do recognize the essentials that all stakeholders participate in
the use and adoption surrounding AI and machine learning principles.”

Organizations should implement policies for ethically using AI and machine learning in the near future, because
the long-term effects of not doing so could be damaging for the business, said Christopher Hazard, CTO of
Diveplane, who was not a member of the CIO Jury.

“Implementing AI without interpretability can lead to loss of tacit knowledge in the organization, leaving the
business unable to adapt to changing circumstances due to the inability to understand those circumstances,”
Hazard said. “Lack of AI robustness can exacerbate the inability to adapt, as well as potentially lead to the
business being vulnerable to exploitations by customers, employees, or competitors. The company should also
document the trade-offs they are willing and prepared to make with regard to removing bias from their AI
deployments to ensure that bias is properly prioritized and addressed throughout the organization.”

This month’s CIO Jury included:

• Lance Taylor-Warren, CIO, Community Health Alliance
• Michael Hanken, vice president of IT, Multiquip Inc.
• John C. Gracyalny, vice president of digital member services, Coast Central Credit Union
• Dan Gallivan, director of information technology, Payette
• Timothy Wenhold, CIO, Power Home Remodeling
• Kris Seeburn, independent IT consultant, evangelist, and researcher
• Joel Robertson, CIO, King University
• Jeff Focke, director of IT, Shealy Electrical Wholesalers
• Jeff Kopp, technology director, Christ the King Catholic School
• Eric Carrasquilla, senior vice president of product, Apttus
• David Wilson, director of IT services, VectorCSP
• Greg Carter, CTO, GlobalTranz


What is AI? Everything you need to know


BY NICK HEATH
What is artificial intelligence (AI)?

It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task
performed by a program or a machine that, if a human carried out the same activity, we would say the human
had to apply intelligence to accomplish the task.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether
something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviors associated with human intel-
ligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and
manipulation and, to a lesser extent, social intelligence and creativity.

WHAT ARE THE USES FOR AI?


AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to
virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognise who and what is in a photo, to spot
spam, or detect credit card fraud.

WHAT ARE THE DIFFERENT TYPES OF AI?


At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or
learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant
on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines
that suggest products you might like based on what you bought in the past. Unlike humans, these systems can
only learn or be taught how to do specific tasks, which is why they are called narrow AI.


WHAT CAN NARROW AI DO?


There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying
out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars,
responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks
like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays,
flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices,
the list goes on and on.

WHAT CAN GENERAL AI DO?


Artificial general intelligence is very different, and is the type of adaptable intellect found in humans, a flexible
form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to
building spreadsheets, or to reason about a wide variety of topics based on its accumulated experience. This is
the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which
doesn’t exist today, and AI experts are fiercely divided over how soon it will become a reality.

A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C Müller and
philosopher Nick Bostrom reported a 50 percent chance that Artificial General Intelligence (AGI) would be
developed between 2040 and 2050, rising to 90 percent by 2075. The respondents went even further, predicting
that so-called ‘superintelligence’ -- which Bostrom defines as “any intellect that greatly exceeds the cognitive
performance of humans in virtually all domains of interest” -- was expected some 30 years after the
achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of
the human brain, and believe that AGI is still centuries away.

WHAT IS MACHINE LEARNING?


There is a broad body of research in AI, with many strands that feed into and complement one another.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large
amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or
captioning a photograph.

WHAT ARE NEURAL NETWORKS?


Key to the process of machine learning are neural networks. These are brain-inspired networks of
interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained


to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers.
During training of these neural networks, the weights attached to different inputs will continue to be varied
until the output from the neural network is very close to what is desired, at which point the network will have
‘learned’ how to carry out a particular task.
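
A toy illustration of that weight-adjustment process, using plain NumPy: a single ‘neuron’ with two inputs has its weights nudged repeatedly until its output matches the desired output. The data and learning rate are invented for the example:

import numpy as np

rng = np.random.default_rng(0)
x = rng.random((100, 2))            # 100 examples, 2 inputs each
target = x @ np.array([0.7, -0.3])  # the relationship to be learned

weights = np.zeros(2)               # the network starts knowing nothing
for step in range(500):
    output = x @ weights             # current predictions
    error = output - target
    gradient = x.T @ error / len(x)  # direction to adjust each weight
    weights -= 0.5 * gradient        # modify the importance of each input

print(weights)  # approaches [0.7, -0.3]: the task has been 'learned'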

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks
with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks
that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition
and computer vision.

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural
networks are a type of neural net particularly well suited to language processing and speech recognition, while
convolutional neural networks are more commonly used in image recognition. The design of neural networks
is also evolving, with researchers recently refining a more effective form of deep neural network called long
short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like
Google Translate.

THE STRUCTURE AND TRAINING OF DEEP NEURAL NETWORKS. (IMAGE: NUANCE)


Another area of AI research is evolutionary computation, which borrows from Darwin’s famous theory
of natural selection, and sees genetic algorithms undergo random mutations and combinations between
generations in an attempt to evolve the optimal solution to a given problem.
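
A compact sketch of that mutate-and-select loop in Python; the fitness function here (counting 1-bits in a bit string) is a stand-in for whatever objective a real problem would define:

import random

def fitness(candidate):
    return sum(candidate)  # toy objective: maximise the number of 1s

def mutate(candidate, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in candidate]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
for generation in range(100):
    # Selection: keep the fitter half of the population...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and refill it with randomly mutated copies of the survivors.
    population = survivors + [mutate(s) for s in survivors]

print(fitness(population[0]), "of 32 bits set after evolution")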

This approach has even been used to help design AI models, effectively using AI to help build AI. This use
of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important
role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly
as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs,
which released papers on using genetic algorithms to train deep neural networks for reinforcement
learning problems.

Finally there are expert systems, where computers are programmed with rules that allow them to take a series
of decisions based on a large number of inputs, allowing that machine to mimic the behavior of a human
expert in a specific domain. An example of these knowledge-based systems might be, for example, an autopilot
system flying a plane.

WHAT IS FUELING THE RESURGENCE IN AI?


The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in
particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel
computing power in recent years, during which time the use of GPU clusters to train machine-learning systems
has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they
are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google
and Microsoft, have moved to using specialized chips tailored to both running, and more recently training,
machine-learning models.

An example of one of these custom chips is Google’s Tensor Processing Unit (TPU), the latest version of
which accelerates the rate at which useful machine-learning models built using Google’s TensorFlow software
library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that
underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public
to build machine learning models using Google’s TensorFlow Research Cloud. The second generation of these
chips was unveiled at Google’s I/O conference in May last year, with an array of these new TPUs able to train


a Google machine-learning model used for translation in half the time it would take an array of the top-end
graphics processing units (GPUs).

WHAT ARE THE ELEMENTS OF MACHINE LEARNING?


As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised
and unsupervised learning.

Supervised learning
A common technique for teaching AI systems is by training them using a very large number of labeled
examples. These machine-learning systems are fed huge amounts of data, which has been annotated to
highlight the features of interest. These might be photos labeled to indicate whether they contain a dog or
written sentences that have footnotes to indicate whether the word ‘bass’ relates to music or a fish. Once
trained, the system can then apply these labels to new data, for example to a dog in a photo that’s just been
uploaded.

This process of teaching a machine by example is called supervised learning and the role of labeling these
examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical
Turk.
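
In code, supervised learning can be as compact as the following scikit-learn sketch, which stands in for the labelled-photos example with a synthetic dataset:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 1,000 labelled examples with 10 features each, in two classes.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_new, y_train, y_true = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)         # learn from the labelled examples

predictions = model.predict(X_new)  # label data the model has never seen
print("accuracy on new data:", model.score(X_new, y_true))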

Training these systems typically requires vast amounts of data, with some systems needing to scour millions
of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of
big data and widespread data mining. Training datasets are huge and growing in size -- Google’s Open Images
Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million
labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images.
Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through
Amazon Mechanical Turk -- who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large
amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are
fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out
tasks using a far smaller amount of labelled data than is necessary for training systems using supervised
learning today.


Unsupervised learning
In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data,
looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn’t set up in advance to pick out specific types of data; it simply looks for data that can be
grouped by its similarities, for example Google News grouping together stories on similar topics each day.
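
To make the idea concrete, the short sketch below (our illustration, not production code) uses the widely available scikit-learn library to cluster a handful of made-up cars by engine size and weight. The algorithm is never told what the groups mean; it finds them on its own.

# A minimal unsupervised-learning sketch using the scikit-learn library.
# The algorithm is never told which car is which; it simply groups the
# data points by similarity, much as Google News groups similar stories.
import numpy as np
from sklearn.cluster import KMeans

# Each row is a car: [engine size in litres, weight in kg] (invented figures)
cars = np.array([
    [1.0, 950], [1.2, 1000], [1.4, 1100],    # small hatchbacks
    [3.0, 1900], [3.5, 2100], [4.0, 2200],   # large SUVs
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cars)
print(kmeans.labels_)  # two groups found without any labels, e.g. [1 1 1 0 0 0]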

Reinforcement learning
A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going
through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind’s Deep Q-network, which has been used to
best human performance in a variety of classic video games. The system is fed pixels from each game and
determines various information, such as the distance between objects on screen.
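
To see that trial-and-error loop in miniature, the sketch below implements tabular Q-learning, a much simpler relative of the Deep Q-network, in plain Python. The five-square corridor and reward values are invented for illustration; the agent learns, purely from rewards, to head towards the goal.

# An illustrative reinforcement-learning sketch: tabular Q-learning on an
# invented five-square corridor. The agent is rewarded only for reaching
# the right-hand end, and learns by trial and error to head towards it.
import random

n_states = 5
actions = [-1, +1]                        # step left or step right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Mostly pick the best-known action, sometimes explore at random
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Nudge the value of (state, action) towards reward plus future value
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: from every square, the best action is to move right (+1)
print([max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)])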

MANY AI-RELATED TECHNOLOGIES ARE APPROACHING, OR HAVE ALREADY REACHED, THE ‘PEAK OF
INFLATED EXPECTATIONS’ IN GARTNER’S HYPE CYCLE, WITH THE BACKLASH-DRIVEN ‘TROUGH OF
DISILLUSIONMENT’ LYING IN WAIT. (IMAGE: GARTNER / ANNOTATIONS: ZDNET)

By also looking at the score achieved in each game, the system builds a model of which action will maximize the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.

WHICH ARE THE LEADING FIRMS IN AI?

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google with its DeepMind AI AlphaGo that has made the biggest impact on the public awareness of AI.

WHICH AI SERVICES ARE AVAILABLE?

All of the major cloud platforms—Amazon Web Services, Microsoft Azure and Google Cloud Platform—provide access to GPU arrays for training and running machine-learning models, with Google also gearing up to let users use its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amount of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don’t want to build their own machine-learning models but instead want to consume AI-powered, on-demand services—such as voice, vision, and language recognition—Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella—and recently investing $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.

WHICH OF THE MAJOR TECH FIRMS IS WINNING THE AI RACE?

Internally, each of the tech giants—and others such as Facebook—uses AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam—the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple’s
Siri, Amazon’s Alexa, the Google Assistant, and Microsoft Cortana.

THE AMAZON ECHO PLUS IS A SMART SPEAKER WITH ACCESS TO AMAZON’S ALEXA VIRTUAL ASSISTANT BUILT IN.
(IMAGE: JASON CIPRIANI/ZDNET)

Relying heavily on voice recognition and natural-language processing, as well as needing an immense corpus to
draw upon to answer queries, a huge amount of tech goes into developing these assistants.

But while Apple’s Siri may have come to prominence first, it is Google and Amazon whose assistants have
since overtaken Apple in the AI space -- Google Assistant with its ability to answer a wide range of queries and
Amazon’s Alexa with the massive number of ‘Skills’ that third-party devs have created to add to its capabilities.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion
that major PC makers will build Alexa into laptops adding to speculation about whether Cortana’s days are
numbered, although Microsoft was quick to reject this.

WHICH COUNTRIES ARE LEADING THE WAY IN AI?


It’d be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. China is also pursuing a three-step national plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain,
and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce
them by 2021.

Baidu’s self-driving car, a modified BMW 3 series. (Image: Baidu)

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto
manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by
major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage
over the US when it comes to future AI research, with one analyst describing the chances of China taking the
lead over the US as 500 to one in China’s favor.

HOW CAN I GET STARTED WITH AI?


While you could try to build your own GPU array at home and start training a machine-learning model,
probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own
machine-learning models through to web services that allow you to access AI-powered tools such as speech,
language, vision and sentiment recognition on demand.

WHAT ARE RECENT LANDMARKS IN THE DEVELOPMENT OF AI?

There are too many to put together a comprehensive list, but some recent highlights include: in 2009 Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each -- setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

IBM WATSON COMPETES ON JEOPARDY! ON JANUARY 14, 2011. (IMAGE: IBM)

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision,
with Google training a system to recognise an internet favorite, pictures of cats.

Since Watson’s win, perhaps the most famous demonstration of the efficacy of machine-learning systems was
the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese
game whose complexity stumped computers for decades. Go has about 200 moves per turn, compared to
about 20 in Chess. Over the course of a game of Go, there are so many possible moves that searching through
each of them in advance to identify the best play is too costly from a computational point of view. Instead,
AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games
and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested
and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played
“completely random” games against itself, and then learnt from the results. At last year’s prestigious Neural
Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed
AlphaGo had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year a system trained by OpenAI defeated the world’s top
players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

HOW WILL AI CHANGE THE WORLD?


Robots and driverless cars
The desire for robots to be able to act autonomously and understand and navigate the world around them
means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in
robotics, use of AI is helping robots move into new areas such as self-driving cars, delivery robots, as well as
helping robots to learn new skills. General Motors recently said it would build a driverless car without a steering
wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside
Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news
We are on the verge of having neural networks that can create photo-realistic images or replicate someone’s
voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as
no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about
how such technologies will be used to misappropriate people’s image, with tools already being created to
convincingly splice famous actresses into adult films.

Speech and language recognition


Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost
95 percent. Recently Microsoft’s Artificial Intelligence and Research group reported it had developed a system
able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm
alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance


In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech
giant Baidu says it can match faces with 99 percent accuracy, providing the face is clear enough on the video.
While police forces in western countries have generally only trialled using facial-recognition systems at large
events, in China the authorities are mounting a nationwide program to connect CCTV across the country to
facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use
of facial-recognition glasses by police.

Although privacy regulations vary across the world, it’s likely this more intrusive use of AI technology—including AI that can recognize emotions—will gradually become more widespread elsewhere.

Healthcare
AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays,
aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to
more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM’s Watson
clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center,
and the use of Google DeepMind systems by the UK’s National Health Service, where it will help spot eye
abnormalities and streamline the process of screening patients for head and neck cancers.

WILL AI KILL US ALL?


Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides
have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a “fundamental risk to the existence of human civili-
zation”. As part of his push for stronger regulatory oversight and more responsible research into mitigating the
downsides of AI he set up OpenAI, a non-profit artificial intelligence research company that aims to promote
and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking has
warned that once a sufficiently advanced AI is created it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.
Chris Bishop, Microsoft’s director of research in
Cambridge, England, stresses how different the
narrow intelligence of AI today is from the general intelligence of humans, saying that when people worry
about “Terminator and the rise of the machines and so on? Utter nonsense, yes. At best, such discussions are
decades away.”

WILL AN AI STEAL YOUR JOB?


The prospect of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future possibility.

While AI won’t replace all jobs, what seems to be certain is that AI will change the nature of work, with the
only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn’t have the potential to impact. As AI expert Andrew
Ng puts it: “many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at
automating routine, repetitive work”, saying he sees a “significant risk of technological unemployment over the
next few decades”.

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers simply take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon
again is leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves
of products to human pickers who select items to be sent out. Amazon has more than 100,000 bots in its
fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has
grown, so has the number of human workers in these warehouses. However, Amazon and small robotics
firms are working to automate the remaining manual jobs in the warehouse, so it’s not a given that manual and
robotic labor will continue to grow hand-in-hand.

Amazon bought Kiva robotics in 2012 and today uses Kiva robots throughout its warehouses. (Image: Amazon)

Fully autonomous self-driving vehicles aren’t a reality yet, but by some predictions the self-driving trucking
industry alone is poised to take over 1.7 million jobs in the next decade, even without considering the impact
on couriers and taxi drivers.

Yet some of the easiest jobs to automate won’t even require robotics. At present there are millions of people
working in administration, entering and copying data between systems, chasing and booking appointments
for companies. As software gets better at automatically updating systems and flagging the information that’s
important, so the need for administrators will fall.

As with every technological shift, new jobs will be created to replace those lost. However, what’s uncertain is
whether these new roles will be created rapidly enough to offer employment to those displaced, and whether
the newly unemployed will have the necessary skills or temperament to fill these emerging roles.

Not everyone is a pessimist. For some, AI is a technology that will augment, rather than replace, workers. Not
only that but they argue there will be a commercial imperative to not replace people outright, as an AI-assisted
worker -- think a human concierge with an AR headset that tells them exactly what a client wants before they
ask for it -- will be more productive or effective than an AI working on its own.

Among AI experts there’s a broad range of opinion about how quickly artificially intelligent systems will
surpass human capabilities.

Oxford University’s Future of Humanity Institute asked several hundred machine-learning experts to predict AI capabilities over the coming decades.

Notable dates included AI writing essays that could pass for being written by a human by 2026, truck drivers
being made redundant by 2027, AI surpassing human capabilities in retail by 2031, writing a best-seller by 2049,
and doing a surgeon’s work by 2053.

They estimated there was a relatively high chance that AI beats humans at all tasks within 45 years and
automates all human jobs within 120 years.

What is machine learning? Everything you need to know
BY NICK HEATH
Machine learning is enabling computers to tackle tasks that have, until now, only been carried out by people.

From driving cars to translating speech, machine learning is driving an explosion in the capabilities of artificial
intelligence—helping software make sense of the messy and unpredictable real world.

But what exactly is machine learning and what is making the current boom in machine learning possible?

WHAT IS MACHINE LEARNING?


At a very high level, machine learning is the process of teaching a computer system how to make accurate
predictions when fed data.

Those predictions could be answering whether a piece of fruit in a photo is a banana or an apple, spotting
people crossing the road in front of a self-driving car, whether the use of the word book in a sentence relates
to a paperback or a hotel reservation, whether an email is spam, or recognizing speech accurately enough to
generate captions for a YouTube video.

The key difference from traditional computer software is that a human developer hasn’t written code that
instructs the system how to tell the difference between the banana and the apple.

Instead a machine-learning model has been taught how to reliably discriminate between the fruits by being
trained on a large amount of data, in this instance likely a huge number of images labelled as containing a
banana or an apple.

Data, and lots of it, is the key to making machine learning possible.

• Blockchain, AI, machine learning: What do CIOs really think are the most exciting tech trends?
• How to use machine learning to accelerate your IoT initiatives

WHAT IS THE DIFFERENCE BETWEEN AI AND MACHINE LEARNING?

Machine learning may have enjoyed enormous success of late, but it is just one method for achieving artificial
intelligence.

At the birth of the field of AI in the 1950s, AI was defined as any machine capable of performing a task that
would typically require human intelligence.

AI systems will generally demonstrate at least some of the following traits: planning, learning, reasoning,
problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social
intelligence and creativity.

Alongside machine learning, there are various other approaches used to build AI systems, including
evolutionary computation, where algorithms undergo random mutations and combinations between
generations in an attempt to “evolve” optimal solutions, and expert systems, where computers are programmed
with rules that allow them to mimic the behavior of a human expert in a specific domain, for example an
autopilot system flying a plane.

WHAT ARE THE MAIN TYPES OF MACHINE LEARNING?


Machine learning is generally split into two main categories: supervised and unsupervised learning.

WHAT IS SUPERVISED LEARNING?


This approach basically teaches machines by example.

During training for supervised learning, systems are exposed to large amounts of labelled data, for example
images of handwritten figures annotated to indicate which number they correspond to. Given sufficient
examples, a supervised-learning system would learn to recognize the clusters of pixels and shapes associated
with each number and eventually be able to recognize handwritten numbers, able to reliably distinguish
between the numbers 9 and 4 or 6 and 8.
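
As a hands-on illustration of learning from labelled examples, the sketch below (our own, using the scikit-learn library’s small built-in digits dataset rather than any system mentioned above) trains a classifier on labelled handwritten digits and then checks how well it recognizes digits it has never seen.

# A minimal supervised-learning sketch: recognizing handwritten digits from
# labelled examples, using scikit-learn's small built-in digits dataset.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                       # 8x8 images, each labelled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000)    # a simple classifier
model.fit(X_train, y_train)                  # learn from the labelled examples

# Accuracy on digits the model has never seen, typically well above 90 percent
print(model.score(X_test, y_test))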

However, training these systems typically requires huge amounts of labelled data, with some systems needing to
be exposed to millions of examples to master a task.

As a result, the datasets used to train these systems can be vast, with Google’s Open Images Dataset having
about nine million images, its labeled video repository YouTube-8M linking to seven million labeled videos
and ImageNet, one of the early databases of this kind, having more than 14 million categorized images. The
size of training datasets continues to grow, with Facebook recently announcing it had compiled 3.5 billion
images publicly available on Instagram, using hashtags attached to each image as labels. Using one billion of
these photos to train an image-recognition system yielded record levels of accuracy—of 85.4 percent—on
ImageNet’s benchmark.

The laborious process of labeling the datasets used in training is often carried out using crowdworking services,
such as Amazon Mechanical Turk, which provides access to a large pool of low-cost labor spread across the
globe. For instance, ImageNet was put together over two years by nearly 50,000 people, mainly recruited
through Amazon Mechanical Turk. However, Facebook’s approach of using publicly available data to train
systems could provide an alternative way of training systems using billion-strong datasets without the overhead
of manual labeling.

• How machine learning can be used to catch a hacker (TechRepublic)
• Scientists built this Raspberry Pi-powered, 3D-printed robot-lab to study flies

WHAT IS UNSUPERVISED LEARNING?


In contrast, unsupervised learning tasks algorithms with identifying patterns in data, trying to spot similarities
that split that data into categories.

An example might be Airbnb clustering together houses available to rent by neighborhood, or Google News
grouping together stories on similar topics each day.

The algorithm isn’t designed to single out specific types of data; it simply looks for data that can be grouped by
its similarities, or for anomalies that stand out.

WHAT IS SEMI-SUPERVISED LEARNING?


The importance of huge sets of labelled data for training machine-learning systems may diminish over time,
due to the rise of semi-supervised learning.

As the name suggests, the approach mixes supervised and unsupervised learning. The technique relies upon
using a small amount of labelled data and a large amount of unlabelled data to train systems. The labelled data
is used to partially train a machine-learning model, and then that partially trained model is used to label the
unlabelled data, a process called pseudo-labelling. The model is then trained on the resulting mix of the labelled
and pseudo-labelled data.
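
A minimal sketch of that pseudo-labelling loop, again using scikit-learn and treating most of its built-in digits dataset as if the labels were unknown, might look like this:

# A simplified semi-supervised sketch: pseudo-labelling with scikit-learn.
# A model trained on a small labelled set labels the rest of the data itself,
# then is retrained on the combined labelled and pseudo-labelled data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
X, y = digits.data, digits.target

labelled = np.arange(100)           # pretend only 100 examples have labels
unlabelled = np.arange(100, 1500)   # a much larger pool with no labels
held_out = np.arange(1500, len(X))  # kept aside to measure performance

# Step 1: partially train a model on the small labelled set
model = LogisticRegression(max_iter=5000).fit(X[labelled], y[labelled])

# Step 2: use that model to pseudo-label the unlabelled pool
pseudo_labels = model.predict(X[unlabelled])

# Step 3: retrain on the mix of real labels and pseudo-labels
X_mix = np.concatenate([X[labelled], X[unlabelled]])
y_mix = np.concatenate([y[labelled], pseudo_labels])
model = LogisticRegression(max_iter=5000).fit(X_mix, y_mix)

print(model.score(X[held_out], y[held_out]))  # accuracy on unseen digits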

The viability of semi-supervised learning has been boosted recently by Generative Adversarial Networks
(GANs), machine-learning systems that can use labelled data to generate completely new data, for example
creating new images of Pokemon from existing images, which in turn can be used to help train a machine-
learning model.

Were semi-supervised learning to become as effective as supervised learning, then access to huge amounts of
computing power may end up being more important for successfully training machine-learning systems than
access to large, labelled datasets.

WHAT IS REINFORCEMENT LEARNING?


A way to understand reinforcement learning is to think about how someone might learn to play an old school
computer game for the first time, when they aren’t familiar with the rules or how to control the game. While
they may be a complete novice, eventually, by looking at the relationship between the buttons they press, what
happens on screen and their in-game score, their performance will get better and better.

An example of reinforcement learning is Google DeepMind’s Deep Q-network, which has beaten humans in
a wide range of vintage video games. The system is fed pixels from each game and determines various infor-
mation about the state of the game, such as the distance between objects on screen. It then considers how the
state of the game and the actions it performs in game relate to the score it achieves.

Over the process of many cycles of playing the game, eventually the system builds a model of which actions
will maximize the score in which circumstance, for instance, in the case of the video game Breakout, where the
paddle should be moved to in order to intercept the ball.

HOW DOES SUPERVISED MACHINE LEARNING WORK?


Everything begins with training a machine-learning model, a mathematical function capable of repeatedly
modifying how it operates until it can make accurate predictions when given fresh data.

Before training begins, you first have to choose which data to gather and decide which features of the data are
important.

A hugely simplified example of what data features are is given in this explainer by Google, where a machine
learning model is trained to recognize the difference between beer and wine, based on two features, the drinks’
color and their alcohol by volume (ABV).

Each drink is labelled as a beer or a wine, and then the relevant data is collected, using a spectrometer to
measure their color and a hydrometer to measure their alcohol content.

An important point to note is that the data has to be balanced, in this instance to have a roughly equal number
of examples of beer and wine.

The gathered data is then split, into a larger proportion for training, say about 70 percent, and a smaller
proportion for evaluation, say the remaining 30 percent. This evaluation data allows the trained model to be
tested to see how well it is likely to perform on real-world data.

Before training gets underway there will generally also be a data-preparation step, during which processes such
as deduplication, normalization and error correction will be carried out.

The next step will be choosing an appropriate machine-learning model from the wide variety available. Each
has strengths and weaknesses depending on the type of data, for example some are suited to handling images,
some to text, and some to purely numerical data.
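
Pulling those steps together, a toy version of the beer-and-wine pipeline might look like the sketch below. The colour and ABV figures are invented stand-ins for real spectrometer and hydrometer readings, and the decision-tree model is just one reasonable choice for this kind of numerical data.

# A toy end-to-end version of the beer-and-wine example. The colour and ABV
# figures are invented stand-ins for real spectrometer/hydrometer readings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# A balanced dataset: 100 beers and 100 wines, two features each
beers = np.column_stack([rng.normal(10, 2, 100),    # colour (arbitrary units)
                         rng.normal(4.5, 1, 100)])  # ABV, around 4.5 percent
wines = np.column_stack([rng.normal(3, 1, 100),
                         rng.normal(13, 1.5, 100)]) # ABV, around 13 percent
X = np.vstack([beers, wines])
y = np.array([0] * 100 + [1] * 100)                 # 0 = beer, 1 = wine

# Split roughly 70 percent for training, 30 percent for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on the held-back 30 percent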

HOW DOES SUPERVISED MACHINE-LEARNING TRAINING WORK?

Basically, the training process involves the machine-learning model automatically tweaking how it functions
until it can make accurate predictions from data, in the Google example, correctly labeling a drink as beer or
wine when the model is given a drink’s color and ABV.

A good way to explain the training process is to consider an example using a simple machine-learning model,
known as linear regression with gradient descent. In the following example, the model is used to estimate how
many ice creams will be sold based on the outside temperature.

Imagine taking past data showing ice cream sales and outside temperature, and plotting that data against each
other on a scatter graph—basically creating a scattering of discrete points.

To predict how many ice creams will be sold in future based on the outdoor temperature, you can draw a line that passes through the middle of all these points, similar to this illustration.

Once this is done, ice cream sales can be predicted at any temperature by finding the point at which the line passes through a particular temperature and reading off the corresponding sales at that point.

IMAGE: NICK HEATH / ZDNET

Bringing it back to training a machine-learning model, in this instance training a linear regression model would
involve adjusting the vertical position and slope of the line until it lies in the middle of all of the points on the
scatter graph.

At each step of the training process, the vertical distance of each of these points from the line is measured.
If a change in slope or position of the line results in the distance to these points increasing, then the slope or
position of the line is changed in the opposite direction, and a new measurement is taken.

In this way, via many tiny adjustments to the slope and the position of the line, the line will keep moving until it eventually settles in a position which is a good fit for the distribution of all these points. Once this training process is complete, the line can be used to make accurate predictions for how temperature will affect ice cream sales, and the machine-learning model can be said to have been trained.
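
The whole procedure fits in a few lines of code. In the sketch below (our own illustration, with invented sales figures), the slope and intercept of the line are nudged in whichever direction shrinks the average squared error:

# Linear regression trained by gradient descent: fitting a line to made-up
# ice cream sales data by repeatedly nudging the slope and intercept.
import numpy as np

temps = np.array([15.0, 18.0, 21.0, 24.0, 27.0, 30.0])  # degrees C
sales = np.array([120., 150., 210., 240., 300., 330.])  # ice creams sold

slope, intercept = 0.0, 0.0
learning_rate = 0.001

for step in range(200000):
    predictions = slope * temps + intercept
    errors = predictions - sales
    # Gradients of the mean squared error with respect to slope and intercept
    slope -= learning_rate * 2 * np.mean(errors * temps)
    intercept -= learning_rate * 2 * np.mean(errors)

print(slope, intercept)        # roughly 14.5 extra ice creams per extra degree
print(slope * 25 + intercept)  # predicted sales on a 25C day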

While training for more complex machine-learning models such as neural networks differs in several respects, it
is similar in that it also uses a “gradient descent” approach, where the value of “weights” that modify input data
are repeatedly tweaked until the output values produced by the model are as close as possible to what is desired.

• To master artificial intelligence, don’t forget people and process
• How Adobe moves AI, machine learning research to the product pipeline

HOW TO EVALUATE MACHINE-LEARNING MODELS?


Once training of the model is complete, the model is evaluated using the remaining data that wasn’t used
during training, helping to gauge its real-world performance.

To further improve performance, training parameters can be tuned. An example might be altering the extent to
which the “weights” are altered at each step in the training process.
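
As a rough illustration of such tuning, the snippet below refits the ice cream line from the earlier sketch with several candidate learning rates and keeps whichever gives the lowest error on held-back evaluation points (the split used here is invented for brevity):

# Tuning a training parameter: trying several learning rates and keeping the
# one whose fitted line has the lowest error on held-back evaluation data.
import numpy as np

temps = np.array([15.0, 18.0, 21.0, 24.0, 27.0, 30.0])
sales = np.array([120., 150., 210., 240., 300., 330.])
train, test = [0, 1, 3, 5], [2, 4]          # a simple train/evaluation split

def fit(lr, steps=100000):
    slope = intercept = 0.0
    for _ in range(steps):
        errors = slope * temps[train] + intercept - sales[train]
        slope -= lr * 2 * np.mean(errors * temps[train])
        intercept -= lr * 2 * np.mean(errors)
    return slope, intercept

for lr in [0.0001, 0.0005, 0.001]:
    slope, intercept = fit(lr)
    test_error = np.mean((slope * temps[test] + intercept - sales[test]) ** 2)
    print(lr, round(test_error, 1))         # keep the rate with lowest error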

WHAT ARE NEURAL NETWORKS AND HOW ARE THEY TRAINED?

A very important group of algorithms for both supervised and unsupervised machine learning are neural
networks. These underlie much of machine learning, and while simple models like linear regression can
be used to make predictions based on a small number of data features, as in the Google example with beer and
wine, neural networks are useful when dealing with large sets of data with many features.

Neural networks, whose structure is loosely inspired by that of the brain, are interconnected layers of
algorithms, called neurons, which feed data into each other, with the output of the preceding layer being the
input of the subsequent layer.

Each layer can be thought of as recognizing different features of the overall data. For instance, consider the
example of using machine learning to recognize handwritten numbers between 0 and 9. The first layer in the
neural network might measure the color of the individual pixels in the image, the second layer could spot
shapes, such as lines and curves, the next layer might look for larger components of the written number—for
example, the rounded loop at the base of the number 6. This carries on all the way through to the final layer,
which will output the probability that a given handwritten figure is a number between 0 and 9.

The network learns how to recognize each component of the numbers during the training process, by gradually
tweaking the importance of data as it flows between the layers of the network. This is possible due to each link
between layers having an attached weight, whose value can be increased or decreased to alter that link’s signif-
icance. At the end of each training cycle the system will examine whether the neural network’s final output
is getting closer or further away from what is desired—for instance is the network getting better or worse at
identifying a handwritten number 6. To close the gap between the actual output and desired output,
the system will then work backwards through the neural network, altering the weights attached to all of these
links between layers, as well as an associated value called bias. This process is called back-propagation.

Eventually this process will settle on values for these weights and biases that will allow the network to reliably
perform a given task, such as recognizing handwritten numbers, and the network can be said to have “learned”
how to carry out a specific task.
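
A full deep-learning framework isn’t needed to see this mechanism at work. The sketch below (a deliberately tiny illustration, not how production networks are built) trains a two-layer network with numpy to learn the XOR function, using exactly the forward pass, backward pass and weight-nudging loop described above:

# A tiny two-layer neural network trained by back-propagation, using only
# numpy, learning XOR, a function no single-layer network can represent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # weights and biases, layer 1
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # weights and biases, layer 2

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: data flows through the layers
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: work out how much each weight contributed to the error
    grad_out = (output - y) * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight and bias against its gradient
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hidden
    b1 -= lr * grad_hidden.sum(axis=0)

print(output.round(2))  # typically ends up close to the targets 0, 1, 1, 0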

AN ILLUSTRATION OF THE STRUCTURE OF A NEURAL NETWORK AND HOW TRAINING WORKS. (IMAGE: NVIDIA)

WHAT IS DEEP LEARNING AND WHAT ARE DEEP NEURAL NETWORKS?

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks
with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks
that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition
and computer vision.

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural
networks are a type of neural net particularly well suited to language processing and speech recognition, while
convolutional neural networks are more commonly used in image recognition. The design of neural networks
is also evolving, with researchers recently devising a more efficient design for an effective type of deep neural
network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand
systems like Google Translate.

The AI technique of evolutionary algorithms is even being used to optimize neural networks, thanks to a
process called neuroevolution. The approach was recently showcased by Uber AI Labs, which released papers
on using genetic algorithms to train deep neural networks for reinforcement learning problems.

• Deep Learning: The interest is more than latent
• Dell EMC high-performance computing bundles aimed at AI, deep learning

WHY IS MACHINE LEARNING SO SUCCESSFUL?


While machine learning is not a new technique, interest in the field has exploded in recent years.

This resurgence comes on the back of a series of breakthroughs, with deep learning setting new records for
accuracy in areas such as speech and language recognition, and computer vision.

What’s made these successes possible are primarily two factors, one being the vast quantities of images, speech,
video and text that are accessible to researchers looking to train machine-learning systems.

But even more important is the availability of vast amounts of parallel-processing power, courtesy of modern
graphics processing units (GPUs), which can be linked together into clusters to form machine-learning
powerhouses.

Today anyone with an internet connection can use these clusters to train machine-learning models, via cloud
services provided by firms like Amazon, Google and Microsoft.

As the use of machine-learning has taken off, so companies are now creating specialized hardware tailored to
running and training machine-learning models. An example of one of these custom chips is Google’s Tensor
Processing Unit (TPU), the latest version of which accelerates the rate at which machine-learning models built
using Google’s TensorFlow software library can infer information from data, as well as the rate at which they
can be trained.

These chips are not just used to train models for Google DeepMind and Google Brain, but also the models
that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the
public to build machine learning models using Google’s TensorFlow Research Cloud. The second generation
of these chips was unveiled at Google’s I/O conference in May last year, with an array of these new TPUs
able to train a Google machine-learning model used for translation in half the time it would take an array of
the top-end GPUs, and the recently announced third-generation TPUs able to accelerate training and inference
even further.

As hardware becomes increasingly specialized and machine-learning software frameworks are refined, it’s
becoming increasingly common for ML tasks to be carried out on consumer-grade phones and computers,
rather than in cloud datacenters. In the summer of 2018, Google took a step towards offering the same quality
of automated translation on phones that are offline as is available online, by rolling out local neural machine
translation for 59 languages to the Google Translate app for iOS and Android.

• The great data science hope: Machine learning can cure your terrible data hygiene
• Machine learning as a service: Can privacy be taught?
• Five ways your company can get started implementing AI and ML
• Why AI and machine learning need to be part of your digital transformation plans

WHAT IS ALPHAGO?
Perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph
of the Google DeepMind AlphaGo AI over a human grandmaster in Go, a feat that wasn’t expected until
2026. Go is an ancient Chinese game whose complexity bamboozled computers for decades. Go has about 200
moves per turn, compared to about 20 in Chess. Over the course of a game of Go, there are so many possible
moves that searching through each of them in advance to identify the best play is too costly from a compu-
tational standpoint. Instead, AlphaGo was trained how to play the game by taking moves played by human
experts in 30 million Go games and feeding them into deep-learning neural networks.

Training the deep-learning networks needed can take a very long time, requiring vast amounts of data to be
ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played
“completely random” games against itself, and then learnt from the results. At last year’s prestigious Neural
Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed
AlphaGo had also mastered the games of chess and shogi.

DeepMind continue to break new ground in the field of machine learning. In July 2018, DeepMind reported that its AI agents had taught themselves how to play the 1999 multiplayer 3D first-person shooter Quake III Arena, well enough to beat teams of human players. These agents learned how to play the game using no more information than the human players, with their only input being the pixels on the screen as they tried out random actions in game, and feedback on their performance during each game.

More recently DeepMind demonstrated an AI agent capable of superhuman performance across multiple
classic Atari games, an improvement over earlier approaches where each AI agent could only perform well at
a single game. DeepMind researchers say these general capabilities will be important if AI research is to tackle
more complex real-world domains.

• Google’s AlphaGo retires after beating Chinese Go champion
• DeepMind AlphaGo Zero learns on its own without meatbag intervention

WHAT IS MACHINE LEARNING USED FOR?


Machine learning systems are used all around us, and are a cornerstone of the modern internet.

Machine-learning systems are used to recommend which product you might want to buy next on Amazon or which video you may want to watch on Netflix.

Every Google search uses multiple machine-learning systems, to understand the language in your query through
to personalizing your results, so fishing enthusiasts searching for “bass” aren’t inundated with results about
guitars. Similarly Gmail’s spam and phishing-recognition systems use machine-learning trained models to keep
your inbox clear of rogue messages.

One of the most obvious demonstrations of the power of machine learning are virtual assistants, such as
Apple’s Siri, Amazon’s Alexa, the Google Assistant, and Microsoft Cortana.

Each relies heavily on machine learning to support their voice recognition and ability to understand natural
language, as well as needing an immense corpus to draw upon to answer queries.

But beyond these very visible manifestations of machine learning, systems are starting to find a use in just
about every industry. These exploitations include: computer vision for driverless cars, drones and delivery
robots; speech and language recognition and synthesis for chatbots and service robots; facial recognition for
surveillance in countries like China; helping radiologists to pick out tumors in x-rays, aiding researchers in
spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs
in healthcare; allowing for predictive maintenance on infrastructure by analyzing IoT sensor data; underpinning
the computer vision that makes the cashierless Amazon Go supermarket possible, offering reasonably accurate
transcription and translation of speech for business meetings—the list goes on and on.

Deep-learning could eventually pave the way for robots that can learn directly from humans, with researchers
from Nvidia recently creating a deep-learning system designed to teach a robot how to carry out a task,
simply by observing that job being performed by a human.

• Startup uses AI and machine learning for real-time background checks
• Three out of four believe that AI applications are the next mega trend
• How ubiquitous AI will permeate everything we do without our knowledge

ARE MACHINE-LEARNING SYSTEMS OBJECTIVE?


As you’d expect, the choice and breadth of data used to train systems will influence the tasks they are suited to.

For example, in 2016 Rachael Tatman, a National Science Foundation Graduate Research Fellow in the
Linguistics Department at the University of Washington, found that Google’s speech-recognition system
performed better for male voices than female ones when auto-captioning a sample of YouTube videos, a result
she ascribed to ‘unbalanced training sets’ with a preponderance of male speakers.

As machine-learning systems move into new areas, such as aiding medical diagnosis, the possibility of systems
being skewed towards offering a better service or fairer treatment to particular groups of people will likely
become more of a concern.

WHICH ARE THE BEST MACHINE-LEARNING COURSES?


A heavily recommended course for beginners to teach themselves the fundamentals of machine learning is this
free Stanford University and Coursera lecture series by AI expert and Google Brain founder Andrew Ng.

Another highly-rated free online course, praised for both the breadth of its coverage and the quality of its
teaching, is this EdX and Columbia University introduction to machine learning, although students do mention
it requires a solid knowledge of math up to university level.

HOW TO GET STARTED WITH MACHINE LEARNING?


Technologies designed to allow developers to teach themselves about machine learning are increasingly
common, from AWS’ deep-learning enabled camera DeepLens to Google’s Raspberry Pi-powered AIY kits.

WHICH SERVICES ARE AVAILABLE FOR MACHINE LEARNING?

All of the major cloud platforms—Amazon Web Services, Microsoft Azure and Google Cloud Platform—
provide access to the hardware needed to train and run machine-learning models, with Google letting Cloud
Platform users test out its Tensor Processing Units—custom chips whose design is optimized for training and
running machine-learning models.

This cloud-based infrastructure includes the data stores needed to hold the vast amounts of training data,
services to prepare that data for analysis, and visualization tools to display the results clearly.

Newer services even streamline the creation of custom machine-learning models, with Google recently
revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service
builds custom image-recognition models and requires the user to have no machine-learning expertise, similar
to Microsoft’s Azure Machine Learning Studio. In a similar vein, Amazon recently unveiled new AWS offerings
designed to accelerate the process of training up machine-learning models.

For data scientists, Google’s Cloud ML Engine is a managed machine-learning service that allows users to train,
deploy and export custom machine-learning models based either on Google’s open-sourced TensorFlow ML
framework or the open neural network framework Keras, and which can now be used with the Python libraries scikit-learn and XGBoost.

Database admins without a background in data science can use Google’s BigQueryML, a beta service that
allows admins to call trained machine-learning models using SQL commands, allowing predictions to be made
in-database, which is simpler than exporting data to a separate machine-learning and analytics environment.

For firms that don’t want to build their own machine-learning models, the cloud platforms also offer
AI-powered, on-demand services—such as voice, vision, and language recognition. Microsoft Azure stands out
for the breadth of on-demand services on offer, closely followed by Google Cloud Platform and then AWS.

Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI
services aimed at everything from healthcare to retail, grouping these offerings together under its IBM
Watson umbrella.

Early in 2018, Google expanded its machine-learning driven services to the world of advertising, releasing a
suite of tools for making more effective ads, both digital and physical.

While Apple doesn’t enjoy the same reputation for cutting edge speech recognition, natural language processing
and computer vision as Google and Amazon, it is investing in improving its AI services, recently putting
Google’s former AI chief in charge of machine learning and AI strategy across the company, including the devel-
opment of its assistant Siri and its on-demand machine learning service Core ML.

In September 2018, NVIDIA launched a combined hardware and software platform designed to be installed in
datacenters that can accelerate the rate at which trained machine-learning models can carry out voice, video and
image recognition, as well as other ML-related services.

The NVIDIA TensorRT Hyperscale Inference Platform uses NVIDIA Tesla T4 GPUs, which deliver up to
40x the performance of CPUs when using machine-learning models to make inferences from data, and the
TensorRT software platform, which is designed to optimize the performance of trained neural networks.

• Amazon Web Services adds more data and ML services, but when is enough enough?
• Microsoft Stresses Choice, From SQL Server 2017 to Azure Machine Learning
• Splunk updates flagship suites with machine learning, AI advances

WHICH SOFTWARE LIBRARIES ARE AVAILABLE FOR GETTING STARTED WITH MACHINE LEARNING?

There are a wide variety of software frameworks for getting started with training and running machine-learning
models, typically for the programming languages Python, R, C++, Java and MATLAB.

Famous examples include Google’s TensorFlow, the open-source library Keras, the Python library Scikit-learn,
the deep-learning framework CAFFE and the machine-learning library Torch.

What is artificial general intelligence? Everything you need to know
BY NICK HEATH
An Artificial General Intelligence (AGI) would be a machine capable of understanding the world as well as any
human, and with the same capacity to learn how to carry out a huge range of tasks.

AGI doesn’t exist, but has featured in science-fiction stories for more than a century, and been popularized in
modern times by films such as 2001: A Space Odyssey.

Fictional depictions of AGI vary widely, although they tend more towards the dystopian vision of intelligent
machines eradicating or enslaving humanity, as seen in films like The Matrix or The Terminator. In such stories,
AGI is often cast as either indifferent to human suffering or even bent on mankind’s destruction.

In contrast, utopian imaginings, such as Iain M Banks’ Culture civilization novels, cast AGI as benevolent
custodians, running egalitarian societies free of suffering, where inhabitants can pursue their passions and
technology advances at a breathless pace.

image: istock/PhonlamaiPhoto

Whether these ideas would bear any resemblance to real-world AGI is unknowable since nothing of the sort
has been created, or, according to many working in the field of AI, is even close to being created.

WHAT COULD AN ARTIFICIAL GENERAL INTELLIGENCE DO?


In theory, an artificial general intelligence could carry out any task a human could, and likely many that a human
couldn’t. At the very least, an AGI would be able to combine human-like, flexible thinking and reasoning with
computational advantages, such as near-instant recall and split-second number crunching.

Using this intelligence to control robots at least as dexterous and mobile as a person would result in a new
breed of machines that could perform any human task. Over time these intelligences would be able to take
over every role performed by humans. Initially, humans might be cheaper than machines, or humans working
alongside AI might be more effective than AI on their own. But the advent of AGI would likely render human
labor obsolete.

Effectively ending the need for human labor would have huge social ramifications, impacting both the
population’s ability to feed themselves and the sense of purpose and self-worth employment can bring.

Even today, the debate over the eventual impact on jobs of the very different, narrow AI that currently exist
has led some to call for the introduction of Universal Basic Income (UBI).

Under UBI everyone in society would receive a regular payment from the government with no strings attached.
The approach is divisive, with some advocates arguing it would provide a universal safety net and reduce
bureaucratic costs. However, some anti-poverty campaigners have produced economic models showing such a
scheme could worsen deprivation among vulnerable groups if it replaced existing social security systems
in Europe.

Beyond the impact on social cohesion, the broader effects of artificial general intelligence could be profound. The
ability to employ an army of intelligences equal to the best and brightest humans could help develop new
technologies and approaches for mitigating intractable problems such as climate change. On a more mundane
level, such systems could perform everyday tasks, from surgery and medical diagnosis to driving cars, at a
consistently higher level than humans—which in aggregate could be a huge positive in terms of time, money
and lives saved.

The downside is that this combined intelligence could also have a profoundly negative effect: empowering
surveillance and control of populations, entrenching power in the hands of a small group of organizations,
underpinning fearsome weapons, and removing the need for governments to look after the obsolete populace.

• AI shouldn’t be held back by scaremongering: Michael Dell
• What is AI? Everything you need to know about Artificial Intelligence
• Deloitte: These tech trends will create the Symphonic Enterprise
• Sophia the robot is crowdfunding her brain

COULD AN ARTIFICIAL GENERAL INTELLIGENCE OUTSMART HUMANS?

Yes: not only would such an intelligence have the same general capabilities as a human being, it would also be augmented by the advantages that computers have over humans today, such as perfect recall and the ability to perform calculations near instantaneously.

WHEN WILL AN ARTIFICIAL GENERAL INTELLIGENCE BE INVENTED?

It depends who you ask, with answers ranging from within 11 years to never.

Part of the reason it’s so hard to pin down is the lack of a clear path to AGI. Today machine-learning
systems underpin online services, allowing computers to recognize language, understand speech, spot faces,
and describe photos and videos. These recent breakthroughs, and high-profile successes such as AlphaGo’s
domination of the notoriously complex game of Go, can give the impression society is on the fast track to
developing AGI. Yet the systems in use today are generally rather one-note, excelling at a single task after
extensive training, but useless for anything else. Their nature is very different to that of a general intelligence
that can perform any task asked of it, and as such these narrow AIs aren’t necessarily stepping stones to devel-
oping an AGI.

The limited abilities of today’s narrow AI was highlighted in a recent report, co-authored by Yoav Shoham of
Stanford Artificial Intelligence Laboratory.

“While machines may exhibit stellar performance on a certain task, performance may degrade dramatically if
the task is modified even slightly,” it states.

“For example, a human who can read Chinese characters would likely understand Chinese speech, know
something about Chinese culture and even make good recommendations at Chinese restaurants. In contrast,
very different AI systems would be needed for each of these tasks.”

Michael Wooldridge, head of the computer science department at the University of Oxford, picked up on this
point in the report, stressing “neither I nor anyone else would know how to measure progress” towards AGI.

Despite this uncertainty, there are some highly vocal advocates of near-future AGI. Perhaps the most famous is Ray Kurzweil, Google’s director of engineering, who predicts an AGI capable of passing the Turing Test will exist by 2029 and that by the 2040s affordable computers will perform the same number of calculations per second as the combined brains of the entire human race. Kurzweil’s supporters point to his successful track record in forecasting technological advancement, with Kurzweil estimating that by the end of 2009 just under 80% of the predictions he made in the 1990s had come true.

Kurzweil’s confidence in the rate of progress stems from what he calls the law of accelerating returns. In 2001 he said the exponential nature of technological change, where each advance accelerates the rate of future breakthroughs, means the human race will experience the equivalent of 20,000 years of technological progress in the 21st century. These rapid changes in areas such as computer processing power and brain-mapping technologies are what underpin Kurzweil’s confidence in the near-future development of the hardware and software needed to support an AGI.

WHAT IS SUPERINTELLIGENCE?
Kurzweil believes that once an AGI exists it will improve upon itself at an exponential rate, rapidly evolving to
the point where its intelligence operates at a level beyond human comprehension. He refers to this point as the
singularity, and says it will occur in 2045, at which stage an AI will exist that is “one billion times more powerful
than all human intelligence today”.

The idea of a near-future superintelligence has prompted some of the world’s most prominent scientists and
technologists to warn of the dire risks posed by AGI. SpaceX and Tesla founder Elon Musk calls AGI the
“biggest existential threat” facing humanity and the famous physicist and Cambridge University Professor
Stephen Hawking told the BBC “the development of full artificial intelligence could spell the end of the
human race”.

Both were signatories to an open letter calling on the AI community to engage in “research on how to make AI
systems robust and beneficial”.

Nick Bostrom, philosopher and director of Oxford University’s Future of Humanity Institute, has cautioned about what might happen when superintelligence is reached.

Describing superintelligence as a bomb waiting to be detonated by irresponsible research, he believes superintelligent agents may pose a threat to humans, who could find themselves standing “in its way”.

“If the robot becomes sufficiently powerful,” said Bostrom, “it might seize control to gain rewards.”

IS IT EVEN SENSIBLE TO TALK ABOUT AGI?


The problem with discussing the effects of AGI and superintelligences is that most working in the field of AI
stress that AGI is currently fiction, and may remain so for a very long time.

Chris Bishop, laboratory director at Microsoft Research Cambridge, has said discussions about artificial general
intelligences rising up are “utter nonsense”, adding “at best, such discussions are decades away”.

Worse than such discussions being pointless scaremongering, other AI experts say they are diverting attention
from the near-future risks posed by today’s narrow AI.

Andrew Ng is a well-known figure in the field of deep learning, having previously worked on the “Google
Brain” project and served as chief scientist for Chinese search giant Baidu. He recently called on those debating
AI and ethics to “cut out the AGI nonsense” and spend more time focusing on how today’s technology
is exacerbating or will exacerbate problems such as “job loss/stagnant wages, undermining democracy,
discrimination/bias, wealth inequality”.

Even highlighting the potential upside of AGI could damage public perceptions of AI, fuelling disappointment
in the comparatively limited abilities of existing machine-learning systems and their narrow, one-note skillset—
be that translating text or recognizing faces.

HOW WOULD YOU CREATE AN ARTIFICIAL GENERAL INTELLIGENCE?
Demis Hassabis, the co-founder of Google DeepMind, argues that the secrets to general artificial intelligence
lie in nature.

Hassabis and his colleagues believe it is important for AI researchers to engage in “scrutinizing the inner
workings of the human brain—the only existing proof that such an intelligence is even possible”.


“Studying animal cognition and its neural implementation also has a vital role to play, as it can provide a
window into various important aspects of higher-level general intelligence,” they wrote in a paper last year.

They argue that doing so will help inspire new approaches to machine learning and new architectures for neural networks, the mathematical models that make machine learning possible.

Hassabis and his colleagues say “key ingredients of human intelligence” are missing in most AI systems, including the way infants build mental models of the world that guide predictions about what might happen next and that allow them to plan. Also absent from current AI models is the human ability to learn from only a handful of examples and to generalize knowledge learned in one instance to many similar situations, such as a new driver understanding how to drive more than just the car they learned in.

“New tools for brain imaging and genetic bioengineering have begun to offer a detailed characterization of the
computations occurring in neural circuits, promising a revolution in our understanding of mammalian brain
function,” according to the paper, which says neuroscience should serve as a “roadmap for the AI research agenda”.

Another perspective comes from Yann LeCun, Facebook’s chief AI scientist, who played a pioneering role in
machine-learning research due to his work on convolutional neural networks.

He believes the path towards general AI lies in developing systems that can build models of the world they
can use to predict future outcomes. A good route to achieving this, he said in a talk last year, could be using
generative adversarial networks (GANs).

In a GAN, two neural networks do battle: the generator network tries to create convincing “fake” data, while the
discriminator network attempts to tell the difference between the fake and real data. With each training cycle,
the generator gets better at producing fake data and the discriminator gains a sharper eye for spotting those
fakes. By pitting the two networks against each other during training, both can achieve better performance.
GANs have been used to carry out some remarkable tasks, such as turning dashcam videos from day to night or from winter to summer.

• Democratic artificial intelligence will shape future technologies: Gartner
• DeepLocker: When malware turns artificial intelligence into a weapon
• IEEE publishes draft report on ‘ethically aligned’ AI design
• The biggest threat to artificial intelligence: Human stupidity


WOULD AN ARTIFICIAL GENERAL INTELLIGENCE HAVE CONSCIOUSNESS?
Given the many definitions of consciousness, this is a very tricky question to answer. A famous thought
experiment by philosopher John Searle demonstrates how difficult it would be to determine whether an AGI
was truly self-aware.

Searle’s Chinese Room suggests a hypothetical scenario in which the philosopher is presented with a written query in Chinese, a language unfamiliar to him. Searle sits alone in a closed room and individual characters from each word in the query are slid under the door in order. Despite not understanding the language, Searle is able to follow the instructions given by a book in the room for manipulating the symbols and numerals fed to him. These instructions allow him to create his own series of Chinese characters that he feeds back under the door.

By following the instructions Searle is able to create an appropriate response and fool the person outside the
room into thinking there is a native speaker inside, despite Searle not understanding the Chinese language. In
this way, Searle argued the experiment demonstrates a computer could converse with people and appear to
understand a language, while having no actual comprehension of its meaning.

The experiment has been used to attack the Turing Test. Devised by the brilliant mathematician and father of
computing Alan Turing, the test suggests a computer could be classed as a thinking machine if it could fool
one-third of the people it was talking with into believing it was a human.

image: istock/PhonlamaiPhoto


In a more recent book, Consciousness and Language, Searle says this uncertainty over the true nature of an intelligent computer extends to consciousness: “Just as behavior by itself is not sufficient for consciousness, so computational models of consciousness by themselves are not sufficient for consciousness”, going on to give the example that “Nobody supposes the computational model of rainstorms in London will leave us wet”.

Searle creates a distinction between strong AI, where the AI can be said to have a mind, and weak AI, where
the AI is instead a convincing model of a mind.

Various counterpoints have been raised to the Chinese Room and Searle’s conclusions, ranging from arguments
that the experiment mischaracterizes the nature of a mind, to it ignoring the fact that Searle is part of a wider
system, which, as a whole, understands the Chinese language.

There is also the question of whether the distinction between a simulation of a mind and an actual mind
matters, with Stuart Russell and Peter Norvig, who wrote the definitive textbook on artificial intelligence,
arguing most AI researchers are more focused on the outcome than the intrinsic nature of the system.

CAN MORALITY BE ENGINEERED IN ARTIFICIAL GENERAL INTELLIGENCE SYSTEMS?
Maybe, but there are no good examples of how this might be achieved.

Russell paints a clear picture of how an AI’s ambivalence towards human morality could go awry.

“Imagine you have a domestic robot. It’s at home looking after the kids and the kids have had their dinner and
are still hungry. It looks in the fridge and there’s not much left to eat. The robot is wondering what to do, then
it sees the kitty, you can imagine what might happen next,” he said.

“It’s a misunderstanding of human values, it’s not understanding that the sentimental value of a cat is much
greater than the nutritional value.”

Vyacheslav W. Polonski of the Oxford Internet Institute argues that before an AGI could be gifted morality, people would first have to codify exactly what morality is.


“A machine cannot be taught what is fair unless the engineers designing the AI system have a precise
conception of what fairness is,” he writes, going on to question how a machine could be taught to
“algorithmically maximise fairness” or to “overcome racial and gender biases in its training data”.

Polonski’s suggested solution to these problems is to explicitly define ethical behavior, citing the recommendation of Germany’s Ethics Commission on Automated and Connected Driving that designers of self-driving cars program the system with ethical values that prioritize the protection of human life above all else.

Another possible answer he highlights is training a machine-learning system on what constitutes moral
behavior, drawing on many different human examples. One such repository of this data might be MIT’s Moral
Machine Project, which asks participants to judge the ‘best’ response in difficult hypothetical situations, such as
whether it is better to kill five people in a car or five pedestrians.

Of course, such approaches are fraught with potential for misinterpretation and unintended consequences.

Hard-coding morality into machines seems too immense a challenge, given the impossibility of predicting
every situation a machine could find itself in. If a collision is unavoidable, should a self-driving car knock down
someone in their sixties or a child? What if that child had a terminal illness? What if the person in their sixties
were the sole carer of their partner?

Having a machine learn what is moral behavior from human examples may be the better solution, albeit one
that risks encoding in the machine the same biases that exist in the wider population.

Russell suggests intelligent systems and robots could accrue an understanding of human values over time, through their shared observation of human behavior, both today and as recorded throughout history. One method robots could use to gain such an appreciation, he suggests, is inverse reinforcement learning, a machine-learning technique in which a system infers the reward, and in effect the values, being pursued by observing another agent’s behavior.

• Optus and Curtin University partner for artificial intelligence research
• HR and artificial intelligence?
• The ethical challenges of artificial intelligence
• AI for business: Why artificial intelligence and machine learning will be revolutionary

HOW DO WE STOP A GENERAL AI FROM BREAKING ITS CONSTRAINTS?
As part of its mission to tackle existential risks, the US-based Future of Life Institute (FLI) has funded various
research into AGI safety, in anticipation of AI capable of causing harm being created in the near future.


“To justify a modest investment in this AI robustness research, this probability need not be high, merely non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down,” it said upon launching its research program, pointing out that in the 1930s one of the greatest physicists of the time, Ernest Rutherford, said nuclear energy was “moonshine”, just five years before the discovery of nuclear fission.

Before an AGI’s behavior can be constrained, the FLI argues it’s necessary to pinpoint precisely what it should and shouldn’t do.

“In order to build systems that robustly behave well, we of course need to decide what ‘good behavior’ means in each application domain. Designing simplified rules—for example, to govern a self-driving car’s decisions in critical situations—will likely require expertise from both ethicists and computer scientists,” it says in a research priorities report compiled by Stuart Russell and other academics.

Ensuring proper behavior becomes problematic with strong, general AI, the paper says, adding that societies are likely to encounter significant challenges in aligning the values of powerful AI systems with their own values and preferences.

“Consider, for instance, the difficulty of creating a utility function that encompasses an entire body of law; even a literal rendition of the law is far beyond our current capabilities, and would be highly unsatisfactory in practice,” it states.

Deviant behavior in AGIs will also need addressing, the FLI says. Just as an airplane’s onboard software undergoes rigorous checks for bugs that might trigger unexpected behavior, so the code that underlies AIs should be subject to similar formal constraints.

For traditional software there are projects such as seL4, which has developed a complete, general-purpose operating-system kernel that has been mathematically checked against a formal specification to give a strong guarantee against crashes and unsafe operations.

However, in the case of AI, new approaches to verification may be needed, according to the FLI.

“Perhaps the most salient difference between verification of traditional software and verification of AI systems is that the correctness of traditional software is defined with respect to a fixed and known machine model, whereas AI systems—especially robots and other embodied systems—operate in environments that are at best partially known by the system designer.

“In these cases, it may be practical to verify that the system acts correctly given the knowledge that it has, avoiding the problem of modelling the real environment,” the research states.

The FLI suggests it should be possible to build AI systems from components, each of which has been verified.


Where the risks of a misbehaving AGI are particularly high, it suggests such systems could be isolated from the
wider world.

“Very general and capable systems will pose distinctive security problems. In particular, if the problems of
validity and control are not solved, it may be useful to create ‘containers’ for AI systems that could have
undesirable behaviors and consequences in less-controlled environments,”
it states.

The difficulty is that ensuring humans can keep control of a general AI is not straightforward.

For example, a system is likely to do its best to route around problems that prevent it from completing its
desired task.

“This could become problematic, however, if we wish to repurpose the system, to deactivate it, or to
significantly alter its decision-making process; such a system would rationally avoid these changes,” the research
points out.

The FLI recommends more research into corrigible systems, which do not exhibit this behavior.

“It may be possible to design utility functions or decision processes so that a system will not try to avoid being
shut down or repurposed,” according to the research.

Another potential problem could stem from an AI negatively impacting its environment in the pursuit of its
goals—leading the FLI to suggest more research into the setting of “domestic” goals that are limited in scope.

In addition, it recommends more work needs to be carried out into the likelihood and nature of an “intelligence explosion” among AI—where the capabilities of self-improving AI advance far beyond humans’ ability to control them.

The IEEE has its own recommendations for building safe AGI systems, which broadly echo those of the FLI
research. These include that AGI systems should be transparent and their reasoning understood by human
operators, that “safe and secure” environments should be developed in which AI systems can be developed and
tested, that systems should be developed to fail gracefully in the event of tampering or crashes and that such
systems shouldn’t resist being shut down by operators.

Today the question of how to develop AI in a manner beneficial to society as a whole is the subject of ongoing
research by the non-profit organization OpenAI.


The FLI research speculates that given the right checks and balances a general AI could transform societies
for the better: “Success in the quest for artificial intelligence has the potential to bring unprecedented
benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding
potential pitfalls.”


What is deep learning? Everything you need to know
BY NICK HEATH

image: istock/Artystarty

Deep learning is a subset of machine learning, which itself falls within the field of artificial intelligence.

WHAT IS THE DIFFERENCE BETWEEN DEEP LEARNING, MACHINE LEARNING AND AI?
Artificial intelligence is the study of how to build machines capable of carrying out tasks that would typically
require human intelligence.

That rather loose definition means that AI encompasses many fields of research, from genetic algorithms to
expert systems, and provides scope for arguments over what constitutes AI.


Within the field of AI research, machine learning has enjoyed remarkable success in recent years—allowing
computers to surpass or come close to matching human performance in areas ranging from facial recognition
to speech and language recognition.

Machine learning is the process of teaching a computer to carry out a task, rather than programming it how to
carry that task out step by step.

At the end of training, a machine-learning system will be able to make accurate predictions when given data.

That may sound dry, but those predictions could be answering whether a piece of fruit in a photo is a banana
or an apple, if a person is crossing in front of a self-driving car, whether the use of the word book in a sentence
relates to a paperback or a hotel reservation, whether an email is spam, or recognizing speech accurately
enough to generate captions for a YouTube video.

Machine learning is typically split into supervised learning, where the computer learns by example from labeled
data, and unsupervised learning, where the computer groups similar data and pinpoints anomalies.

Deep learning is a subset of machine learning, whose capabilities differ in several key respects from traditional
shallow machine learning, allowing computers to solve a host of complex problems that couldn’t otherwise
be tackled.

An example of a simple, shallow machine-learning task might be predicting how ice-cream sales will vary
based on outdoor temperature. Making predictions using only a couple of data features in this way is relatively
straightforward, and can be carried out using a shallow machine-learning technique called linear regression with
gradient descent.
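
To make that concrete, here is a minimal sketch of that kind of shallow model: fitting ice-cream sales to outdoor temperature with linear regression trained by gradient descent. Every number in it is invented purely for illustration.

```python
import numpy as np

# Invented example data: outdoor temperature (C) and daily ice-cream sales
temps = np.array([15.0, 18.0, 21.0, 24.0, 27.0, 30.0])
sales = np.array([120.0, 150.0, 185.0, 210.0, 245.0, 280.0])

x = temps - temps.mean()   # centering the feature helps gradient descent converge
w, b = 0.0, 0.0            # slope and intercept, learned from the data
lr = 0.01                  # learning rate: the size of each descent step

for _ in range(5000):
    error = (w * x + b) - sales
    # Gradients of the mean squared error with respect to w and b
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(f"predicted sales at 25C: {w * (25 - temps.mean()) + b:.0f}")
```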

The issue is that swathes of problems in the real world aren’t a good fit for such simple models. An example of
one of these complex real-world problems is recognizing handwritten numbers.

To solve this problem, the computer needs to be able to cope with huge variety in how the data can be
presented. Every digit between 0 and 9 can be written in myriad ways: the size and exact shape of each
handwritten digit can be very different depending on who’s writing and in what circumstance.

Coping with the variability of these features, and the even bigger mess of interactions between them, is where
deep learning and deep neural networks become useful.


Neural networks are mathematical models whose structure is loosely inspired by that of the brain.

Each neuron within a neural network is a mathematical function that takes in data via an input, transforms
that data into a more amenable form, and then spits it out via an output. You can think of neurons in a neural
network as being arranged in layers, as shown here.

IMAGE: NICK HEATH / ZDNET

All neural networks have an input layer, where the initial data is fed in, and an output layer that generates the final prediction. But in a deep neural network, there will be multiple “hidden layers” of neurons between these input and output layers, each feeding data into the next. Hence the term “deep” in “deep learning” and “deep neural networks”: it is a reference to the large number of hidden layers—typically more than three—at the heart of these neural networks.

The simplified diagram above should help to provide an idea of how a simple neural network is structured.
In this example, the network has been trained to recognize handwritten figures, such as the number 2 shown
here, with the input layer being fed values representing the pixels that make up an image of a handwritten digit,
and the output layer predicting which handwritten number was shown in the image.

In the diagram above, each circle represents a neuron in the network, with the neurons organized into
vertical layers.

As you can see, each neuron is linked to every neuron in the following layer, representing the fact that each
neuron outputs a value into every neuron in the subsequent layer.


The colors of the links in the diagram also vary. The different colors, black and red, represent the significance
of the links between neurons. The red links are those of greater significance, meaning they will amplify the
value as it passes between the layers. In turn, this amplification of the value can help activate the neuron that
the value is being fed into.

A neuron can be said to have been activated when the sum of the values being input into this neuron passes
a set threshold. In the diagram, the activated neurons are shaded red. What this activation means differs
according to the layer. In “Hidden layer 1” shown in the diagram, an activated neuron might mean the image of
the handwritten figure contains a certain combination of pixels that resemble the horizontal line at the top of a
handwritten number 7. In this way, “Hidden layer 1” could detect many of the tell-tale lines and curves that will
eventually combine together into the full handwritten figure.

An actual neural network would likely have both more hidden layers and more neurons in each layer. For
instance, a “Hidden layer 2” could be fed the small lines and curves identified by “Hidden layer 1”, and detect
how these combine to form recognizable shapes that make up digits, such as the entire bottom loop of a six.
By feeding data forward between layers in this way, each subsequent hidden layer handles increasingly higher-
level features.

As mentioned, the activated neuron in the diagram’s output layer has a different meaning. In this instance, the
activated neuron corresponds to which number the neural network estimates it was shown in the image of a
handwritten digit it was fed as an input.

As you can see, the output of one layer is the input of the next layer in the network, with data flowing through
the network from the input to the output.
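
To make that flow concrete, the sketch below pushes a single input through a small network using nothing but matrix arithmetic. The layer sizes and random weights are arbitrary stand-ins rather than a trained model, so its output is meaningless; the point is simply the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(values):
    # A common activation function: negative inputs are zeroed out
    return np.maximum(0, values)

# Arbitrary sizes: 784 input pixels, two hidden layers, 10 output classes
sizes = [784, 128, 64, 10]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(pixels):
    activation = pixels
    for w, b in zip(weights, biases):
        # Each layer's output becomes the next layer's input
        activation = relu(activation @ w + b)
    return activation

image = rng.random(784)    # stand-in for a 28x28 handwritten digit
scores = forward(image)    # a real classifier would end with softmax
print("predicted digit:", int(np.argmax(scores)))
```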

But how do these multiple hidden layers allow a computer to determine the nature of a handwritten digit?
These multiple layers of neurons basically provide a way for the neural network to build a rough hierarchy of
different features that make up the handwritten digit in question. For instance, if the input is an array of values
representing the individual pixels in the image of the handwritten figure, the next layer might combine these
pixels into lines and shapes, the next layer combines those shapes into distinct features like the loops in an 8 or
upper triangle in a 4, and so on. By building a picture of which of these features are present, modern neural networks can determine—with a very high level of accuracy—the number that corresponds to a handwritten digit. Similarly, different types of deep neural networks can be trained to recognize faces in an image or to transcribe speech from audio.

The process of building this increasingly complex hierarchy of features of the handwritten number out of
nothing but pixels is learned by the network. The learning process is made possible by how the network is able to alter the importance of the links between the neurons in each layer. Each link has an attached value called a weight, which will modify the value spat out by a neuron as it passes from one layer to the next.

By altering the value of these weights, and an associated value called bias, it is possible to emphasize or diminish the importance of links between neurons in the network. For instance, in the case of the model for recognizing handwritten digits, these weights could be modified to stress the importance of a particular group of pixels that form a line, or a pair of intersecting lines that form a 7.

AN ILLUSTRATION OF THE STRUCTURE OF A NEURAL NETWORK AND HOW TRAINING WORKS. (IMAGE: NVIDIA)

The model learns which links between neurons are important in making successful predictions during training.
At each step during training, the network will use a mathematical function to determine how accurate its latest
prediction was compared to what was expected. This function generates a series of error values, which in turn
can be used by the system to calculate how the model should update the value of the weights attached to each
link, with the ultimate aim of improving the accuracy of the network’s predictions. The extent to which these values should be changed is calculated by an optimization function, such as gradient descent, and those changes
are pushed back throughout the network at the end of each training cycle in a step called back propagation.

Over the course of many, many training cycles, and with the help of occasional manual parameter tuning,
the network will continue to generate better and better predictions until it hits close to peak accuracy. At this
point, for example, when handwritten digits could be recognized with more than 95 percent accuracy, the
deep-learning model can be said to have been trained.
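
As a rough sketch of that training loop, the snippet below uses the PyTorch framework (discussed later in this article) on random stand-in data rather than real handwritten digits; the layer sizes, learning rate, and number of cycles are arbitrary choices.

```python
import torch
from torch import nn

# Stand-in data: 256 fake 784-pixel "images" with random digit labels
inputs = torch.rand(256, 784)
labels = torch.randint(0, 10, (256,))

model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),   # hidden layer 1
    nn.Linear(128, 64), nn.ReLU(),    # hidden layer 2
    nn.Linear(64, 10),                # output layer: one score per digit
)
loss_fn = nn.CrossEntropyLoss()       # measures how wrong the predictions are
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # gradient descent

for cycle in range(20):               # training cycles
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                   # back propagation of the error values
    optimizer.step()                  # adjust the weights and biases
    print(cycle, round(loss.item(), 4))
```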

Essentially, deep learning allows machine learning to tackle a host of new complex problems, such as image, language and speech recognition, by allowing machines to learn how features in the data combine into increasingly higher-level, abstract forms. For example, in facial recognition: how pixels in an image create lines and shapes, how those lines and shapes create facial features, and how these facial features are arranged into a face.

WHY IS IT CALLED DEEP LEARNING?


As mentioned, the depth refers to the number of hidden layers, typically more than three, used within
deep neural networks.

HOW IS DEEP LEARNING BEING USED?


Deep learning is being used for many tasks: recognizing and generating images, speech and language, and, in combination with reinforcement learning, matching human-level performance in games ranging from the ancient, such as Go, to the modern, such as Dota 2 and Quake III.

Deep-learning systems are a foundation of modern online services. Such systems are used by Amazon to understand what you say—both your speech and the language you use—to the Alexa virtual assistant, or by Google to translate text when you visit a foreign-language website.

Every Google search uses multiple machine-learning systems to understand the language in your query and personalize your results, so fishing enthusiasts searching for “bass” aren’t inundated with results about guitars.

But beyond these very visible manifestations of machine and deep learning, such systems are starting to find
a use in just about every industry. These uses include: computer vision for driverless cars, drones and delivery
robots; speech and language recognition and synthesis for chatbots and service robots; facial recognition for
surveillance in countries like China; helping radiologists to pick out tumors in x-rays, aiding researchers in
spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs
in healthcare; allowing for predictive maintenance on infrastructure by analyzing IoT sensor data; underpinning
the computer vision that makes the cashierless Amazon Go supermarket possible; and offering reasonably accurate
transcription and translation of speech for business meetings—the list goes on and on.


The Amazon Go Store relies on image recognition powered by deep learning to detect what shoppers buy. (Image: Amazon)

WHEN SHOULD YOU USE DEEP LEARNING?


When your data is largely unstructured and you have a lot of it.

Deep learning algorithms can take messy and broadly unlabeled data—such as video, images, audio recordings,
and text—and impose enough order upon that data to make useful predictions, building a hierarchy of features
that make up a dog or cat in an image or of sounds that form a word in speech.

• IoT boom will change how data is analysed
• Google Next 2018: A deeper dive on AI and machine learning advances
• The 4 hottest tech trends that are transforming the world in 2018
• Google’s human-sounding AI to answer calls at contact centers


WHAT KIND OF PROBLEMS DOES DEEP LEARNING SOLVE?


As mentioned, deep neural networks excel at making predictions based on largely unstructured data. That
means they deliver best-in-class performance in areas such as speech and image recognition, where they work
with messy data such as recorded speech and photographs.

SHOULD YOU ALWAYS USE DEEP LEARNING INSTEAD OF SHALLOW MACHINE LEARNING?
No, because deep learning can be very expensive from a computational point of view.

For non-trivial tasks, training a deep-neural network will often require processing large amounts of data using
clusters of high-end GPUs for many, many hours.

Given top-of-the-range GPUs can cost thousands of dollars to buy, or up to $5 per hour to rent in the cloud,
it’s unwise to jump straight to deep learning.

If the problem can be solved using a simpler machine-learning algorithm such as Bayesian inference or linear
regression, one that doesn’t require the system to grapple with a complex combination of hierarchical features
in the data, then these far less computationally demanding options will be the better choice.

Deep learning may also not be the best choice for every prediction task. For example, if the dataset is small, then sometimes simple linear machine-learning models may yield more accurate results—although some machine-learning specialists argue a properly trained deep-learning neural network can still perform well with small amounts of data.

• AI and health: Using machine learning to understand the human immune system
• MapR platform update brings major AI and Analytics innovation...through the file system
• Microsoft buys machine-learning startup Bonsai
• Nvidia researchers use deep learning to create super-slow motion videos

WHAT ARE THE DRAWBACKS OF DEEP LEARNING?


One of the big drawbacks of deep-learning models is the amount of data they require to train, with Facebook recently announcing it had used one billion images to achieve record-breaking performance by an image-recognition system. When the datasets are this large, training also requires access to vast amounts of distributed computing power. This points to another issue of deep learning: the cost of training. Due to the size of datasets and the number of training cycles that have to be run, training often requires access to high-powered and expensive computer hardware,
typically high-end GPUs or GPU arrays. Whether you’re building your own system or renting hardware from a
cloud platform, neither option is likely to be cheap.

Deep neural networks are also difficult to train, due to what is called the vanishing gradient problem, which can worsen the more layers there are in a neural network. As more layers are added, the vanishing gradient problem can result in it taking an unfeasibly long time to train a neural network to a good level of accuracy, as the improvement between each training cycle is so minute. The problem doesn’t afflict every multi-layer neural network, only those that use gradient-based learning methods. That said, the problem can be addressed in various ways, such as by choosing an activation function, like the widely used ReLU, that doesn’t saturate, or by throwing heavy-duty GPU hardware at the training process.
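
A quick back-of-the-envelope illustration of the issue: with a sigmoid activation, each layer multiplies the error signal flowing backwards by the sigmoid’s derivative, which is at most 0.25, so even in the best case the signal shrinks geometrically with depth. The depths below are arbitrary examples.

```python
# The sigmoid's derivative never exceeds 0.25, so each extra layer can
# shrink the backpropagated error signal by a factor of four or more.
for depth in (3, 10, 20):
    print(depth, "layers: gradient scaled by at most", 0.25 ** depth)
# At 20 layers the factor is under 1e-12, so early layers barely learn.
```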

• Deep learning helps Google keep an eye on heart attack risk
• The future of the future: Spark, big data insights, streaming and deep learning in the cloud

WHY ARE DEEP NEURAL NETWORKS HARD TO TRAIN?


As mentioned, deep neural networks are hard to train because of the number of layers in the network.
The number of layers and links between neurons in the network is such that it can become difficult to calculate
the adjustments that need to be made at each step in the training process—a problem referred to as the
vanishing gradient problem.

Another big issue is the vast quantities of data that are necessary to train deep learning neural networks, with
training corpuses often measuring petabytes in size.

WHAT DEEP LEARNING TECHNIQUES EXIST?


There are various types of deep neural network, with structures suited to different types of tasks. For example, Convolutional Neural Networks (CNNs) are typically used for computer vision tasks, while Recurrent Neural Networks (RNNs) are commonly used for processing language. Each has its own specializations: in CNNs, the initial layers are specialized for extracting distinct features from the image, which are then fed into a more conventional neural network so the image can be classified. RNNs, meanwhile, differ from a traditional feed-forward neural network in that they don’t just feed data from one neural layer to the next but also have built-in feedback loops, where data output from one layer is passed back to the layer preceding it—lending the network a form of memory. There is a more specialized form of RNN, the long short-term memory (LSTM) network, that includes what is called a memory cell and is tailored to processing data with lags between inputs.
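
As a rough illustration of how the two families differ in code, here are minimal PyTorch definitions of each; the layer sizes are arbitrary examples rather than recommended settings.

```python
import torch
from torch import nn

# CNN: convolutional layers extract visual features, a linear layer classifies
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
image = torch.rand(1, 1, 28, 28)      # one grayscale image
print(cnn(image).shape)               # -> torch.Size([1, 10])

# RNN (here an LSTM): processes a sequence step by step, carrying state
rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
sentence = torch.rand(1, 12, 32)      # 12 word vectors of 32 features each
output, (hidden, cell) = rnn(sentence)
print(output.shape)                   # -> torch.Size([1, 12, 64])
```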

The most basic type of neural network is a multi-layer perceptron network, the type discussed above in the
handwritten figures example, where data is fed forward between layers of neurons. Each neuron will typically transform the values it is fed using an activation function, which changes those values into a form that, at
the end of the training cycle, will allow the network to calculate how far off it is from making an
accurate prediction.

There are a large number of different types of deep neural networks. No one network is inherently better than another; they are just better suited to learning particular types of tasks.

More recently, generative adversarial networks (GANs) are extending what is possible using neural networks. In this architecture, two neural networks do battle: the generator network tries to create convincing “fake” data, while the discriminator attempts to tell the difference between fake and real data. With each training cycle, the generator gets better at producing fake data and the discriminator gains a sharper eye for spotting those fakes. By pitting the two networks against each other during training, both can achieve better performance. GANs have been used to carry out some remarkable tasks, such as turning dashcam videos from day to night or from winter to summer, and have applications ranging from turning low-resolution photos into high-resolution alternatives to generating images from written text. GANs have their own limitations, however, that can make them challenging to work with, although these are being tackled by developing more robust GAN variants.
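
The sketch below shows that adversarial loop in heavily simplified form, with a generator learning to mimic a one-dimensional bell curve; a real image-generating GAN follows the same pattern with far larger convolutional networks.

```python
import torch
from torch import nn

def real_batch(n=64):
    # "Real" data: samples from a normal distribution centred on 5
    return torch.randn(n, 1) * 2 + 5

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # 1. Train the discriminator: real samples are labelled 1, fakes 0
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real_batch()), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to make the discriminator call its fakes real
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generated samples should drift towards the real mean of 5
print(generator(torch.randn(1000, 8)).mean().item())
```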

WHERE CAN YOU LEARN MORE ABOUT DEEP LEARNING?


There’s no shortage of courses out there that cover deep learning.

If you’re interested in those put together by leading figures in the field, you could check out these Coursera
offerings, one by Geoff Hinton on neural networks and another co-created by Andrew Ng that provides
a general overview of the topic, while this Udacity course was co-created by Sebastian Thrun, of Google
self-driving car fame, and provides access to experts from OpenAI, Google Brain, and DeepMind.

There’s also a wealth of free lessons available online, many from top educational institutions, such as these
classes on natural language processing and convolutional neural networks from Stanford University.

If you’re just after a more detailed overview of deep learning, then Neural Networks and Deep Learning is an
excellent free online book. And if you are comfortable with high-school maths and the Python programming
language, then Google’s Colab project offers an interactive introduction to machine learning.

• Google races against AWS, Microsoft to bring AI to developers
• Store scanning robots will get AI, object recognition boost with recent acquisition
• Robotics in business: Everything humans need to know
• GPU computing: Accelerating the deep learning curve


HOW MUCH DOES IT COST TO INVEST IN DEEP LEARNING?


It depends on your approach, but expect to pay from hundreds of dollars upwards, depending on the complexity of the machine-learning task and your chosen method.

WHAT HARDWARE DO YOU NEED FOR MACHINE LEARNING?


The first choice is whether you want to rent hardware in the cloud or build your own deep-learning rig.
Answering this question comes down to how long you anticipate you will be training your deep-learning model.
You will pay more over time if you stick with cloud services, so if you anticipate the training process will take
more than a couple of months of intensive use then buying/building your own machine for training will likely
be prudent.

If the cloud sounds suitable, then you can rent computing infrastructure tailored to deep learning from the
major cloud providers, including AWS, Google Cloud, and Microsoft Azure. Each also offers automated
systems that streamline the process of training a machine-learning model with offerings such as drag-and-drop
tools, including Microsoft’s Machine Learning Studio, Google’s Cloud AutoML and AWS SageMaker.

That said, building your own machine won’t be cheap. You’ll need to invest in a decent GPU to train anything
more than very simple neural networks, as GPUs can carry out a very large number of matrix multiplications in
parallel, helping accelerate a crucial step during training.

If you’re not planning on training a neural network with a large number of layers, you can opt for consumer-
grade cards, such as the Nvidia GeForce GTX 1060, which typically sells for about £270, while still offering
1,280 CUDA cores.

More heavy-duty training, however, will require specialist equipment. One of the most powerful GPUs for
machine learning is the Nvidia Tesla V100, which packs 640 AI-tailored Tensor cores and 5,120 general HPC
CUDA cores. These cost considerably more than consumer cards, with prices for the PCI Express version
starting at £7,500.

Building AI-specific workstations and servers can cost even more; for example, the deep-learning focused DGX-1 sells for $149,000.



AS WELL AS A PCIE ADAPTER, THE TESLA V100 IS AVAILABLE AS AN SXM MODULE TO PLUG INTO NVIDIA’S HIGH-SPEED NVLINK BUS.
(IMAGE: NVIDIA)

HOW LONG DOES IT TAKE TO TRAIN A DEEP LEARNING MODEL?
The time taken to train a deep-learning model varies hugely, from hours to weeks or more, and is dependent on
factors such as the available hardware, optimization, the number of layers in the neural network, the network
architecture, the size of the dataset and more.

WHICH DEEP-LEARNING SOFTWARE FRAMEWORKS ARE AVAILABLE?
There are a wide range of deep-learning software frameworks, which allow users to design, train and validate
deep neural networks, using a range of different programming languages.

A popular choice is Google’s TensorFlow software library, which allows users to write in Python, Java, C++, and Swift, can be used for a wide range of deep-learning tasks such as image and speech recognition, and executes on a wide range of CPUs, GPUs, and other processors. It is well documented, and many tutorials and implemented models are available.
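
For a flavor of what TensorFlow code looks like, here is a minimal handwritten-digit classifier built with its high-level Keras API, along the lines of the library’s own introductory tutorials.

```python
import tensorflow as tf

# The MNIST handwritten-digit dataset ships with the library
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to 0-1

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))
```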


Another popular choice, especially for beginners, is PyTorch, a framework that offers an imperative programming model familiar to developers and lets them use standard Python statements. It works with deep neural networks ranging from CNNs to RNNs and runs efficiently on GPUs.

Among the wide range of other options are Microsoft’s Cognitive Toolkit, MATLAB, MXNet, Chainer, and
Keras.

WILL NEURAL NETWORKS AND DEEP LEARNING LEAD TO GENERAL ARTIFICIAL INTELLIGENCE?
At present, deep learning is used to build narrow AI, artificial intelligence that performs a particular task, be
that captioning photos or transcribing speech.

There’s no system so far that can be thought of as a general artificial intelligence, able to tackle the same
breadth of tasks and with the same broad understanding as a human being. When such systems will be
developed is unknown, with predictions ranging from decades upwards.


How machines are beating cardiologists in north central Pennsylvania
BY CHRIS DUCKETT

Image: Geisinger

As the world rushes headlong down a path where machine learning is going to insert itself into myriad tasks,
medicine is primed to be one of the early success stories where humans augmented by machines will literally
save lives.

In north central Pennsylvania, integrated health network Geisinger has trained neural networks to examine
echocardiograms, and the machines are outperforming its cardiologists. But when it comes to improving the
overall medical field, which includes teaching doctors about what the machine has picked up, it all remains a
black box.

Speaking at Nvidia GTC in March, Dr Brandon Fornwalt, associate professor and director of Geisinger’s Department of Imaging Science & Innovation, said that’s something the health network is trying to look into.


“We haven’t been able to figure out [what the model is seeing that the cardiologists cannot], we’re trying
different things,” he said. “It’s very hard to figure that out.”

ELECTRONIC HEALTH RECORDS ENABLE MACHINE LEARNING
Geisinger is a health network of 13 hospitals spanning the northern part of Pennsylvania and into New Jersey.
It has a wealth of data, thanks to an electronic health record (EHR) introduced in 1996, containing 1.9 million
patients, almost a billion outpatient vital sign measurements, 3.6 billion rows in the dataset, 140,000 whole
exomes that have been sequenced, and demographics information indicating the population is incredibly stable.

“They are born there, they live there, they work there, they grow up, they have kids there—and so we have
an average of 16 years of longitudinal follow-up on our patients, which allows us to go back in time and grab a
snapshot of the patients and then predict the future, and the future has already happened because we captured
that in the retrospective data,” Fornwalt said. “So it’s a unique application for predictive modelling and machine
learning in this.”

The health provider has also had a team of data analysts and modellers for over a decade, with the clinical image archive possessing 11 million clinical studies taking up around 2 petabytes of data—almost 200,000 of the studies have been used in research or innovation pushes, Fornwalt explained.

One area where deep learning has been applied is in detecting intracranial haemorrhages, with a neural network used on images coming off a CAT scanner. When the network detects an abnormality in the patient, the corresponding image is brought to the top of the queue for the radiologist to look at—the idea being that radiologists can read the most acute findings first.

With over 46,000 head CAT scans consisting of 2 million images collected over a decade, the neural network is
trained on data classified as containing or not containing an intracranial haemorrhage.

The system has been operating within Geisinger for two years and during that period has seen the time to
diagnosis of outpatient haemorrhages reduce by 96 percent. In 10 percent of cases where a radiologist had given the all clear while the machine determined a haemorrhage was present, a second opinion from another human found a “subtle haemorrhage that may or may not have been clinically significant”.


IMAGE: GEISINGER

MACHINE BESTS HUMANS ON CARDIAC IMAGING


Another project with neural networks that Geisinger has undertaken is attempting to improve mortality
prediction and risk scores.

“Risk is everything in medicine, we give diagnosis because we think it tells us about risk, and we also give
treatments because we think we can mitigate risk with treatments,” Fornwalt said.

“That’s really what medicine is about, predicting future events, the risk scores, and risk stratification.”

The team used 300,000 echocardiograms from 170,000 patients for the neural network, and it performed much, much better than current clinical metrics. While the result itself was not entirely surprising, how the network derived the results was.

Fornwalt said that after age, the variable that most affected the result returned by the machine-learning network was TR max velocity, a variable that measures pulmonary pressure.


“Ejection fraction is the way that we really look at cardiac function data and the one variable that we use the most... and yet this other variable that’s sort of buried in the report that we never pay attention to came up to the top of the list above all of the [other] metrics,” he said.

“This is a way of uncovering the black box and making physicians more comfortable and understanding what
these models are doing to predict the future.”

Typically during an echocardiogram appointment, the service provider receives 30 videos to make a diagnosis
on, Fornwalt explained, and using just one video, a neural network can perform better than the current
clinical metrics.

“I didn’t believe it because that’s beating the clinical risk scores ... and that’s one of greater than 20 videos and
clinical data that we’ve not yet added to this model—it’s just one video,” he said.

The Geisinger team then ran the neural network over its catalogue of 720,000 videos, which took two weeks
on an Nvidia DGX-1, and found the accuracy of the model approached 80 percent, while the best human was
only hitting 60 percent.

“The cardiologist tended to say that the patient would live ... but that was at the expense of sensitivity, i.e.,
saying that patients were going to die,” Fornwalt said.

“I’m not saying that the machine is better than a cardiologist at what they do, but what I’m saying here is that this is evidence to suggest that machines are going to be able to [provide] predictive value that humans are probably not going to be able to [match],” Fornwalt said.

With electrocardiograms (ECGs), Geisinger trained a neural network to predict one-year mortality on 1.8
million ECGs it had collected over 38 years from around 400,000 patients that were linked to outcomes such
as death and clinical events—and once again, accuracy was around the 80 percent mark. But as Fornwalt
described, an interesting result appeared when the network ran over the 300,000 ECGs that clinicians had deemed normal: the network returned the same sort of one-year survival trend as the wider dataset.

“I was kind of shocked at this, because that means that the cardiologist has essentially said ‘Hey this is
completely normal’ but the neural network is finding features in there that are predictive of one-year survival,
so how can it be truly normal?” he said.

The team then returned to the cardiologists to see if they could train them to see what the machine was seeing. The cardiologists were shown a pair of ECGs, as well as the parameters associated with each, including age and sex, and told that one of the patients was predicted by the machine to live, and the other to die. This exercise was repeated around 400 times, and the result was little better than 50/50 random chance.


So the cardiologists were given another dataset to practice on, and the same test again.

“It didn’t change,” Fornwalt said. “So we can’t even teach them how to see the features that the neural network
is picking up on.”

“We thought that was a pretty powerful result to say: ‘The neural network is doing things that we can’t see
as humans’.”

Disclosure: Chris Duckett travelled to GTC in San Jose as a guest of Nvidia.


Enterprise AI and machine learning: Comparing the companies and applications
BY TIERNAN RAY
If you’ve ever had the pleasure—and we use that word lightly—of pricing cloud computing offerings, you’ll be
delighted to know there’s a whole new roster of offerings to complicate your buying decision, under the rubric of
artificial intelligence.

The Big Four cloud computing majors—Amazon, Microsoft, Google, and IBM—all offer the ability to construct
and run neural networks and other forms of AI in their public cloud computing facilities, and they all have various
tools and various prices for doing it. Yet another class of offerings is provided by the cloud SaaS champs, Oracle
and Salesforce.

There are so many offerings, with so many idiosyncrasies in their features and pricing, you might need some
artificial intelligence just to figure out which are the best deals.

Fortunately, ZDNet is offering real intelligence: We’ve studied the various offerings and compiled ways to think
about the buying decision.

image: istock/metamorworks


The good news: There’s a lot of overlap in the services, and there are many ways to get started for free. You have choice and you can start out by dipping a toe.

The less-good news: Making your final decision will depend on a very careful assessment of what your goal is in a still very nascent field, machine learning. You may not know until you spend some time working with these vendors’ technology just what exactly you want out of their services.

MAKERS VERSUS TAKERS
The first thing to do in approaching these offerings is to think about who you and your company are in relation to them.

Machine learning lets a company find patterns in data. That simple statement encompasses a wide variety of goals, from detecting sentiment in a text document to projecting the next action to take with a customer based on a history of interactions.

To understand that spectrum from a practical standpoint, think of yourself in one of two buckets: Makers and takers.

Makers are those who wish to build some potentially new application, perhaps from scratch, or at least with a heavy degree of customization, from preparing data, to designing the neural network model that will be used, to how it will be served up. That can involve a lot of experimentation with areas of data science and machine learning concepts at the very bleeding edge of the discipline, and revising one’s work over many hours in computing time. A maker is one part data scientist, one part IT administrator, and one part business analyst, or perhaps a team comprising all those abilities.

A taker, on the other hand, is someone who wants to quickly use some kind of AI capability with a minimal effort. A taker may be a marketing exec or sales rep with no knowledge of AI, or an IT admin who simply wants to deliver new capabilities to customers or employees who have to use those applications.

Thinking about the two use cases immediately begins easing the buying decision.

image: istock/Oleg Vyshnevskyy

THOSE WHO MAKE AI
Makers build neural networks, train them, and then unleash them on real-time signals, which could be batches of transactional data or individual transactions via a Web commerce site.


That requires preparing data, designing a model to test against some data repository, training it on a large set
of data, and finally deploying it as a live service.

That means purchasing storage—for development data, training data, and for the data returned as a result of a
query using the live, trained model.

The process with each of the Big Four starts by setting up a cloud account and choosing a storage option. This
stage already involves choices, not just about how much data but how you're going to analyze that data in your
neural network. Google, for example, offers two kinds of pipelines for machine learning data, called Dataproc
and Dataflow. The former is optimized for using the Hadoop file system with analysis packages that are meant
to handle it, such as Spark ML. The latter is meant to ingest either batch or stream data, meaning single instances
of incoming data, via things such as Apache Beam, and is meant to be used with Google's Machine Learning
Engine, where one builds models with TensorFlow or PyTorch, or another ML programming framework. Each has
different per-gigabyte pricing plans TK.
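
To make that concrete, here is a minimal sketch of the kind of Beam pipeline Dataflow executes, assuming hypothetical bucket paths and a made-up CSV layout; it is an illustration, not Google's reference code:

```python
# A minimal Apache Beam pipeline sketch; bucket paths and the CSV layout are
# hypothetical. Run it locally first, then pass --runner=DataflowRunner (plus
# project and staging options) to execute on Google's managed service.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

with beam.Pipeline(options=PipelineOptions()) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://my-bucket/transactions/*.csv")
     | "Parse" >> beam.Map(lambda line: line.split(","))              # naive CSV split
     | "KeepLarge" >> beam.Filter(lambda row: float(row[2]) > 100.0)  # assume column 3 is an amount
     | "Write" >> beam.io.WriteToText("gs://my-bucket/output/large-transactions"))
```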

The point is, putting all your data in public cloud is a big buying decision in itself. Unless you’ve already
standardized on Amazon’s S3 storage, or Microsoft’s Azure Blob storage, you may want to first try out the options
with a free account from a vendor, and monitor what kind of economics you’ll achieve as you go along. All the
vendors offer free accounts for just this purpose, and most of those free offerings will last up to a year, so you have
some time to explore.

PLETHORA OF CHOICES
Once you’ve got the data, you have a plethora of choices for making things. The simplest and most flexible
option: The various machine learning engines with which you can build multiple models in TensorFlow and other
frameworks. These are Google’s Cloud Machine Learning Engine, Amazon AWS’s SageMaker, IBM’s Watson
Machine Learning, and Microsoft's Azure Machine Learning Service. All of them let you pay by the training hour
when developing the model, and then pay per transaction once you deploy. You have the greatest freedom with
these offerings to bring in different frameworks in which to program models, and the greatest freedom to choose
the machine configuration, such as memory and processor cores.
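
As a rough illustration of that train-then-deploy rhythm, here is a hedged sketch using the v1-era SageMaker Python SDK; the role ARN, bucket, and training script are placeholders, and the instance types are examples rather than recommendations:

```python
# A sketch of train-by-the-hour, deploy-by-the-endpoint on SageMaker (v1-era SDK).
# The role ARN, bucket, and train.py are hypothetical placeholders.
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(entry_point="train.py",
                       role="arn:aws:iam::123456789012:role/SageMakerRole",
                       train_instance_count=1,
                       train_instance_type="ml.p2.xlarge",   # GPU time billed per training hour
                       framework_version="1.12")

estimator.fit("s3://my-bucket/training-data/")               # the training meter runs here

predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m4.xlarge")   # endpoint billed while it stays up
```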


At this point, you may also want to consider options for accelerating the task of training or performing inference.
Google, of course, makes a play for its Tensor Processing Unit, a custom chip now on its third iteration, expressly
designed to accelerate the matrix math at the heart of training models. Microsoft promotes the use of
field-programmable gate arrays, or FPGAs, under its Project Brainwave effort. Amazon, in addition to developing
its own chips for running model training, has announced a chip called Inferentia, which will be available sometime
later this year. All four offer graphics processing units, or GPUs, which have become the workhorse of
model training.
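
However you rent the acceleration, the frameworks surface it in much the same way; here is a small TensorFlow 1.x-style sketch (the version and device string are assumptions) of checking that a GPU is visible and pinning the matrix math to it:

```python
# Quick sanity check that the cloud instance actually exposes an accelerator,
# then place a matrix multiply on it. TF 1.x-era API; device naming is standard.
import tensorflow as tf

print(tf.test.is_gpu_available())        # True on a GPU-backed instance

with tf.device("/gpu:0"):                # pin the heavy matrix math to the GPU
    a = tf.random.normal([1000, 1000])
    b = tf.matmul(a, a)
```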

There are several ways to simplify your setup, and the buying process. They include pre-packaged virtual machines
and containers designed specifically for machine learning and data science. Google offers the Cloud Deep Learning
Virtual Machine, Microsoft offers its Data Science Virtual Machine, and Amazon has the Deep Learning Amazon
Machine Image. IBM takes a somewhat different tack, promoting its Watson Deep Learning Studio as a dedicated
program that can be used to visually drag and drop components of a machine learning model. Microsoft has
something similar with its Machine Learning Studio.

A key distinguishing factor for both Microsoft and IBM in all of this is their ability to handle on-premise machine
learning. With deep hooks into decades of enterprise wares, the two vendors offer more substantial offerings for
companies that want to perform machine learning on their own infrastructure. IBM’s Watson Studio can be used
behind the firewall to build and train models, which can then either be deployed in the cloud, or deployed to the
local data center with the option of Watson Machine Learning for Private Cloud. Another option is IBM’s Watson
AI Accelerator, a software stack running on the company’s Power line of servers on premise. IBM advises this for
building out large-scale deployment of heavy deep-learning AI models.



Similarly, Microsoft’s Azure ML Studio can be used behind the firewall to design neural networks, drawing training
data from the company’s SQL Server database. There is also a version of Azure Machine Learning that is a licensed
server product for on-premise. Analytics functions can be constructed natively in SQL Server. And even the public
cloud version of Azure Machine Learning can draw data from the on-premise SQL Server. Clearly, there is a
plethora of private and hybrid functions.

In both IBM and Microsoft’s case, a strong argument for on-premise is that the biggest use of data is during the
training period of a new neural network. If customers can do that work in their own data centers, they stand to save
a bundle on buying storage in the public cloud.
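
The pattern is straightforward: keep the bulky training data behind the firewall and ship only the finished model out. Here is a hedged sketch of pulling training data straight from an on-premise SQL Server with pyodbc and pandas; the server, database, and table names are hypothetical:

```python
# Pull training data from a local SQL Server so the heavy data never touches
# cloud storage; connection details and table names are placeholders.
import pyodbc
import pandas as pd

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=onprem-sql;DATABASE=sales;Trusted_Connection=yes;")
df = pd.read_sql("SELECT * FROM dbo.transactions", conn)

# ...train locally on df, then deploy only the trained model to the cloud.
```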

Whichever vendor you go with, you'll want to scrutinize the programming frameworks and tools each offers.
All the Big Four offer support for the most popular AI frameworks, TensorFlow and PyTorch. Amazon and Google
tend to support a greater breadth, including scikit-learn, MXNet, RAPIDS, Spark ML, and XGBoost. Some have
become dividing lines, such as ONNX, a framework meant to establish a common format between models, which
is supported by Microsoft and Amazon but not Google. IBM has its own package for data analysis and certain
forms of machine learning that is unique to it, SPSS Modeler. You'll have to double-check whether your favorite
framework is supported.

All of the services, in addition to offering special workbenches such as Watson Studio, allow one to use popular
tools for prototyping neural networks such as Jupyter notebooks or Pandas. Your biggest question as you test
these services is how easily you can move data and models in and out of the rest of the cloud workflow.
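
That prototyping loop looks the same everywhere; a small pandas sketch (the CSV file is hypothetical) shows the sort of notebook work these services host:

```python
# Typical first cells of a notebook on any of these platforms; the CSV is a
# stand-in for data staged in S3, Blob storage, or GCS.
import pandas as pd

df = pd.read_csv("customer_churn.csv")
print(df.describe())                            # quick sanity check of the features

train = df.sample(frac=0.8, random_state=42)    # simple 80/20 split
test = df.drop(train.index)
```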

TAKING AI ON A CONSUMPTION BASIS


Let’s face it: A lot of people talk about AI when all they really want is to perform some simple data analysis
without conducting fundamental data science. For those who would rather skip a lot of coding, there are a growing
number of APIs that can be plugged into an app, or prepackaged solutions that deliver a ready function such as
understanding natural language or running a chat bot.

More and more, vendors are moving to new ways to simplify building things. Google offers AutoML, which
essentially gives you ready-made models for image processing (face and object recognition), natural language
processing, and language translation. This means you can skip a lot of the work of building a neural net from
scratch. IBM will release something similar as a beta later this year, called Neural Network Synthesis, or NeuNetS.


In a similar vein, Amazon offers a raft of AI/ML Services that include Comprehend, which identifies phrases,
names of people and places, or brands, in text documents, among other things; Rekognition, which identifies
people and objects in images, and can spot inappropriate content; and Forecast, which makes predictions when
fed historical data by combining time series analysis with other data, such as product information, using
machine learning.

Like Google's AutoML, Amazon's AI/ML Services let you forego specifying a neural network model; simply run a
script and the system tries a bunch of nets, and you let it know when it arrives at predictions that fulfill your
objective. APIs let you incorporate the results of predictions into your applications.

Microsoft offers Azure Cognitive Services, including vision, language, and speech services, to classify images,
understand spoken phrases, and create question-and-answer sessions from documents such as an FAQ.

IBM's Watson offers a raft of services within categories such as Knowledge and Data and Speech that offer
functions such as text-to-speech, speech-to-text, and the Knowledge Catalog, which can find, curate, and
categorize data within metadata you feed it.

In each of these cases, you not only don't program, you don't have to provision infrastructure services from the
major vendors. You simply set up your data in the cloud and pay by the number of characters, documents, or
images you process, at varying rates from each vendor.

Many of these APIs are an extension of the idea of serverless computing, where programming functions can
join many different functions together. Hence, each vendor's cloud serverless functions can be used as glue to
tie together these AI and machine learning services. They include Amazon AWS's Lambda architecture,
Microsoft's Azure Functions, Google's Cloud Functions, and IBM Cloud Functions. For takers of AI, serverless
functions will be an increasingly important glue for stitching together lots of capabilities rather than writing
everything from scratch.

More and more, the vendors are adding functions that make these basic machine learning tasks behave like
finished applications. IBM's Watson Discovery News, for example, can analyze blogs and news reports for
categories and sentiments. Google is relatively new with packaged offers, having recently rolled out Contact
Center AI, a call handling app that uses virtual agent technology, and Cloud Talent Solution, a job
search program.
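
For a taker, consuming one of these APIs, or stitching it into a serverless function, is a few lines of code. Here is a hedged sketch of an AWS Lambda handler calling Amazon Comprehend; the event field name is hypothetical, while the boto3 calls are real:

```python
# Serverless "glue": a Lambda handler that runs Amazon Comprehend sentiment
# analysis on each incoming record. The event shape is a made-up example.
import json
import boto3

comprehend = boto3.client("comprehend")

def handler(event, context):
    text = event["review_text"]                       # supplied by the caller
    result = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    return {
        "statusCode": 200,
        "body": json.dumps({"sentiment": result["Sentiment"]}),
    }
```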


THE FUTURE IS EMBEDDED AI

The next step for makers and takers alike is to incorporate AI into much larger applications. Known as embedded
machine learning and AI, such programs are especially well represented by two giants of enterprise applications:
Oracle and Salesforce.

Oracle has a solid pitch for makers who want to start from their data repository and work outward from there.
Its Platform-as-a-Service, or PaaS, tools such as the Autonomous Data Warehouse and the Data Science Cloud
are data stores that embed the ability to develop and train neural network models, using TensorFlow, scikit-learn,
and other popular frameworks.
For those who are takers, Oracle offers a suite of what are known as Adaptive Intelligence applications, in the
domains of customer experience, enterprise resource planning, and manufacturing. These applications act as
add-ons, for a separate fee, that integrate with Oracle's traditional apps in those areas. Models built by Oracle will
yield insights such as the next best action for a sales team, or how to provide optimal discounts to suppliers. Oracle
enhances the offering with what it calls Firmagraphics, data on companies and industries that the company has
amassed through a number of acquisitions.

Salesforce stakes out a position firmly in the taker camp, with its Einstein family of machine learning functions
meant to enhance its selling, marketing, and customer service apps, similar to Oracle. Within an application for
sales, for example, a rep will see lead scoring of prospects, based on an assemblage of neural network models that
the company runs under the hood as a tournament of competing machine learning models.

The takers here, the Salesforce admins in a company responsible for providing the applications to enterprise
users, can deploy the capabilities without engaging in the design of models. Instead, they turn on capabilities with
the help of prompts from the programs that recommend features suitable to the organization, which can be
customized to the firm's needs.

OH, THE PRICES YOU’LL CALCULATE!


Have your calculators ready, or, better yet, reach for an online bill calculator, because machine learning in the cloud
involves a variety of pricing models that achieve a somewhat complex equation.


The Big Four's pricing plans for the most sophisticated AI development and training are generally broken down
into separate training and inference pricing. Hours of training are then multiplied by various units of capacity,
reflecting the compute power you're using, depending on the kind of compute instance you select.
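
The shape of that arithmetic is easy to sketch, even though every number below is hypothetical rather than any vendor's actual rate:

```python
# Back-of-envelope cost model: training hours x capacity units x unit rate,
# plus a per-thousand-predictions inference charge. All rates are made up.
TRAINING_HOURS = 40
CAPACITY_UNITS = 8              # vendor's multiplier for the chosen instance type
RATE_PER_UNIT_HOUR = 0.49       # dollars

PREDICTIONS_PER_MONTH = 2_000_000
RATE_PER_1K_PREDICTIONS = 0.10  # dollars

training_cost = TRAINING_HOURS * CAPACITY_UNITS * RATE_PER_UNIT_HOUR
inference_cost = PREDICTIONS_PER_MONTH / 1000 * RATE_PER_1K_PREDICTIONS
print(f"training ~ ${training_cost:,.2f}; monthly inference ~ ${inference_cost:,.2f}")
# training ~ $156.80; monthly inference ~ $200.00
```

Note that the training cost is a one-time (or per-retraining) charge, while the inference charge recurs for as long as the model stays live.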

There are exceptions. For example, IBM prices its Watson Machine Learning as a combined training and inference
cost, somewhat reflecting the view that training may be done offline, behind the firewall. Microsoft doesn’t charge
for training, it says, though you still have to pay for the underlying virtual machine instance.

Choosing acceleration chips, such as GPUs or Google’s TPU, adds another cost on top of the base price.

For some of the API choices, such as video search, image categorization, or text to speech, you’ll pay in allotments
of pennies or dollars per minute of video, or thousands of images, or thousands of characters of text, based on
how frequently you are sending API requests to perform inference.

Still other modules are priced on a per-seat basis. IBM charges $99 per user, per month, for the cloud version of its
Watson Studio neural network design tool, but $199 per month for a desktop version. Another fee is charged for
local installations behind the firewall.

Oracle’s Adaptive Intelligence apps range in price for the different bundles but are charged based on a per-user
license, with the CX version, for marketing, sales and services roles, costing $1,000 per month per user, plus $5 for
every 1,000 interactions per month.



Salesforce's Einstein capabilities are included with the Unlimited version of the company's Lightning platform, but
for other cloud SKUs there's an extra charge of $4,000 per month that increases depending on the millions of
predictions you ask of the software.

Remember that in several cases, customers will end up amassing a store of credits, such as in Oracle's system,
which can then be allotted to services on a case-by-case basis. Consequently, spending may be a matter not merely
of budget allocation but also of deciding how to spend credit already collected with a given vendor.

Using the online calculators can be helpful, but your best bet is to try the free version of each application. This way,
you can get a feel for how machine learning training time adds up, in the case of building or customizing machine
learning models; how much data you’ll have to use in the cloud; and at what rate you are likely to draw predictions
from any of these systems. Especially for the last item, the meter is running once you go live with an AI model, and
will keep running for as long as you and your users keep asking the system for predictions.

THE OFFERINGS

Google Cloud Machine Learning Engine

Google has arguably the deepest portfolio of machine learning technology of any of the Big Four. You could
do worse than use the company’s own developed algorithms in its AutoML service. And the Tensor Processing
Unit chips are a unique offering for those in the market for AI acceleration. Google’s control of the ubiquitous
TensorFlow framework for machine learning implies you’re in especially good hands if that’s your development
platform of choice. 

Amazon AWS SageMaker


Amazon has been in the cloud computing business longer than anyone, so the breadth of offerings to complement
SageMaker is substantial, and many may already be familiar with pricing and buying in the Amazon system. The
company’s marketplace of third-party machine learning programs that can be added on top of Amazon’s own is
superior to others. The introduction of custom ARM-based processors for cloud compute will be complemented
later this year by Amazon’s first home-made inference chip.  

Microsoft Azure Machine Learning


As a pioneer in speech and vision and natural language processing, Microsoft’s Redmond research labs have
endowed the software giant with a substantial claim to greatness in modern machine learning, which should inform
the company’s cloud AI offerings in those functions. Microsoft can also provide an on-premise or hybrid cloud
machine learning experience with enterprise applications such as SQL Server that embed analytics and machine
learning capabilities. Its development of the open standard ONNX technology for AI model portability also sets
the company apart, as does its development of FPGAs as acceleration tools for machine learning inference.


IBM Watson Machine Learning

IBM has the richest set of tools to take AI from a company's internal data sets all the way to publicly accessible
Web services that deliver analysis. The company's Watson Studio acts as a hub that can coordinate reaching into
on-premise repositories such as Db2 or Oracle DB; clean up and prepare the data via multiple programs such as
Data Stage or Cloud Private Data; analyze it with applications such as Knowledge Studio; and then deploy
predictions to the Web, all built upon a modern Kubernetes architecture. IBM's decades of interaction with
transaction processing systems mean an added ability to perform machine learning on things such as fraud and
get a result within the window of milliseconds necessary for every transaction.

Salesforce Einstein

The Einstein suite from Salesforce offers the same simple, pure approach that the cloud company pioneered: a
minimum of engagement with the messy details of provisioning and deploying software and systems. The focus is
applications that sit atop the company's existing cloud-based commercial apps, making deploying and consuming
machine learning as easy as possible for admins and IT workers. No machine learning development is required for
an admin to turn on functions, and predictions, such as the next best action for a sales rep, are surfaced in the
context of the apps they already use. Salesforce can draw upon 20 years of customer trends as data that fuels the
predictions of Einstein's embedded algorithms.

Oracle Adaptive Intelligence Apps

With decades in transaction processing and the database that stores the vast majority of enterprises' data, Oracle
is well positioned to make machine learning a function within the same user interface that customers use daily.
The company has coupled its infrastructure-as-a-service offerings, such as bare metal computing, to its extensive
developer platform in the cloud, as platform-as-a-service, to make possible autonomous programs that speed up
database functions by anticipating much of the analytic work that would have to be done by hand. Programs such
as the Autonomous Data Warehouse can then feed into the Adaptive Intelligence applications to deliver
line-of-business predictions such as which customers are more likely to be closed in a given time frame, or which
suppliers should be given special payment terms.

OTHER PLAYERS
In addition to the Big Four, a number of young companies are offering overlays to cloud computing that aim to
speed machine learning model training and deployment, and that in some cases can offer lower rates on compute
and storage by amortizing costs across many users.

Paperspace
Straight out of Brooklyn, New York, the Intel-backed startup offers a job scheduler called Gradient that handles
the details of running neural networks in the cloud. You install the company's command-line tool on your local
machine and turn on a Jupyter notebook, pre-packaged with machine learning frameworks and runtimes, all in a
Docker container that packages up your model, which is then


submitted to Gradient to be run in a cloud instance. You pay either by the hour, with rates varying by CPU, GPU, or
TPU, or for a flat monthly fee of $8 for teams, with other rates for enterprise use. Data storage charges also apply. 

FloydHub
With an illustrious crew from Microsoft and Oracle, and backers such as Y Combinator and GitHub, FloydHub
aims to simplify model deployment via a simple command-line interface connecting to cloud computing instances,
similar to Paperspace. The company offers monthly plans of $9 for individuals and $99 for teams, as well as the
option of per-second pricing.

DigitalOcean
Run by former Citrix Systems CEO Mark Templeton, DigitalOcean claims it can get your compute instance in the
cloud up and running in as little as 55 seconds, using pre-built virtual machines, called droplets, with a choice of
Linux distributions. An application programming interface lets you start and run multiple droplets in parallel and
tag each one to filter job instances. Prices start at less than a penny per hour across a wide array of compute
configurations. A cluster of Kubernetes application containers can be had for $30 per month.
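
As a sketch of that API-driven workflow, here is how starting and tagging droplets might look with the community python-digitalocean client; the token, names, size slug, and the tags parameter are assumptions for illustration:

```python
# Launch two droplets in parallel and tag them so job instances can be
# filtered later; the token and resource names are placeholders.
import digitalocean

TOKEN = "your-api-token"

for i in range(2):
    droplet = digitalocean.Droplet(token=TOKEN,
                                   name=f"ml-worker-{i}",
                                   region="nyc3",
                                   image="ubuntu-18-04-x64",
                                   size_slug="s-4vcpu-8gb",
                                   tags=["ml-job"])      # tag support assumed
    droplet.create()
```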

Snark
The Baidu-backed startup promises to let you test thousands of different models on multiple cloud instances from
the command line. Infrastructure costs range from 27 cents per hour up to $6, depending on GPU selection, with
a terabyte of model and data storage for $23, plus extra fees for pro and enterprise tiers. The company cuts the
fees of normal cloud jobs by storing persistent Jupyter notebook instances and repeatedly restarting spot GPU or
CPU instances after they stop running.

NimbleBox
Designed for ultra-fast machine learning setup, NimbleBox's Web-based dashboard starts you off with a blank
project template or a template that lets you clone a GitHub repository. Click a button and you're up and running in
a Jupyter notebook online. The service features only one instance type at the moment, an Nvidia K80 GPU with
15GB of memory attached to a four-core CPU and 50GB of space. Pricing starts at $10 per month for individuals
and $49 for a professional plan.

CREDITS

Editor in Chief
Bill Detwiler

Editor in Chief, UK
Steve Ranger

Editor, Australia
Chris Duckett

Associate Managing Editor
Mary Weilage

Senior Features Editor
Jody Gilbert

Senior Editor
Alison DeNisco Rayome

Senior Writer
Teena Maddox

Chief Reporter
Nick Heath

Staff Writer
Macy Bayern

Associate Editor
Melanie Wachsman

Multimedia Producer
Derek Poore

ABOUT ZDNET
ZDNet brings together the reach of global and the depth of local, delivering 24/7 news coverage and analysis on
the trends, technologies, and opportunities that matter to IT professionals and decision makers.

ABOUT TECHREPUBLIC
TechRepublic is a digital publication and online community that empowers the people of business and technology.
It provides analysis, tips, best practices, and case studies aimed at helping leaders make better decisions
about technology.

DISCLAIMER
The information contained herein has been obtained from sources believed to be reliable. CBS Interactive Inc.
disclaims all warranties as to the accuracy, completeness, or adequacy of such information. CBS Interactive Inc.
shall have no liability for errors, omissions, or inadequacies in the information contained herein or for the
interpretations thereof. The reader assumes sole responsibility for the selection of these materials to achieve its
intended results. The opinions expressed herein are subject to change without notice.

Copyright ©2019 by CBS Interactive Inc. All rights reserved. TechRepublic and its logo are trademarks of CBS
Interactive Inc. ZDNet and its logo are trademarks of CBS Interactive Inc. All other product names or services
identified throughout this article are trademarks or registered trademarks of their respective companies.
