Licensed for Distribution

Zero Trust Is an Initial Step on the Roadmap to CARTA


Published 10 December 2018 - ID G00377791 - 25 min read
By Analysts Neil MacDonald

Customer interest in and vendor marketing of a “zero trust” approach to networking are
growing. It starts with an initial security posture of default deny. But, for business to occur,
security and risk management leaders must establish and continuously assess trust using
Gartner’s CARTA approach.

Overview
Key Findings
■ Zero trust networking starts with a security posture of default deny. Trust is assessed at the
initiation of network connectivity. But, the term zero trust is a misnomer, as inevitably trust
needs to be extended for the work of digital business and government to get done.

■ Zero trust projects target networking (microsegmentation and software-defined perimeters) because of excessive implicit trust in network connectivity and limitations of perimeter security.

■ Perimeters will actually increase in number, becoming more granular and shifting closer to the
logical entities they protect — the identities of users, devices, applications, data and workloads.

■ A CARTA strategic approach expands zero trust networking by assessing risk/trust continuously throughout the duration of the network interaction, adapting as needed. Further, CARTA extends these adaptive risk/trust assessments beyond networking to all information security processes.

■ Excessive trust, like excessive risk, represents waste and a latent cost to the organization.
Continuously assessing risk/trust and adapting leads to lean trust, not zero trust.

Recommendations
Security and risk professionals responsible for improving hybrid cloud security posture should:

■ Budget and pilot two zero trust networking projects in 2019 — microsegmentation and a
software-defined perimeter — to significantly improve the security posture of the organization.

■ Avoid using “zero trust” as a term to sell security investments to business executives. Talk
about continuously assessed risk and trust that can adapt to the changing context and adapt to
the risk tolerance levels of business leaders, enabling new digital business, cloud and mobile
initiatives.

■ Identify projects outside of zero trust networking where excessive trust represents a latent cost
and where the security posture can be significantly improved by risk optimizing the trust.

■ Use CARTA as a strategic approach to frame the evolution of all security and risk infrastructure
(not just networking) to be continuously adaptive to varying levels of risk and trust.

Analysis
This document was revised on 19 December 2018. The document you are viewing is the corrected
version. For more information, see the Corrections
(https://www.gartner.com/en/about/policies/current-corrections) page on gartner.com.

Significant market attention and hype around the term “zero trust” have developed over the past
several years. In a recent public Gartner webinar on Continuous Adaptive Risk and Trust
Assessment (CARTA), 1 70% of attendees had heard of the term “zero trust.” However, 23% of
these attendees weren’t quite sure what it means. The term zero trust is compelling, but remains
largely conceptual for most enterprises. Only 8% of the respondents had a specific zero trust
networking project planned for 2019.

To better understand how digital trust is evolving, consider this simple definition of trust:

Trust is the bidirectional belief established between two entities that the
other entity is what it claims to be and that it will behave in expected ways
during the duration of the interaction. Trust leads to access to capabilities
between the entities that otherwise should not be possible.

This simple definition has several implications:

■ Trust is not inherently a good thing. Trust is what we use in lieu of absolute certainty. However,
we need trust to extend or access capabilities that otherwise should not be possible.

■ Trust is not absolute, binary or static. It is an indication of the relative level of strength of the
assurance of the belief. Further, the level of trust is dynamic and changes over time. Thus,
access to the capabilities should be adapted.

■ To compensate for the risk of extending capabilities based on a belief, we should monitor for
expected behaviors during the interaction. If behaviors deviate from expectations in a risky way,
access to the capabilities should be adapted or removed entirely.

Zero trust is misnamed. A strict interpretation of “zero trust” would mean that no special
capabilities are extended. With zero trust networking, the initial security posture is one of no
implicit trust (“zero trust”) between different entities. At the point where trust is needed to enable
access to capabilities, a level of sufficient trust must be established. This is based on an
assessment of the current context (e.g., location, device, credentials) and the organization’s tolerance for risk. Many vendors refer to zero trust as “never trust, always
verify,” even though trust needs to be extended to get work done. The key is that there is no
implicit trust — the trust level is explicitly and dynamically calculated based on context. In a
seminal book on zero trust networking, 2 pillar number five is “policies must be dynamic and
calculated from as many sources of data as possible.” Indeed, more advanced zero trust
networking offerings dynamically adapt the capabilities extended based on the initial level of risk
and trust established. Given this, a simple definition of zero trust is:

Zero trust networking is a concept for secure network connectivity where the initial security posture has no implicit trust between different entities, regardless of whether they are inside or outside of the enterprise perimeter. Least-privilege access to networked capabilities is dynamically extended only after an assessment of the identity of the entity, the system and the context.
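
To make the idea of explicitly calculated, context-based trust concrete, here is a minimal, hypothetical sketch (not from the Gartner research; the signals, weights, thresholds and entitlement names are illustrative assumptions): the request starts from default deny, a relative trust score is computed from contextual signals, and only the least-privilege capabilities that the score justifies are extended.

```python
# Hypothetical sketch: default-deny connection gating with a context-based trust score.
# The signals, weights, thresholds and entitlements are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    device_managed: bool   # is the device enrolled and compliant?
    mfa_passed: bool       # did the user complete multifactor authentication?
    known_location: bool   # is the request from a previously seen network/location?

def trust_score(ctx: Context) -> float:
    """Combine contextual signals into a relative (not absolute, not static) trust level."""
    score = 0.0
    score += 0.4 if ctx.device_managed else 0.0
    score += 0.4 if ctx.mfa_passed else 0.0
    score += 0.2 if ctx.known_location else 0.0
    return score

def grant(ctx: Context) -> list[str]:
    """Default deny: extend only the least-privilege capabilities the current score justifies."""
    score = trust_score(ctx)
    if score >= 0.8:
        return ["app:finance", "app:email"]  # broader, but still explicitly named, access
    if score >= 0.4:
        return ["app:email"]                 # reduced access for partial trust
    return []                                # zero implicit trust: nothing is reachable

print(grant(Context(device_managed=True, mfa_passed=True, known_location=False)))
# -> ['app:finance', 'app:email']
```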

In Gartner research, we call this shift to continuously assessing and adapting to relative risk and
trust levels “CARTA.” CARTA extends zero trust networking by continuously monitoring and
assessing the levels of risk and trust during the interaction after access to the capabilities is
extended. If the trust drops or the risk increases to a threshold requiring a response, access to the
capabilities extended should adapt accordingly. Further, we have extended the CARTA strategic
approach beyond networking, to all layers of the IT stack, into the creation of new digital business
capabilities and into risk governance processes (see Figure 1).

Figure 1. CARTA Strategic Approach


Source: Gartner (December 2018)

What’s Driving the Interest in Zero Trust?


Almost all zero trust projects refer to one of two types of zero trust networking. Why networking?
The internet was designed to connect things easily, not to block connections. The internet uses
inherently weak identifiers (specifically, IP addresses) to connect. If you have an IP address and a
route, you can connect and communicate to other IP addresses. IP addresses are not designed to
be authentication mechanisms. The messy problem of authentication is handled by higher levels
of the stack, typically the OS and application layers. For network connectivity, this default allow
posture creates an excessive amount of implicit trust.

Attackers abuse this trust. The first companies that connected to the public internet quickly found
out that they needed a demarcation point where their internal network connected to the internet.
This ultimately created what has become a multibillion dollar market for perimeter firewalls.
Networked systems on the inside were “trusted” and free to communicate with each other.
External systems were “untrusted” and communications with the outside, inbound or outbound,
were blocked by default. If needed, these required an exception rule — a hole — in the firewall.

This trusted/untrusted network security model is a relatively coarse and crude control, but it was
initially effective. However, it creates excessive trust that is abused by attackers in one of two ways:

■ From inside our networks: Assuming all systems inside are “trusted” leads to easy
lateral spread of attacks (also referred to as east/west movement) when an internal system is
compromised or when a compromised device (typically a laptop) is connected to the internal
network. This is often described as a “hard exterior, soft chewy interior” security model, as the
interior is soft once the outer shell is breached. To this day, this is the leading cause of
breaches being so extensive and damaging.

■ From outside our networks: When external access to our systems and services is needed, we
typically do one of two things. For some users, we create a VPN to allow the user to punch
through the firewall and connect to the internal network. Once “inside,” the VPN connection is
treated as trusted. Alternatively, we place the front end to the service in a segmented part of the
network with direct internet connectivity (referred to as a demilitarized zone [DMZ]) so users
can access it. Both alternatives create excessive trust, resulting in latent risk. In the case of
VPNs, attackers with credentialed access now have access to our networks. (The Target HVAC
breach is an example. 3) Likewise, if the service is exposed in the DMZ, anyone on the internet
— including all of the attackers — can see it as well.

In both of these cases, excessive network trust leads to excessive latent risk. Network
connectivity (even the right to “ping” or see a server) should not be an entitlement; it should be
earned based on trust. This excessive network trust will inevitably be exploited, leading to
breaches and bringing legal, financial and regulatory exposure.

Legacy perimeter security simply won’t work and won’t scale for the requirements of digital
business and digital government. Digital transformation inverts our entire security model, further
increasing the risk of legacy network security models. We see these things happening as a result:

■ We will have more users outside of our enterprise accessing our systems and services than
users inside.

■ We will have more unmanaged devices connecting to our systems and services than managed
devices.

■ Our internal users will consume more applications and SaaS services delivered from outside of
our enterprise network than from the inside.

■ More traffic from branch offices will access services via the internet than via our data centers.

■ In the world of mobile users and unmanaged devices, IP addresses are transient, often with
address translation used. Trying to restrict access to applications and services for mobile users
based on IP addresses is futile, and forces users to perform network gymnastics to route their
traffic through on-premises systems for access — even for SaaS applications. The use of IP
addresses to set security policy is ineffective.


■ In the modern hybrid data centers and in public cloud IaaS, with cloud-native architectures
using VMs, containers and serverless functions (see “Security Considerations and Best
Practices for Securing Serverless PaaS”), IP addresses are transient, often with address
translation used. Again, IP addresses are an ineffective way to set security policy.

As the value of legacy network “inside versus outside” (also referred to as north/south) perimeters
decreases, new approaches to creating network trust are needed. This doesn’t mean perimeters
go away. The hype around “perimeterless” networks is misguided as there will actually be an
increase — not a decrease — in the number of demarcation boundaries of trust. Perimeters should
become more granular and shift closer to the logical entities they are protecting — notably the
identities of users, devices, applications and workloads (including networked containers in
microservices architectures). This is why the phrase “identity is the new perimeter” is so widely
used. This shift applies both for network connectivity from within the hybrid data center and for
external network access to our enterprise systems and applications.

Trust should not be established with an IP address. For users, it should be established with a
contextual assessment of the trust of the user and the device, along with an assessment of the
risk of the data, application or transaction being accessed. For workloads and applications, trust
should be based on a contextual assessment of the workload including the identity, the
application/service running, the data being handled and any associated tags/labels. These shifts
and the need to reduce risk by reducing the surface area for attack have driven the significant
interest in two zero trust networking projects. One project provides stronger protection from
attacks inside the network (“keeping the bad things out”), and one project provides stronger
secure access (“letting the good things in”). We can directly map these projects to Gartner’s
CARTA strategic approach.

Zero Trust and CARTA


Both types of zero trust networking projects use a default deny network
connectivity posture as the starting point. An assessment of the identity and trust of the entity and
the device takes place before network access is granted. Even then, we aren’t done. That’s where
CARTA comes in. In a CARTA strategic approach, we continue to monitor and assess the entity
and its behaviors over the duration of the interaction. The two types of zero trust networking
projects can be visualized as an initial step on Gartner’s adaptive security architecture used in
Gartner’s CARTA strategic approach (see “Seven Imperatives to Adopt a CARTA Strategic
Approach”).

In Gartner’s adaptive attack protection architecture used in CARTA research (see Figure 2), the red
box in the upper right-hand corner indicates the initial steps of a zero trust networking project
that would protect from internal attacks.

Figure 2. Using a Default Deny, Zero Trust Initial Posture for Adaptive Attack Protection


Source: Gartner (December 2018)

In our enterprise networks, the default networking security posture should be that of default deny.
Systems should be hardened and isolated until sufficient trust is established to allow network connectivity. CARTA extends the concept of zero trust further and treats attack
protection as a continuous risk/trust assessment problem — the entire bottom half of Figure 2. In
the center of the graphic, CARTA also extends this beyond networking. For example, CARTA
monitors executable code as it runs on a system for indications of malicious behavior and risk
even if it passes the initial risk/trust assessment. This technology and market are referred to as
endpoint detection and response (see Note 1).
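
To illustrate the “keep assessing after allowing” half of Figure 2, the following is a rough, hypothetical sketch (not an actual EDR product or API; the observation names, weights and thresholds are made up). Code that passed the initial gate is re-scored on each new observation, and the response adapts as risk crosses thresholds.

```python
# Hypothetical sketch of continuous risk assessment after an executable has been allowed to run.
# Observation names, weights and thresholds are illustrative assumptions.
RISK_WEIGHTS = {
    "unusual_child_process": 0.4,
    "connects_to_low_reputation_ip": 0.3,
    "reads_many_credentials": 0.5,
}

def assess(observations: list[str]) -> float:
    """Accumulate risk from behaviors observed since the code passed the initial risk/trust gate."""
    return sum(RISK_WEIGHTS.get(obs, 0.0) for obs in observations)

def respond(process_id: int, observations: list[str]) -> str:
    """Adapt the response as risk grows, rather than deciding only once at load time."""
    risk = assess(observations)
    if risk >= 0.8:
        return f"terminate process {process_id} and quarantine the host"
    if risk >= 0.4:
        return f"raise an alert and increase monitoring of process {process_id}"
    return "continue monitoring"

print(respond(4242, ["unusual_child_process", "connects_to_low_reputation_ip"]))
# -> 'raise an alert and increase monitoring of process 4242'
```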

Likewise, in Gartner’s adaptive access protection architecture used in CARTA research (see Figure
3), the red box in the upper right-hand corner indicates the initial security posture of default deny.
Users have no implicit access until an assessment of the user’s credentials, device and context is made.

Figure 3. Using a Default Deny, Zero Trust Initial Posture for Adaptive Access Protection


Source: Gartner (December 2018)

For access to our enterprise systems and data, the default networking security posture should be
that of default deny. No access should be granted until a sufficient level of trust is established, based on context, to allow network connectivity. CARTA extends the concept of zero trust
further and treats access protection as a continuous risk/trust assessment problem — the entire
bottom half of Figure 3. In the center of the graphic, CARTA also extends this beyond network
access. For example, a CARTA strategic approach monitors a user’s actions for risk even if they
have passed the initial risk/trust assessment and have been given access to an application. This
technology and market are referred to as user and entity behavioral analytics (UEBA; see Note 2).
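
The access side of Figure 3 can be sketched the same way. In this hypothetical example (made-up session attributes and thresholds, not a UEBA product’s actual logic), a session that passed the initial gate is still re-evaluated as it is used, and the response can be step-up authentication or revocation.

```python
# Hypothetical sketch: a session granted at connection time is re-evaluated while in use.
def session_action(downloads_last_hour: int, off_hours: bool, new_country: bool) -> str:
    """Map observed in-session behavior to an adaptive response (illustrative thresholds)."""
    risk = 0.0
    risk += 0.5 if downloads_last_hour > 500 else 0.0  # bulk export is treated as anomalous here
    risk += 0.2 if off_hours else 0.0
    risk += 0.3 if new_country else 0.0
    if risk >= 0.7:
        return "revoke the session and the user's credentials"
    if risk >= 0.3:
        return "require step-up authentication before continuing"
    return "allow the session to continue"

print(session_action(downloads_last_hour=900, off_hours=True, new_country=False))
# -> revoke the session and the user's credentials
```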

The boxes in the upper right-hand corner of Figures 2 and 3 lead directly to two zero trust
networking projects that can be implemented in 2019 to improve the network security posture of
the organization:

■ Project 1: Microsegmentation — upper right-hand corner of Figure 2

■ Project 2: Software-Defined Perimeter — upper right-hand corner of Figure 3

Project 1: Microsegmentation
Data center networks are typically isolated from the public internet and separated from end-user
desktops. However, once in the data center, the network architecture is typically flat — a default
allow security posture. Any system inside the “trusted” data center can initiate network
communications with any other system. An attacker that gains a foothold on one server can easily
spread laterally (east/west) to other systems. Much like bulkheads in a submarine that protect
from a breach using segmentation, we need logical, software-defined bulkheads (referred to as
microsegmentation) in our data centers.

A microsegmentation project minimizes and contains the breach when it inevitably occurs. Rather than basing segmentation policies on IP addresses, policies are based on logical (not physical) attributes. Further, more advanced network microsegmentation solutions monitor and
baseline flows, and alert on anomalies. They also continuously assess the relative levels of
risk/trust of the network session behavior observed (for example, unusual connectivity patterns,
excessive bandwidth, excessive data transfers, communication to URLs or IP addresses with low
levels of trust). If a network session represents too much risk, an alert can be raised or the
session can be terminated.

We have seen a significant amount of interest (more than 300 inquiries in the past 12 months) in
microsegmentation technologies and projects. As the need for network segmentation strategies
gets even more granular in the era of microservices-based applications, some vendors are
referring to these projects as nanosegmentation or application microsegmentation. All of these
terms are variations on the same fundamental microsegmentation principles:

■ Workloads should not communicate with anything (default deny) until a sufficient level of
bidirectional trust is established based on the identity of the workloads, typically as the
workload is instantiated.

■ Workload identity is based on logical attributes — such as the identity of a certificate, the
application service, the container 4 or the use of a logical label/tag associated with the
workload (for example, Payment Card Industry [PCI]). These logical attributes are used to set
policies. While ultimately the policies may map to IP addresses underneath, policies are set
using logical attributes, not physical attributes.

■ Workloads of similar patterns/functions are grouped together for ease of management and
policy setting (for example, all workloads tagged “PCI” are segmented together and treated
similarly).

■ Groups of workloads are provided only the network capabilities they need, implementing least
privilege.

■ Workload microsegmentation policies should apply regardless of the physical location of the
workload. This should include hybrid data centers spanning on-premises and public cloud IaaS,
and into the networks inside of container-based applications as well.

To implement microsegmentation, a variety of approaches are used, including network overlays, network encryption, software-defined network (SDN) integration, host-based agents, virtual appliances, containers or using the native APIs of the underlying cloud fabric to achieve segmentation. Gartner is tracking more than 20 vendors (see Note 3) in the microsegmentation category.
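
As a minimal sketch of the principle that segmentation policy is expressed in logical attributes rather than IP addresses (the labels, the single allow rule and the lookup below are illustrative assumptions, not a specific vendor’s policy model): workloads carry labels, rules reference only labels, and anything not explicitly allowed is denied. Addresses appear only as the lookup key at enforcement time, echoing the point that policies may map to IP addresses underneath but are never set with them.

```python
# Hypothetical sketch: default-deny, label-based microsegmentation policy.
WORKLOAD_LABELS = {
    "10.0.1.15": {"app": "payments", "scope": "pci"},
    "10.0.2.20": {"app": "payments-db", "scope": "pci"},
    "10.0.3.30": {"app": "marketing-web", "scope": "general"},
}

# Rules name logical attributes only; they never reference addresses directly.
ALLOW_RULES = [
    {"from": {"app": "payments"}, "to": {"app": "payments-db"}, "port": 5432},
]

def matches(labels: dict, selector: dict) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())

def is_allowed(src_ip: str, dst_ip: str, port: int) -> bool:
    """Default deny: only an explicit label-based rule opens a network path."""
    src = WORKLOAD_LABELS.get(src_ip, {})
    dst = WORKLOAD_LABELS.get(dst_ip, {})
    return any(
        matches(src, rule["from"]) and matches(dst, rule["to"]) and rule["port"] == port
        for rule in ALLOW_RULES
    )

print(is_allowed("10.0.1.15", "10.0.2.20", 5432))  # True: payments may reach payments-db
print(is_allowed("10.0.3.30", "10.0.2.20", 5432))  # False: no rule for marketing-web (default deny)
```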

Project 2: Software-Defined Perimeter (SDP)


Inside the enterprise network our users have full visibility (default allow) to see applications and
services. When users are “outside” of the perimeter firewall, they can’t see them. We traditionally use DMZs and VPNs to solve this, but both of these workarounds result in excessive
trust. In a world of anywhere, anytime access to our applications and services from any device, we
need to rethink access. Why not design a network where there is no “inside” or “outside” from the
user’s perspective? Why can’t all applications be accessible from anywhere, without requiring the
user to do anything differently? In this model, the user doesn’t have to figure out the method of
access based on the context of where they are, what time of day it is or what type of device they
are using. The network figures this out for them. This is exactly the vision of an SDP.

There are multiple implementation styles. Some offerings use an agent, others are agentless;
some remain in-line, others don’t; some are offered as a cloud service, some on-premises, some
both. But all are variations on the same fundamental principles (a minimal sketch of the flow follows this list):

■ A named user can’t access a service until a sufficient level of trust is established (services are
initially hidden from all users). Authenticate first, then connect.

■ The level of trust is established at connection time, and is context-based including context such
as device trust, user trust, location and time of day. The level of access granted is also context-
based, granular (“precision access”) and configured for least privilege, typically to a specific
application or service based on the user’s identity and role.

■ Some type of trust broker, controller or service validates the level of trust of the user and the
device, and communicates this to a gateway or agent that protects the service. At that point, an
outbound connection is typically made from the gateway to the user (removing the need for
inbound firewall rules, also significantly reducing the enterprise surface area for attack).
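
The “authenticate first, then connect” flow described above can be sketched in a few lines (a hypothetical illustration; the broker logic, entitlements and gateway behavior are assumptions, and real SDP products differ in their details):

```python
# Hypothetical sketch of an SDP trust broker and gateway: authenticate first, then connect.
def broker_evaluate(user: str, device_trusted: bool, mfa_passed: bool) -> set[str]:
    """Trust broker: decide which otherwise-hidden services this user may even see."""
    if not (device_trusted and mfa_passed):
        return set()                 # the service stays invisible; there is nothing to scan or attack
    entitlements = {"hr-portal"}     # least privilege, based on the user's identity and role
    if user.endswith("@finance.example.com"):
        entitlements.add("erp")
    return entitlements

def gateway_connect(user: str, service: str, entitlements: set[str]) -> str:
    """Gateway: admit only brokered, named users, and only to the specific service granted."""
    if service in entitlements:
        # In many SDP designs the gateway dials outbound to the broker/client,
        # so no inbound firewall rule for the protected service is required.
        return f"connected {user} to {service}"
    return "connection refused: no brokered entitlement"

entitlements = broker_evaluate("ana@finance.example.com", device_trusted=True, mfa_passed=True)
print(gateway_connect("ana@finance.example.com", "erp", entitlements))  # connected
print(gateway_connect("ana@finance.example.com", "crm", entitlements))  # refused: least privilege
```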

There has been growing interest in SDP projects. We have received more than 100 inquiries on this
topic over the past year. In many cases, the use case is risk reduction by replacing a legacy VPN
access solution, or use as an alternative to a DMZ as applications are migrated to IaaS. Other use
cases are described in “Fact or Fiction: Are Software-Defined Perimeters Really the Next-
Generation VPNs?” and “It’s Time to Isolate Your Services From the Internet Cesspool.” There are
various terms used for software-defined perimeter projects. Some vendors refer to these projects
as zero trust networking; others refer to them as software-defined access or software-defined
access perimeters. Google has offered its own vision for SDP, called BeyondCorp (see
Note 4), and has adopted it internally for its users. We are currently tracking more than 20 vendors
in this space (see Note 5).

Summary of Zero Trust Networking Projects


Both of these zero trust networking projects have immediate value in improving the overall
security posture of the organization. Both projects replace excessive trust with more granular and
contextual access based on an assessment of trust at the time of connection. In both projects,
zero trust (nothing can communicate with anything) is the initial security posture until sufficient
trust is established given the risk and current context to extend network connectivity. The two
projects and associated buying centers are different, but the same underlying technology might be
used to address both needs. Indeed, some zero trust networking vendors can address both of
these distinct use cases using the same underlying technology.

A comprehensive approach to adaptive attack protection and adaptive access protection as envisioned by CARTA can’t stop with just the initial gating assessment in the upper right-hand corner of Figures 2 and 3. Access is granted; then what? Once the initial connection is established,
a CARTA strategic approach monitors and assesses the session for indications of an attack,
compromised credentials, compromised systems and anomalous behaviors. The entire life cycle
of the interaction (all of Figures 2 and 3) should be protected by monitoring and assessing the
risk/trust levels over the duration of the session. If the trust drops or risk increases to a level requiring
action, adaptive security responses are taken. In other words, access to all IT infrastructure —
such as networks, data, APIs, systems, applications and services — is adaptive, based on a
continuous assessment of risk and trust from a starting point of zero trust. Zero trust networking
is an initial step on the roadmap to CARTA.

CARTA-Inspired Trust Leads to Lean Trust


A CARTA strategic approach views information security as a set of intertwined and interdependent
adaptive processes, capabilities and controls with data-driven feedback loops based on risk/trust
levels. Further, CARTA extends this approach into the creation of new digital capabilities and into
risk governance. Assessments and visibility of risk/trust and the exchange of context become the
immune system for digital business (see Figure 1).

The result is a future state of information security where we are always monitoring, assessing,
learning and adapting based on the relative levels of risk and trust that we actually observe.
Information security is becoming a data-driven, continuous improvement process. In our research
building out CARTA, we have found many parallels to continuous improvement in manufacturing, and we believe there are insights to be gained from modern manufacturing that can be applied to information security.

In the world of digital business, trust is a requirement to get things done. Once trust is extended, it
should not be a fixed amount. Trust should adapt and be optimized given the current context and
risk levels. We have described this as “just-in-time, just-enough trust” — much like inventory in a
lean manufacturing operation. Likewise, in manufacturing, inventory is needed to get things done.
“Zero trust” in security would be like “zero inventory” 5 in a manufacturing environment. Both are
aspirational; but trust, and inventory, are needed to get things done.

Further, once trust is extended, risk is inevitable. In manufacturing, once production starts, some
number of defects is inevitable. “Zero risk” is as unachievable as “zero defects.” Simply adopting “zero defects” as an ideology doesn’t create the methodology for a life cycle approach for
continuous improvement and a Six Sigma quality control program. Six Sigma is a disciplined,
strategic approach for continuous improvement in manufacturing. That is what CARTA is for
information security — a disciplined, strategic approach bringing continuous improvement to
information security by continuously assessing risk and trust. A CARTA strategic approach is Six
Sigma for information security.

Trust, like inventory, should be risk-optimized for the current context. Zero trust isn’t the goal; lean
trust (alternatively “risk-optimized trust,” “risk-appropriate trust” or “adaptive trust”) is. With lean
manufacturing, continuous improvement and data-driven decisions are used to adapt and
minimize waste. With lean trust, CARTA-inspired continuous improvement and data-driven risk-
based decisions are used to provide just-in-time, just-enough trust. Like excessive inventory,
excessive trust represents waste and a cost to the organization. The use of trust, like inventory,
should be optimized. A CARTA strategic approach envisions security infrastructure that supports
lean trust — just-in-time, just-enough trust — given the current context and risk levels, and the risk
tolerance of the organization.

Beyond the two zero trust networking projects discussed, we believe there are many other areas in
information security where excessive trust has created concentrations of latent risk that attackers
will target. By applying the concept of CARTA-inspired “lean trust” or “risk-optimized trust,” we
believe organizations can significantly improve their overall security posture by targeting these
specific areas of excessive trust with newer approaches that will be explored in future research.

Bottom Line
Zero trust is a useful network security concept, not a framework. Zero trust is an initial step on the
roadmap to CARTA — a strategic framework for information security where dynamic levels of risk
and trust are continuously assessed and security infrastructure is adapted to optimize the level of
trust extended. CARTA expands the notion of zero trust to lean trust — just-in-time, just-enough
capabilities — given the current context and risk tolerance of the enterprise, and continuously
monitoring, assessing and adapting to improve the enterprise security posture. By applying
CARTA-inspired lean trust concepts to areas of excessive trust in your enterprise, starting with the
network and extending to other areas, you can significantly improve your security posture in 2019
and beyond.

Evidence
1. Gartner webinar, “The 7 Imperatives of Continuous Adaptive Risk and Trust Assessment (CARTA),” 12 November 2018.

2. E. Gilman, D. Barth. “Zero Trust Networks: Building Secure Systems in Untrusted Networks.” First Edition. O’Reilly Media, 2017.

3. “Target Hackers Broke in Via HVAC Company,” (https://krebsonsecurity.com/2014/02/target-hackers-broke-in-via-hvac-company/) Krebs on Security.

4. The Secure Production Identity Framework For Everyone (SPIFFE) standard provides a
specification for a framework capable of bootstrapping and issuing identity to services across
heterogeneous environments and organizational boundaries. At its heart, SPIFFE is a standard
defining how services identify themselves to each other. These are called SPIFFE IDs and are
implemented as uniform resource identifiers (URIs). Consistent identity-based security policies for
workloads will benefit by having a consistent “universal” mapping of identities across hybrid and
multicloud data center architectures.

See “Secure Production Identity Framework for Everyone,” (https://spiffe.io/) The SPIFFE Project & Scytale.

5. “Zero Inventory Management: Facts or Fiction? Lessons From Japan,” (https://www.sciencedirect.com/science/article/pii/S092552739800022X) ScienceDirect.

S. Gahlan, V. Arya, “Study of Zero Inventory Based on Just In Time (JIT) in the Automotive Industry,” (http://ijaegt.com/wp-content/uploads/2015/08/409659-pp-1358-1373-vivek.pdf) International Journal of Advanced Engineering and Global Technology.

6. “Vidder Joins the Verizon Family,” (https://enterprise.verizon.com/resources/vidder/) Verizon.

Note 1
Endpoint Detection and Response
Suppose a malicious executable or weaponized content passes the initial risk/trust assessment
and is allowed to be downloaded and used on the system. We must continue to monitor and
assess the system (e.g., processes, ports, file activity) for indications of attack or anomalous
behaviors representing excessive risk. If the risk is too great, the EDR agent can adaptively
respond, for example, by killing the process or quarantining the system (see “EDR — Benefits,
Concerns and Issues”).

Note 2
User and Entity Behavioral Analytics
Suppose an attacker or malicious insider passes the initial risk/trust assessment and is given
access to enterprise applications such as SAP. We must continue to monitor and assess the
entity’s behavioral patterns for indications of excessive risk or anomalous behavior that would
indicate the credential was compromised or that the user might be executing an insider attack. If
the risk is too great, security infrastructure can adaptively respond, for example, by requiring the
user to provide another factor for authentication before continuing, or by revoking the user’s
credentials (see “Market Guide for User and Entity Behavior Analytics”).

Note 3
Microsegmentation Vendors
SDN-based

■ Cisco

■ Juniper Networks

■ VMware

Network-based appliance (physical or virtual)

■ Certes Networks

■ vArmour

Microservices-based

■ ShieldX

Host-based

■ Alcide

■ Cisco (Tetration)

■ CloudPassage

■ Cloudvisory

■ Edgewise

■ GuardiCore

■ Illumio

■ Unisys

Container-centric

■ Alcide

■ Aporeto

■ Aqua Security

■ NeuVector

■ Tigera

■ Twistlock

IaaS built-in segmentation

■ Amazon Web Services (AWS)

■ Microsoft Azure

API-based

■ AlgoSec

■ Cloudvisory (also has an optional agent)

■ Tufin

Note 4
Google’s BeyondCorp
Google has adopted an SDP internally in a model it calls BeyondCorp (https://beyondcorp.com/). Although Google doesn’t sell an offering here commercially, it is an advocate of a zero trust, software-defined perimeter approach. Google makes the SDP capabilities available to its customers as a service for accessing applications in the Google Cloud Platform (GCP), under the name Cloud Identity-Aware Proxy (https://cloud.google.com/iap/) (Cloud IAP). Google Cloud IAP cannot be used to access applications outside of GCP.

Note 5
Software-Defined Perimeter Vendors
■ Akamai (acquired Soha Systems)

■ BlackRidge

■ Cato Networks

■ Certes Networks

■ Cisco (acquired Duo Beyond)

■ Cyxtera (formerly Cryptzone)

■ Dispel

■ Google (BeyondCorp)

■ Luminate

■ Meta Networks

■ Okta (acquired ScaleFT)

■ Perimeter 81

■ Safe-T

■ SAIFE

■ Trusted Knight

■ Unisys

■ Verizon (acquired the SDP assets of Vidder 6)

■ Waverley Labs

■ Zentera Systems

■ Zscaler

© 2018 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. and
its affiliates. This publication may not be reproduced or distributed in any form without Gartner's prior written
permission. It consists of the opinions of Gartner's research organization, which should not be construed as
statements of fact. While the information contained in this publication has been obtained from sources believed
to be reliable, Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such
information. Although Gartner research may address legal and financial issues, Gartner does not provide legal or
investment advice and its research should not be construed or used as such. Your access and use of this
publication are governed by Gartner’s Usage Policy. Gartner prides itself on its reputation for independence and
objectivity. Its research is produced independently by its research organization without input or influence from
any third party. For further information, see "Guiding Principles on Independence and Objectivity."
