
What Did They Say about VMware's NSX and Cisco's ACI?
Not sure whether to choose Cisco's ACI or VMware's NSX? It's not an easy call: each has its own advantages, and the right choice may depend on your network requirements. Many experts have shared their opinions on the differences between VMware's NSX and Cisco's ACI SDN strategies, so in this article we have collected some of the main reviews from the professionals.
What is VMware NSX?
Brad Hedlund, VMware engineering architect, described the goal of NSX succinctly: "We want you to be able to deploy a virtual network for an application at the same speed and operational efficiency that you can deploy a virtual machine."
NSX tackles this lofty goal by provisioning hypervisor virtual
switches to meet an application's connectivity and security needs.
Virtual switches are connected to each other across the physical
network using an overlay network, which is no mean feat.
So how does VMware accomplish this? There are several key
elements, all of which revolve around a distributed virtual switch
(vSwitch).
Sitting at the network edge in the hypervisor, the vSwitch handles
links between local virtual machines. If a connection to a remote
resource is required, the vSwitch provides access to the physical
network. More than just a simple bridge, the NSX vSwitch is also a
router, and if needed, a firewall.
If the vSwitch is the heart of the NSX solution, the NSX controller is
the brain. Familiar in concept to those who are comfortable with
SDN architectures, the NSX controller is the arbiter of applications
and the network. The controller uses northbound APIs to talk to
applications, which express their needs, and the controller programs
all of the vSwitches under NSX control in a southbound direction to
meet those needs. The controller can talk OpenFlow for those
southbound links, but OpenFlow is not the only part of the solution,
or even a key one. In fact, VMware de-emphasizes OpenFlow in
general.
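To make the northbound/southbound split more concrete, here is a minimal, purely illustrative sketch in Python. None of the class or method names come from VMware's actual NSX API; they are invented to show how an application's intent flows through a controller down to the edge vSwitches.

# Hypothetical sketch of the northbound-intent / southbound-programming split.
# None of these names come from VMware's NSX API; they only illustrate the concept.

class NetworkIntent:
    """What an application asks for on the controller's northbound side."""
    def __init__(self, app_name, segment_id, allowed_ports):
        self.app_name = app_name            # e.g. "web-tier"
        self.segment_id = segment_id        # logical segment for the app
        self.allowed_ports = allowed_ports  # e.g. [80, 443]

class VSwitch:
    """Edge enforcement point in each hypervisor."""
    def __init__(self, host):
        self.host = host

    def program_segment(self, segment_id):
        print(f"{self.host}: attach local VMs to segment {segment_id}")

    def program_firewall(self, ports):
        print(f"{self.host}: permit TCP ports {ports} at the edge")

class Controller:
    """Central brain: turns intent into per-vSwitch programming."""
    def __init__(self, vswitches):
        self.vswitches = vswitches          # southbound-managed edge switches

    def apply(self, intent):
        for vswitch in self.vswitches:
            # Southbound push: could be OpenFlow or another channel;
            # the transport is an implementation detail.
            vswitch.program_segment(intent.segment_id)
            vswitch.program_firewall(intent.allowed_ports)

controller = Controller([VSwitch("esx-01"), VSwitch("esx-02")])
controller.apply(NetworkIntent("web-tier", segment_id=5001, allowed_ports=[80, 443]))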
With NSX, the controller could run as a redundant cluster of virtual
machines in a pure vSphere environment, or in physical appliances
for customers with mixed hypervisors.

A distributed firewall is another key part of NSX. In the NSX model, security is done at the network edge in the vSwitch. Policy for this distributed firewall is managed centrally. Conceptually, the NSX distributed firewall is like having many small firewalls, but without the burden of maintaining many small firewall policies.
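As a rough illustration of "one policy, many enforcement points" (a hypothetical sketch, not VMware's implementation), the snippet below keeps a single centrally defined rule list and has every hypervisor's vSwitch evaluate it locally against its own VMs' traffic.

# Hypothetical illustration of a centrally managed, distributed firewall.
# The rule format and function names are invented; NSX's real policy model differs.

CENTRAL_POLICY = [
    # (source_group, dest_group, protocol, port, action)
    ("web", "app", "tcp", 8443, "allow"),
    ("any", "db",  "tcp", 3306, "deny"),
]

def evaluate(policy, src_group, dst_group, proto, port):
    """Run the same central rule list at every enforcement point."""
    for rule_src, rule_dst, rule_proto, rule_port, action in policy:
        if (rule_src in (src_group, "any")
                and rule_dst in (dst_group, "any")
                and rule_proto == proto
                and rule_port == port):
            return action
    return "deny"  # default-deny at the edge

# Every hypervisor vSwitch enforces the identical policy against its local VMs,
# so there is one policy to maintain but many places where it is applied.
print(evaluate(CENTRAL_POLICY, "web", "app", "tcp", 8443))  # allow
print(evaluate(CENTRAL_POLICY, "web", "db",  "tcp", 3306))  # deny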
Overlay protocols create the virtual network segments. VMware's choice to support multi-hypervisor environments means it also supports multiple overlays: Virtual eXtensible LAN (VXLAN), Stateless Transport Tunneling (STT) and Generic Routing Encapsulation (GRE). NSX builds a virtual network by taking traditional Ethernet frames and encapsulating (tunneling) them inside an overlay packet. Each overlay packet is labeled with a unique identifier that defines the virtual network segment.
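To picture what that encapsulation looks like on the wire, here is a short Python sketch that builds a VXLAN header following the RFC 7348 layout; the frame contents and segment number are placeholders, and NSX's own implementation details may differ.

import struct

def vxlan_encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Wrap an Ethernet frame in a VXLAN header (RFC 7348 layout).

    The 24-bit VNI is the unique identifier that defines the virtual
    network segment; the result would then ride inside an outer
    UDP (port 4789) / IP packet between hypervisor tunnel endpoints.
    """
    flags = 0x08  # "I" bit set: the VNI field is valid
    vxlan_header = struct.pack(
        "!BBHI",
        flags,                        # 8-bit flags
        0, 0,                         # 24 reserved bits
        (vni << 8) & 0xFFFFFF00,      # 24-bit VNI + 8 reserved bits
    )
    return vxlan_header + inner_ethernet_frame

# Example: place a placeholder frame on virtual segment 5001.
overlay_payload = vxlan_encapsulate(b"\x00" * 64, vni=5001)
print(len(overlay_payload))  # 8-byte VXLAN header + original frame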
Of course, not all networks would know what to do with NSX-defined
virtual networks. To connect non-NSX networks to NSX environments
and vice-versa, traffic passes through an NSX gateway, described by
VMware as the "on ramp/off ramp" into or out of logical networks.
Multi-hypervisor support is an important part of the NSX strategy,
adding, as it does, Citrix Xen and KVM users to the mix. In fact, NSX
is agnostic to many environment elements, including network
hardware, which is an important attribute. From a network
engineering perspective, this is critical to understand.

Hedlund put it this way: "When you put NSX into the picture with network virtualization, you're separating the virtual infrastructure from the physical topology. With the decoupling and the tunneling between hypervisors, you don't necessarily need to have Layer 2 between all of your racks and all of your VMs. You just need to have IP connectivity. You could keep a Layer 2 network if that's how you like to build. You could build a Layer 3 fabric with a Layer 3 top-of-rack switch connected to a Layer 3 core switch, providing a scale-out, robust, ECMP IP forwarding fabric. Now the Layer 2 adjacencies, the logical switching and the routing is all provided by the programmable vSwitch in the hypervisor."
In other words, the network hardware does not have to use MPLS, 802.1Q VLANs, VRFs, or other network abstractions to create securely separated, multi-tenant networks. Instead, the NSX-controlled vSwitch handles this by tunneling hypervisor-to-hypervisor traffic in an overlay. The underlying network's responsibility is merely to forward the overlay traffic.
For engineers thinking this forwarding model through, broadcast,
multicast, and unknown unicast (BUM) traffic that requires flooding
might seem to pose a problem, as BUM frames would be hidden
from the underlying network hardware by the overlay. Hedlund says that, "at the edge hypervisor, we have visibility into all of the end hosts. When a VM turns on, we know its IP address and MAC address right away. We don't have to glean that or learn that through networking protocols." Since all the endpoints are known to NSX, there's no requirement for unknown unicast flooding. Multicast and broadcast packets are copied from hypervisor to hypervisor.
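A simplified way to model this behavior (hypothetical code, not NSX internals): because every VM's MAC address and its hosting tunnel endpoint are already known, known unicast frames are tunneled to exactly one destination VTEP, while broadcast and multicast frames are replicated to each remote VTEP rather than flooded through the underlay.

# Hypothetical sketch of forwarding with a fully populated MAC-to-VTEP table.
# Table contents and function names are invented for illustration.

MAC_TO_VTEP = {
    "00:50:56:aa:01:01": "10.0.0.11",   # VM on hypervisor A
    "00:50:56:aa:02:02": "10.0.0.12",   # VM on hypervisor B
}
ALL_REMOTE_VTEPS = sorted(set(MAC_TO_VTEP.values()))

BROADCAST = "ff:ff:ff:ff:ff:ff"

def forward(dest_mac, frame):
    """Return the list of VTEP IPs the encapsulated frame is sent to."""
    if dest_mac == BROADCAST or dest_mac.startswith("01:"):  # simplified multicast check
        # Broadcast/multicast: copy the frame hypervisor-to-hypervisor
        # (head-end replication) instead of flooding the underlay.
        return ALL_REMOTE_VTEPS
    # Known unicast: every endpoint is known, so no unknown-unicast flooding.
    return [MAC_TO_VTEP[dest_mac]]

print(forward("00:50:56:aa:02:02", b""))  # one tunnel destination
print(forward(BROADCAST, b""))            # replicated to all remote VTEPs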
Overlays are not all there is to the NSX network virtualization message, though. Scott Lowe, VMware engineering architect, says one of the huge value-adds for NSX is that "we can now bring L4-L7 network services into the virtual networks and be able to provide these services and instantiate them and manage them as part of that virtual network."
And by L4-L7 network services, he means distributed firewalls and load-balancers. As a part of NSX, VMware offers these additional components because they allow for greater network efficiency. In
traditional network models, centralized firewalls and load-balancers
must have traffic steered to them for processing. For host-to-host
traffic contained within a data center, this means the direct path
between hosts must be ignored in favor of the host-to-host path that
includes the network appliance.
NSX addresses this issue by placing these services in line at the
network edge, as a part of the hypervisor vSwitch traffic flow.
What's more, these services are managed by the NSX controller, reducing the elements a network operator is responsible for.

Despite the availability of NSX's L4-L7 services, VMware recognized that customers might want additional capabilities, so NSX will include support for third-party appliances. "We're not going to try to be the best load-balancer in the world or the best firewall in the world and beat everybody at features," Hedlund says. "We're going to try and provide 80% - most of the features a customer would deploy. But if there's that extra feature you need from a specific firewall or load-balancer, we want to provide a platform for those to be integrated in."
Indeed, VMware announced NSX with a budding partner ecosystem,
listing Arista, Brocade, Cumulus, Palo Alto Networks, Citrix, F5,
Symantec, and several others as vendors with products that
integrate into the NSX environment.
Despite a robust network virtualization platform and existing
customers dating back to the Nicira days, NSX has its critics. A chief
concern expressed by the engineering community surrounds NSX's lack of communication with network switching hardware; NSX relies heavily on vSwitch programmability to fulfill its network virtualization goals.
While VMware has done its best to contradict this notion, the fact
remains that NSX simply does not have specific insight into all of the
network hardware forming the underlay fabric the NSX overlay rides
on, which has implications for everything from traffic engineering to
fault isolation and load distribution. That's not to say NSX has no
knowledge of the physical network, but rather that most of what
NSX does know is inferred.
VMware's official blog site goes into depth explaining that NSX can
help isolate problem domains to point administrators in the right
direction when troubleshooting an application problem, including a
problem with the physical network. But to NSX, the physical
underlay network is largely a cloud where tunnel packets enter on
one side and exit on another.
In addition to the network hardware criticism, early reports from
organizations exploring NSX cite pricing as an adoption barrier.
VMware and 80% owner EMC are no strangers to complex SKU build
sheets and costly licensing schemes that make IT organizations
wince, and reportedly NSX is no exception. That said, folks within
VMware say they are aware of customer concerns in this area and
wish to avoid another "vTax" public relations debacle. Suffice it to
say, potential NSX customers need to stay tuned in this area.

Cisco ACI
The name Cisco chose for its SDN effort, Application Centric Infrastructure (ACI), is significant because it sends a message. With ACI, Cisco is focused on shaping network infrastructure to the needs of specific network applications.
Does that include network virtualization? Certainly. But with ACI, network virtualization isn't the whole story. Rather, ACI is an entire SDN solution wrapped around the idea that IT applications are the most important thing in an organization.
In that sense, it's difficult to compare NSX and ACI directly. While there is some functional overlap between NSX and ACI, ACI doesn't merely answer the question, "How can a network be virtualized?" Rather, ACI answers the question, "How can networking be transformed to revolve around an application's needs?"
As complex and nuanced a solution as NSX is, ACI is both broader in scope and more novel in approach. An organization could conceivably run NSX over ACI, but not the other way around.
All of that said, ACI as an entire solution isn't shipping yet. The ideas are all there. Significant amounts of code have been written. Product components have been named. But for customers, ACI doesn't really
exist. Customers who invest in available ACI components are
investing in roadmaps that promise a complete ACI solution
delivered over the 2014 calendar year.

Availability caveats aside, Cisco has spent a great deal of time describing ACI's vision to the network community. The solution is complex, with many elements working together to rethink how networking is accomplished.
The most tangible element of the ACI platform is the Nexus 9000 switch line, which is shipping today. The 9000 switches are high-density 10GbE and 40GbE switches built on the idea of "merchant plus" silicon, as in merchant silicon plus custom Cisco ASICs. The merchant silicon is Broadcom Trident II, used by several other switch suppliers. The custom ASICs are used to aid in ACI service delivery, but the details about how and why have not yet been released by Cisco.
The Application Policy Infrastructure Controller (APIC) translates
application policies for security, segmentation, prioritization, etc.
into network programming. Cisco delivers APIC in a physical form
factor with redundancy options, since delivering APIC as a virtual
machine would present a chicken-and-egg problem. Mike Dvorkin, chief scientist and co-founder of Insieme Networks, makes the point that, "For the [ACI] fabric to bootstrap, you need APIC. But for APIC to be installed and powered on as a VM, you'd need the fabric."

As with many SDN models, APIC sits in between applications and the
network, translating what applications need into a network
configuration meeting those needs. Cisco says that APIC is open, in
that the APIs to access APIC data are to be made available to
anyone wishing to write to them. In fact, customers will be able to
download open device packages that allow network hardware not
currently part of an ACI infrastructure to be exposed to APIC.
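For a sense of what programming policy through a controller's REST interface can look like, here is a hedged Python sketch; the URL paths and object class names (aaaLogin, fvTenant) follow the REST object model Cisco later shipped with APIC and should be read as assumptions rather than a description of the product as announced.

# Hedged sketch of talking to an APIC REST endpoint; paths and class names
# are assumptions based on the API Cisco subsequently shipped.
import requests

APIC = "https://apic.example.com"       # placeholder controller address

session = requests.Session()
session.verify = False                   # lab-only: skip certificate checks

# 1. Authenticate and obtain a session cookie.
login_payload = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login_payload)

# 2. Push a piece of policy: create a tenant object under the policy universe.
tenant_payload = {"fvTenant": {"attributes": {"name": "ExampleTenant"}}}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant_payload)
print(resp.status_code)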
A new Cisco virtual switch, called the Application Virtual Switch (AVS), supports multiple hypervisors and extends ACI's programmatic network control into the virtualization layer. While the Nexus 9000 products are the physical switches ACI will be programming, AVS is the virtual switch. Customers of Cisco's Nexus 1000V virtual switch should be aware, however, that AVS is a different piece of software, and a migration will be necessary for environments desiring a wholesale commitment to ACI.
As with NSX, an overlay is a key element of the solution, in
this case VXLAN. However, while NSX uses overlays to connect
hypervisors no matter where they are in the network, ACI uses
VXLAN in a way most customers will never see. In ACI, VXLAN is a
transport that carries traffic between Nexus 9000 leaf and spine
switches. Cisco has tweaked VXLAN slightly, using a proprietary
extension to label the VXLAN header in a way that's useful to the Nexus 9000 hardware, but one that is otherwise transparent to network operators.
As with NSX, multiple hypervisors are supported, including those from Microsoft, VMware, Red Hat and Citrix. With multi-hypervisor support, VMware and Cisco have recognized that customers don't want to be locked into specific virtualization platforms, but still want to be able to automate their network virtualization.
A major difference between ACI and NSX is that Cisco is
emphasizing hardware in addition to software. Software by itself
won't cut it, in the Cisco point of view. Frank D'Agostino, senior director at Insieme (now Cisco), says, "We're going to deliver a platform that's relevant to the application; whether it's physical, virtual, a Linux container or legacy, we need to accommodate all of that."
D'Agostino says, "The battle isn't about a vSwitch or a physical switch. The battle is about how you do service enablement on top of these things, and how easy it is to stand up these things and audit them after day one."
Although some pundits mock ACI as "hardware-defined networking," that criticism perhaps misses the point. Even for those who wish to de-emphasize hardware through commoditization, the fact remains that network hardware must be provisioned, monitored and optimized, as well as updated to cope with changing network needs. No amount of decoupling can change that fact. Cisco, in keeping with its business model, has embraced hardware's continuing importance, placing hardware squarely in the middle of the ACI value proposition. "We know in the fabric, on a hop-by-hop and packet-by-packet basis, such a level of detail that we can start doing traffic engineering differently," D'Agostino says.
That's a claim NSX cannot make.
With the integration of APIC-controlled hardware and software, Cisco plans to deliver with ACI a network infrastructure driven by policy. Policy is created in part through the use of End Point Groups (EPGs). The idea is to create EPGs that are a useful collection of server, service, virtualization, or network attributes describing an application, not just the IP addresses and port numbers network engineers are used to.
Once the EPG is defined, ACI applies policy that governs the traffic flowing between EPGs. According to Joe Onisick, technical marketing engineer with Insieme (now Cisco), "We group end points together for the enforcement of policy, and use the EPG as our policy instantiation point... We instantiate our policy between groups based on the connectivity graph that we draw within the application network profile."
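To illustrate the idea of policy applied between groups rather than between individual addresses, here is a small hypothetical Python model (not Cisco's actual object model) in which endpoints are collected into EPGs and a contract permits specific traffic from one EPG to another.

# Hypothetical model of EPGs and contracts between them; names and structure
# are illustrative only, not Cisco's actual ACI object model.
from dataclasses import dataclass, field

@dataclass
class EndPointGroup:
    name: str
    members: set = field(default_factory=set)   # VMs, bare-metal hosts, etc.

@dataclass
class Contract:
    """Policy instantiated between a consumer EPG and a provider EPG."""
    consumer: EndPointGroup
    provider: EndPointGroup
    allowed_tcp_ports: set

    def permits(self, src_epg, dst_epg, port):
        return (src_epg is self.consumer
                and dst_epg is self.provider
                and port in self.allowed_tcp_ports)

web = EndPointGroup("web", {"vm-web-1", "vm-web-2"})
db = EndPointGroup("db", {"baremetal-db-1"})

# Traffic is governed by the contract drawn between the groups,
# not by per-host IP addresses and port lists.
web_to_db = Contract(consumer=web, provider=db, allowed_tcp_ports={1433})
print(web_to_db.permits(web, db, 1433))   # True
print(web_to_db.permits(db, web, 1433))   # False: no contract in that direction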
This point about policy and EPGs might seem a little detailed, but it
raises a larger point that is key to understanding Cisco's philosophy
around ACI. Applications are not merely chunks of amorphous data
payload shoved into an IP packet and forwarded across a fabric.
Rather, applications can be described in a more nuanced way.
Cisco uses these nuanced EPGs as a means of not only richly
identifying applications, but also abstracting that group definition
into an object that policy can be applied to. There's real power in that concept, as it allows network operators - or application developers - to fine-tune the treatment of traffic to a degree that would be pragmatically impossible if it required doing it by hand.
Traffic treatment in the ACI model also includes secure separation.
D'Agostino makes the case that, "Each of the containers are completely isolated based on policy and based on their segment ID. And whether it's VXLAN oriented, whether it's VLAN oriented, whether it's NVGRE oriented - to us, it doesn't matter at the edge. We bring it, we isolate it based on the logical architecture or the system and based on the policy definition. We can keep complete and strict isolation with full visibility into the workloads and resource consumption of any resource that's defined for any tenant or application that's running."
With ACI then, policies that govern application communications are
pushed down into the network infrastructure by the APIC. The APIC's
interface is open such that, over time, any number of third parties
can interact with it.
Like VMware, Cisco has gone to great lengths to build a partner
ecosystem, although Cisco stresses that APIC is an open platform,
implying that partnerships are not exclusive relationships. BMC,
Citrix, Embrane, F5, Microsoft, NetApp, PuppetLabs, Red Hat, Splunk
and several others are already listed as working with Cisco on ACI
integration of a variety of applications.
Cisco is sometimes criticized for the high cost of its solutions, but has made a point of keeping ACI acquisition costs low. Capex for the
Nexus 9000 switches is reportedly quite reasonable. Within Cisco,
the 9000s are seen as a viable migration path from the aged
Catalyst 6500 platform.
In conjunction with the Nexus 9000 switching products that offer high-density 40G Ethernet, Cisco has introduced a 40GbE BiDi LC-terminated optic that allows 40GbE to run over a single pair of multimode OM3-grade fiber. As most 40GbE optics require 12 strands of fiber, the BiDi strategy gives customers a migration path from 10GbE to 40GbE that doesn't require a complete overhaul of their fiber cabling plant. Cisco customers making an investment in Nexus 9000 switches to build their ACI foundation can cost-effectively move to 40GbE at the same time.
Customers invested in the Nexus 7000 product line will be glad
to know that ACI support is roadmapped for the latter half of 2014.
The obvious downside of ACI is that it requires compatible network
hardware to do what it does. While ACI appears to be one of the
most complete architectural approaches yet to software defined
networking, even if ACI wins significant mindshare, implementation
will be slow as ACI depends on the right hardware to function.
Most network gear has a five- to seven-year life, so even with
reasonable acquisition costs, many organizations still depreciating
recent hardware purchases are going to find ACI a tough sell. The
promised Nexus 7000 integration with ACI will go a long way to
speeding up ACI adoption, if Cisco can pull off the integration
successfully.

Philosophically, NSX and ACI are rather different. On the one hand, NSX touts rich virtual switch functionality, abstracting the network using a controller and overlays. On the other, ACI melds both hardware and software into a policy-driven network infrastructure built around the needs of specific applications.
Both approaches will impact IT operations. Are these solutions, and SDN in general, worth exploring? Yes. NSX and ACI are evidence that
software defined networking is real, providing a technological
foundation that will allow speedy, reliable application delivery for
organizations.
Sure, it'll change how engineers exercise the art of networking. But
sometimes change is good.

From http://www.networkworld.com/article/2172922/sdn/sdn-showdown--examining-the-differences-between-vmware-s-nsx-and-cisco-s-aci.html

More Related
The SDN Face-Off: VMware NSX vs. Cisco ACI
