SDN Showdown: Examining the Differences Between VMware's NSX and Cisco's ACI

Not sure whether to choose Cisco's ACI or VMware's NSX? It's not an easy decision: each has its own advantages, and the right choice depends on your network requirements. Many experts have weighed in on the differences between VMware's NSX and Cisco's ACI SDN strategies, so in this article we have collected some of the main assessments from the professionals.
What is VMware NSX?
Brad Hedlund, VMware engineering architect, described the goal of NSX succinctly: "We want you to be able to deploy a virtual network for an application at the same speed and operational efficiency that you can deploy a virtual machine."
NSX tackles this lofty goal by provisioning hypervisor virtual switches to meet an application's connectivity and security needs.
Virtual switches are connected to each other across the physical
network using an overlay network, which is no mean feat.
So how does VMware accomplish this? There are several key
elements, all of which revolve around a distributed virtual switch
(vSwitch).
Sitting at the network edge in the hypervisor, the vSwitch handles
links between local virtual machines. If a connection to a remote
resource is required, the vSwitch provides access to the physical
network. More than just a simple bridge, the NSX vSwitch is also a
router, and if needed, a firewall.
If the vSwitch is the heart of the NSX solution, the NSX controller is
the brain. Familiar in concept to those who are comfortable with
SDN architectures, the NSX controller is the arbiter of applications
and the network. The controller uses northbound APIs to talk to
applications, which express their needs, and the controller programs
all of the vSwitches under NSX control in a southbound direction to
meet those needs. The controller can talk OpenFlow for those
southbound links, but OpenFlow is not the only part of the solution,
or even a key one. In fact, VMware de-emphasizes OpenFlow in
general.
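The northbound/southbound split described above can be sketched in a few lines of Python. Everything here is hypothetical: the class and method names are invented for illustration and do not correspond to NSX's actual API.

```python
# Hypothetical sketch of an SDN controller's two interfaces.
# Northbound: applications declare intent ("these VMs may talk on this port").
# Southbound: the controller compiles intent into per-vSwitch rules.

class VSwitch:
    """Stand-in for a hypervisor virtual switch under controller control."""
    def __init__(self, name):
        self.name = name
        self.rules = []

    def program(self, rule):
        # In a real deployment this would arrive via OpenFlow or a
        # vendor-specific southbound protocol, not a method call.
        self.rules.append(rule)


class Controller:
    def __init__(self, vswitches):
        self.vswitches = vswitches

    def request_connectivity(self, src_vm, dst_vm, port):
        """Northbound API: an application expresses a need."""
        rule = {"allow": (src_vm, dst_vm), "port": port}
        # Southbound: push the compiled rule to every managed vSwitch.
        for vs in self.vswitches:
            vs.program(rule)
        return rule


switches = [VSwitch("host-a"), VSwitch("host-b")]
ctrl = Controller(switches)
ctrl.request_connectivity("web-vm", "db-vm", 5432)
```

The point of the sketch is only the shape of the flow: intent enters once at the top, and the controller fans it out to every edge vSwitch it manages.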
With NSX, the controller could run as a redundant cluster of virtual
machines in a pure vSphere environment, or in physical appliances
for customers with mixed hypervisors.
Hedlund put it this way: "When you put NSX into the picture with network virtualization, you're separating the virtual infrastructure from the physical topology. With the decoupling and the tunneling between hypervisors, you don't necessarily need to have Layer 2 between all of your racks and all of your VMs. You just need to have IP connectivity. You could keep a Layer 2 network if that's how you like to build. You could build a Layer 3 fabric with a Layer 3 top-of-rack switch connected to a Layer 3 core switch providing a scale-out, robust, ECMP IP forwarding fabric. Now the Layer 2 adjacencies, the logical switching and the routing are all provided by the programmable vSwitch in the hypervisor."
In other words, the network hardware does not have to use MPLS, 802.1Q VLANs, VRFs, or other network abstractions to create securely separated, multi-tenant networks. Instead, the NSX-controlled vSwitch handles this by tunneling hypervisor-to-hypervisor traffic in an overlay. The underlying network's responsibility is merely to forward the overlay traffic.
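The overlay idea is concrete at the packet level. As a minimal sketch, here is how a VXLAN-style encapsulation (the 8-byte header defined in RFC 7348, the same encapsulation the article later discusses for ACI) wraps an inner Ethernet frame so the underlay only ever sees hypervisor-to-hypervisor UDP traffic:

```python
import struct

VNI_TENANT_A = 5001  # 24-bit VXLAN Network Identifier, one per tenant segment


def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.

    The result travels as the payload of an outer UDP datagram
    (destination port 4789) between hypervisor tunnel endpoints; the
    physical network forwards it like any other IP traffic.
    """
    flags = 0x08 << 24                             # I-bit set: VNI field is valid
    header = struct.pack("!II", flags, vni << 8)   # VNI in the upper 24 bits
    return header + inner_frame


def vxlan_decap(packet: bytes) -> tuple:
    """Strip the header and recover (vni, inner_frame) at the far hypervisor."""
    flags, word2 = struct.unpack("!II", packet[:8])
    assert flags & (0x08 << 24), "VNI-valid flag must be set"
    return word2 >> 8, packet[8:]


frame = b"\x00" * 14 + b"tenant payload"   # dummy inner Ethernet frame
vni, recovered = vxlan_decap(vxlan_encap(frame, VNI_TENANT_A))
```

Because tenant separation lives in the VNI rather than in VLAN tags or VRFs, the underlay stays a plain IP forwarding fabric, exactly as Hedlund describes.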
For engineers thinking this forwarding model through, broadcast, unknown unicast, and multicast (BUM) traffic that requires flooding might seem to pose a problem, as BUM frames would be hidden from the underlying network hardware by the overlay. Hedlund says that "at the edge hypervisor, we have visibility into all of the end hosts. When a VM turns on, we know its IP address and MAC address right away. We don't have to glean that or learn that through networking protocols." Since all the endpoints are known to NSX, there's no requirement for unknown unicast flooding. Multicast and broadcast packets are copied from hypervisor to hypervisor.
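Hedlund's point can be made concrete with a toy sketch (the data structures and names are invented for illustration): because the controller already knows every VM's MAC address and hosting hypervisor, unknown-unicast flooding never happens, and a broadcast becomes a bounded list of hypervisor-to-hypervisor copies.

```python
# Toy endpoint table as an NSX-style controller might maintain it:
# every VM's MAC is registered at power-on, so nothing is ever "unknown".
endpoint_table = {
    "aa:bb:cc:00:00:01": "10.0.0.11",  # VM MAC -> hosting hypervisor IP
    "aa:bb:cc:00:00:02": "10.0.0.12",
    "aa:bb:cc:00:00:03": "10.0.0.12",
}


def forward_unicast(dst_mac):
    """Known-unicast lookup: returns the tunnel endpoint, never floods.

    A miss means the destination does not exist, so the frame is dropped
    rather than flooded to every port as a traditional switch would do.
    """
    return endpoint_table.get(dst_mac)


def replicate_broadcast(src_hypervisor):
    """Broadcast/multicast: copy the frame once to each other hypervisor."""
    return set(endpoint_table.values()) - {src_hypervisor}


tunnels = replicate_broadcast("10.0.0.11")
```

Note that the broadcast fan-out is per-hypervisor, not per-VM: the two VMs on 10.0.0.12 cost one copy over the wire, delivered locally by that host's vSwitch.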
Overlays are not all there is to the NSX network virtualization message, though. Scott Lowe, VMware engineering architect, says one of the huge value-adds for NSX is that "we can now bring L4-L7 network services into the virtual networks and be able to provide these services and instantiate them and manage them as part of that virtual network."
By L4-L7 network services, he means distributed firewalls and load balancers. VMware offers these components as part of NSX because placing them in the hypervisor allows for greater network efficiency. In
traditional network models, centralized firewalls and load-balancers
must have traffic steered to them for processing. For host-to-host
traffic contained within a data center, this means the direct path
between hosts must be ignored in favor of the host-to-host path that
includes the network appliance.
NSX addresses this issue by placing these services in line at the
network edge, as a part of the hypervisor vSwitch traffic flow.
What's more, these services are managed by the NSX controller, reducing the number of elements a network operator is responsible for.
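The efficiency argument can be illustrated with a minimal sketch of an edge firewall check (the rule format is invented for illustration): because the check runs in the vSwitch on the source hypervisor, permitted east-west traffic takes the direct path, and denied traffic dies before it ever touches the wire.

```python
# Minimal sketch of a distributed firewall applied in the vSwitch datapath.
# Rules are evaluated at the hypervisor edge, so there is no hairpin
# through a centralized appliance for host-to-host traffic.

RULES = [
    # (src_tier, dst_tier, dst_port, action) -- hypothetical policy format
    ("web", "app", 8080, "allow"),
    ("app", "db", 5432, "allow"),
    ("*", "*", None, "deny"),     # default deny
]


def edge_firewall(src_tier, dst_tier, dst_port):
    """Evaluate a flow at the source hypervisor's vSwitch; first match wins."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if (rule_src in ("*", src_tier)
                and rule_dst in ("*", dst_tier)
                and rule_port in (None, dst_port)):
            return action
    return "deny"
```

Contrast this with the centralized model the article describes: there, the same allow/deny decision would require steering every flow through an appliance that may sit several hops off the direct path.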
Cisco ACI
The name Cisco chose for its SDN effort, Application Centric Infrastructure (ACI), is significant because it sends a message: the application, not the network, comes first. At the heart of ACI is a controller, the Application Policy Infrastructure Controller (APIC). As with many SDN models, APIC sits in between applications and the network, translating what applications need into a network configuration meeting those needs. Cisco says that APIC is open, in that the APIs to access APIC data are to be made available to anyone wishing to write to them. In fact, customers will be able to download open "device packages" that allow network hardware not currently part of an ACI infrastructure to be exposed to APIC.
A new Cisco virtual switch, called the Application Virtual Switch (AVS), supports multiple hypervisors and extends ACI's programmatic network control into the virtualization layer. While the Nexus 9000 products are the physical switches ACI will be programming, AVS is the virtual switch. Customers of Cisco's Nexus 1000V virtual switch should be aware, however, that AVS is a different piece of software, and a migration will be necessary for environments desiring a wholesale commitment to ACI.
As with NSX, an overlay is a key element of the solution, in
this case VXLAN. However, while NSX uses overlays to connect
hypervisors no matter where they are in the network, ACI uses
VXLAN in a way most customers will never see. In ACI, VXLAN is a
transport that carries traffic between Nexus 9000 leaf and spine
switches. Cisco has tweaked VXLAN slightly, using a proprietary extension to label the VXLAN header in a way that's useful to the Nexus 9000 hardware, but is otherwise transparent to network operators.
As with NSX, multiple hypervisors are supported, including those from Microsoft, VMware, Red Hat and Citrix. With multi-hypervisor support, VMware and Cisco have recognized that customers don't want to be locked into specific virtualization platforms, but still want to be able to automate their network virtualization.
A major difference between ACI and NSX is that Cisco is emphasizing hardware in addition to software. Software by itself won't cut it, in the Cisco point of view. Frank D'Agostino, senior director at Insieme (now Cisco), says, "We're going to deliver a platform that's relevant to the application. Whether it's physical, virtual, a Linux container or legacy, we need to accommodate all of that."

D'Agostino adds, "The battle isn't about a vSwitch or a physical switch. The battle is about how you do service enablement on top of these things, and how easy it is to stand up these things and audit them after day one."
Although some pundits mock ACI as "hardware-defined networking," that criticism perhaps misses the point. Even for those who wish to de-emphasize hardware through commoditization, the fact remains that every packet must ultimately be forwarded by physical hardware.

Philosophically, NSX and ACI are rather different. On the one hand, NSX touts rich virtual switch functionality, abstracting the network away from the underlying hardware; on the other, ACI couples software control with purpose-built hardware that participates in delivering the policy.
From http://www.networkworld.com/article/2172922/sdn/sdn-showdown--examining-the-differences-between-vmware-s-nsx-and-cisco-s-aci.html