
Extreme Networks:

Building Cloud-Scale Networks


Using Open Fabric Architectures
A SOLUTION WHITE PAPER
TABLE OF CONTENTS
Introduction
An Open Fabric-Based Approach to Data Center Networks
Industry Open Initiatives
Conclusion
Abstract
Several technology inflection points are coming together that are fundamentally
changing the way networks are architected, deployed and operated, both in the
public cloud and the private cloud. Requirements are rapidly changing and driving
new approaches to building data center networks. Extreme Networks is enabling
next-generation data centers with an open, standards-based, high-speed
interconnectivity fabric architecture. This white paper outlines Extreme Networks'
Open Fabric approach to data center networking, including high-speed connectivity,
low latency, multi-path mesh-type connectivity, high resiliency, and support for
network and storage convergence. The paper goes on to explain ways to evolve your
data center network with virtualization and automation, and how to harness the
benefits of lower power consumption and other features of the Extreme Networks
architecture toward a greener data center.
Introduction
Several technology inflection points are coming together that are fundamentally
changing the way networks are architected, deployed and operated, both in
the public cloud and the private cloud. From performance, to scale, to
virtualization support, to automation and simplified orchestration, the requirements
are rapidly changing and driving new approaches to building data center networks.
On one hand, the need for network performance is ever increasing. With
computational density intensifying, the number of virtual machines (VMs) per
server is growing fast. Where earlier 4-8 VMs per server were common, today cloud
providers are moving to 16, 32 or more VMs per server. This is driving increased
bandwidth demand directly at the server-network edge. As this bandwidth demand
at the server-network edge grows, 10 GbE is expected to become the preferred
connectivity speed there, with the aggregation or core moving to 40 GbE.
Additionally, with more enterprises looking to the cloud to add capacity
on demand, traffic patterns within the cloud are assuming more east-west
characteristics, with VM-to-VM traffic within a server, within a rack, and across
racks all increasing. This growing east-west traffic mix is driving increased
requirements for low latency and high-speed connectivity within the cloud.
Finally, the confluence of network and storage traffic on a common Ethernet fabric
is driving the need for more predictable network performance in order to ensure
that storage performance is not compromised by the transient burstiness
inherent in data traffic patterns.
All of the above factors are driving toward higher performance,
lower latency, and more predictable network architectures for
both private and public clouds.
In parallel, the move to a more dynamic data center is well
underway where VMs can be moved at will within and across
data centers (indeed at times even without user intervention
based on criteria such as capacity and utilization) and users
can dynamically add and reduce capacity on-demand. This type
of dynamism is driving a strong requirement for the network
infrastructure to scale on demand, provide the capability for
customization and automation, as well as integration with server
virtualization technologies to reduce manual provisioning and
configuration. The notion of orchestrating the data center, where
server, network and storage resources can all be provisioned in
an automated manner is gaining momentum.
An Open Fabric-Based Approach to
Data Center Networks
Given the changing landscape and evolving requirements, data
center architectures are moving toward a fabric-based approach
to providing high-speed, low-latency connectivity within the data
center. Several attributes typically define a fabric-type network
architecture:
High-speed connectivity
Low latency from port to port
Multi-path mesh-type connectivity
High resiliency
However, in today's cloud environments there are several
additional attributes that become part of the network fabric.
These include:
Support for network and storage convergence
Support for virtualization, automation and customization
Ease of network provisioning, configuration
and management
Low power
One of the key attributes of cloud providers is that infrastructure
is their cost of goods sold. As such, the ability to maintain pricing
leverage remains a key driver of cloud providers' ability to reduce
the cost of their infrastructure. This leads toward an open and
interoperable approach to building network infrastructure rather
than a vendor-specific proprietary architecture. Furthermore, with
technology changing rapidly in the data center, the lock-in of
proprietary vendor-specific technology can be very costly as
technology directions shift in this dynamic landscape.
The industry is rapidly moving toward open architectures for
the cloud with several industry consortiums paving the way
by providing reference models, technologies and tools for
building large, scale-out architectures. Indeed, the technology
components for an open fabric for the cloud are
already in place. Some of these components are described below.
OPEN, STANDARDS-BASED, HIGH-SPEED
INTERCONNECTIVITY FABRIC
From a fabric interconnectivity perspective,
standards-based 10 GbE and 40 GbE interconnectivity fabrics
are fast becoming the mainstay of the data center network.
With the server edge moving to 10 GbE, the aggregation layer is
moving to 40 GbE. This requires high density 10 GbE as well as
high density, high-performance 40 GbE connectivity solutions.
Along with density and capacity, low latency and low carbon
footprint are becoming key requirements. The Extreme Networks
BlackDiamond X series chassis will offer up to 768 wire-speed
10 GbE ports or up to 192 wire-speed 40 GbE ports in just a 1/3
rack-size form factor. This level of non-blocking performance and
density is industry-leading and forms the basis for building cloud-scale
connectivity fabrics. Using this model, servers can attach directly
to a high-density 10 GbE End-of-Row (EoR) solution (such as
the BlackDiamond X series), or connect to a tier of Top-of-Rack
(ToR) switches (such as the Extreme Networks Summit X670),
with the ToR switches then connecting over multiple 40 GbE
links to the aggregation or core layer.
[Figure 1: Single-tier design. Servers dual-home via LAG or NIC teaming
into an M-LAG pair of high-density EoR chassis, supporting up to 768
10 GbE servers and up to 128,000 virtual machines.]

[Figure 2: Two-tier design. Servers NIC-team into ToR switches at 10G,
which connect over 40G M-LAG uplinks to the aggregation/core tier,
supporting up to 4,560 10 GbE servers and up to 128,000 virtual machines.]
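As a rough illustration of how these two designs scale, the sketch below works
through the arithmetic for a hypothetical two-tier build; the 48-port ToR and
4-uplink assumptions are illustrative, not vendor specifications.

```python
# Back-of-the-envelope sizing for the two designs in Figures 1 and 2.
# The ToR port counts are illustrative assumptions, not vendor specs.

def oversubscription(server_ports, server_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth at a tier."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# Figure 1 (single tier): servers attach directly to the chassis fabric,
# so up to 768 wire-speed 10 GbE servers with no uplink oversubscription.
eor_servers = 768

# Figure 2 (two tier): hypothetical 48-port 10 GbE ToR, 4 x 40 GbE uplinks.
ratio = oversubscription(48, 10, 4, 40)
print(f"ToR oversubscription: {ratio:.1f}:1")          # 3.0:1

# 192 x 40 GbE core ports / 4 uplinks per ToR = 48 ToRs per core chassis;
# 48 ToRs x 48 servers = 2,304 servers in this particular sketch.
print(f"Servers per core chassis: {(192 // 4) * 48}")
```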
Most of the open, standards-based solutions coming on the
market today for high-speed interconnectivity also provide low
latency cut-through switching capabilities. For example, the
Summit X670 ToR switch offers latency of around 800-900 nsec,
while the BlackDiamond X chassis will offer port-to-port latency
of well below 3 µsec. This combination allows building single-tier
or two-tier network fabrics that offer very low end-to-end latency.
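To make the latency budget concrete, here is a minimal sketch that adds up the
per-hop figures quoted above for a worst-case path in each design; the numbers
are the approximate values from the text, not measured results.

```python
# Approximate end-to-end latency budget for the one- and two-tier fabrics,
# using the per-hop figures quoted in the text above.
TOR_NSEC = 900      # Summit X670: ~800-900 nsec cut-through
CORE_NSEC = 3_000   # BlackDiamond X: well below 3 usec port-to-port

one_tier = CORE_NSEC                        # server -> chassis -> server
two_tier = TOR_NSEC + CORE_NSEC + TOR_NSEC  # ToR -> core -> ToR worst case

print(f"One-tier worst case: {one_tier / 1000:.1f} usec")
print(f"Two-tier worst case: {two_tier / 1000:.1f} usec")
```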
To address resiliency, where dual uplinks are used from the
ToR to the aggregation tier, solutions such as M-LAG may be
used for active-active redundancy. Similarly, if servers need
to be dual-homed to the ToR or EoR tier, NIC teaming can be
used in combination with an M-LAG-type approach for
active-active redundancy. While M-LAG itself is proprietary, the tier
that dual-homes into the M-LAG switches simply uses standard
link aggregation. For example, servers can use link aggregation
(or NIC teaming as it is commonly called) to dual-home into
two ToR switches, which present themselves as a single switch
to servers via M-LAG. (Reference: Exploring New Data Center
Network Architectures with Multi-Switch Link Aggregation
(M-LAG)). If a true multi-homed architecture is to be used,
for example where four uplinks connect to four different
switches, a standards-track protocol such as TRILL (Transparent
Interconnection of Lots of Links) may be used to provide
Layer 2 multipath capability. However, with data centers
typically dual-homing connections at each layer, an M-LAG-type
approach should suffice.
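To illustrate why LAG and M-LAG preserve per-flow packet ordering while still
spreading load across both uplinks, the sketch below shows the general idea of
hash-based member selection; the hash fields and function are illustrative, as
real switches use silicon-specific hash algorithms.

```python
# A minimal sketch of per-flow link selection in a LAG (or M-LAG): a hash
# over the packet's address fields picks one member link, so a single flow
# stays in order while different flows spread across the uplinks.
import hashlib

def lag_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
               num_links: int) -> int:
    """Pick a LAG member link from a hash of the flow's address fields."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return hashlib.sha256(key).digest()[0] % num_links

# Two uplinks from a ToR into an M-LAG pair: flows split across links 0 and 1.
for flow in [("10.0.0.1", "10.0.1.1", 40000, 80),
             ("10.0.0.2", "10.0.1.1", 40001, 80)]:
    print(flow, "-> link", lag_member(*flow, num_links=2))
```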
In addition to high-speed, low-latency, and high-density fabric
connectivity, many of the interoperable standards-based
solutions on the market today are also low carbon footprint
switching infrastructures. This is particularly important as the
cloud network transitions from a 1 GbE edge to a 10 GbE edge
and 40 GbE core, since the power footprint of the network
increases significantly with this transition. Where earlier 10 GbE
ports would consume 10W-30W per port, today that number is
dropping rapidly to around 3W-10W per port. For example, the
BlackDiamond X chassis will consume around 5W per
10 GbE port.
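A quick calculation, using the per-port figures quoted above, shows why this
transition matters at fabric scale:

```python
# Rough power comparison for a 768-port 10 GbE fabric, using the per-port
# figures quoted in the text (illustrative, not measured).
PORTS = 768
legacy_w_per_port = 20   # mid-range of the earlier 10-30 W generation
current_w_per_port = 5   # figure quoted for the BlackDiamond X chassis

print(f"Legacy fabric : {PORTS * legacy_w_per_port / 1000:.1f} kW")
print(f"Current fabric: {PORTS * current_w_per_port / 1000:.1f} kW")
```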
OPEN APPROACH TO VIRTUAL
MACHINE SWITCHING
The broad adoption of server virtualization has been instrumental
in enabling the cloud model to gain acceptance. However, along
with the benefits of virtualization comes a set of challenges. For
example, addressing VM switching through virtual switches gives
rise to several challenges. From the complexity of dealing with
multiple hypervisor technologies, to providing security between
VMs within the hypervisor, to software (CPU)-based switching
between VMs that could lead to unpredictable performance, the
list of potential issues is long. The IEEE 802.1Qbg working
group is looking at this problem and is defining new forwarding
modes that allow switching VM traffic directly in the network
switch. (Reference: VEPA: An Answer to Virtual Switching). The
ability to support this new forwarding mode in the network
infrastructure provides an open standards track approach to
simplifying VM switching. For example, data center networking
products from Extreme Networks support the ability to switch
VMs in hardware at wire speed. The Summit X670 as well as the
BlackDiamond X series products will be able to switch up to 128k
VMs in hardware at wire speed. The ability to leverage standards
track technology provides investment protection without the
lock-in associated with proprietary, vendor-specic technologies.
In addition, the ability to address VM configuration and mobility
in a hypervisor agnostic manner is also becoming important.
The IEEE 802.1Qbg working group is looking at extensions that
address some of the network mobility challenges around VMs
through the concept of defining network profiles associated
with VMs, which can then move as a VM moves. Extreme
Networks' XNV technology supports the notion of Virtual
Port Profiles (VPPs) for VMs, which can be applied to VMs
on any hypervisor. This provides a hypervisor-agnostic way of
provisioning network characteristics for VMs. In addition, XNV
can also automatically migrate VPPs and enforce them on any
target switch to which the VM moves.
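The following sketch illustrates the general idea of a port profile that follows
a VM between switches; the class and field names here are hypothetical
illustrations, not Extreme Networks' XNV API.

```python
# A minimal sketch of the Virtual Port Profile (VPP) idea: a network profile
# keyed by VM identity that is re-applied wherever the VM moves.
# Names and fields are illustrative, not a vendor API.
from dataclasses import dataclass, field

@dataclass
class PortProfile:
    vlan: int
    qos_class: str
    acl: list = field(default_factory=list)

class ProfileManager:
    def __init__(self):
        self.profiles = {}   # VM MAC -> PortProfile
        self.location = {}   # VM MAC -> current switch

    def attach(self, vm_mac: str, switch: str, profile: PortProfile):
        self.profiles[vm_mac] = profile
        self.location[vm_mac] = switch

    def migrate(self, vm_mac: str, target_switch: str) -> PortProfile:
        """On VM move, re-apply the same profile on the target switch."""
        self.location[vm_mac] = target_switch
        return self.profiles[vm_mac]

mgr = ProfileManager()
mgr.attach("00:16:3e:aa:bb:cc", "tor-1", PortProfile(vlan=100, qos_class="gold"))
print(mgr.migrate("00:16:3e:aa:bb:cc", "tor-7"))   # same profile, new switch
```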
OPEN APPROACH TO CONVERGENCE
IN THE DATA CENTER
Much interest has focused on network and storage
convergence in the data center. Storage and network
convergence is becoming a reality today using 10 GbE or
higher-speed Ethernet in conjunction with iSCSI-based storage,
which works natively over an Ethernet-based TCP/IP infrastructure.
FCoE-based storage convergence is a little further out in terms
of its adoption and interoperability. However, in both cases, the
availability of standards-based Data Center Bridging (DCB)
technology is a key facilitator to enabling this convergence. DCB
allows partitioning of traffic into multiple traffic classes on a
common Ethernet fabric, assigns priority to those traffic
classes, and specifies bandwidth parameters for each class.
In essence, using open, standards-based DCB technology,
storage traffic can be merged onto a common data Ethernet
LAN while maintaining a degree of separation between the two.
Indeed, not just storage traffic but other classes of traffic may
also run on a common Ethernet fabric while providing isolation
between the various traffic classes. As an example, management
traffic or vMotion traffic may also be moved to a common
converged Ethernet fabric and leverage DCB capabilities
to ensure reliable and predictable traffic behavior. As such,
leveraging standards-based DCB technology provides a solid
foundation for network and storage convergence that allows
scaling deployments in the cloud while continuing to provide
the predictability, performance, and latency required to meet
storage performance. This technology is available today from
several vendors, across switches, initiators and targets. Extreme
Networks has been shipping DCB-enabled infrastructure
for some time now and has participated in various industry
interoperability forums focused around DCB.
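As a simple illustration of the bandwidth-partitioning idea (the ETS component
of DCB), the sketch below divides a 10 GbE link among traffic classes; the
class names and shares are illustrative assumptions, not a recommended policy.

```python
# A minimal sketch of ETS-style bandwidth partitioning under DCB: each
# traffic class gets a guaranteed share of the link, and unused share can
# be borrowed by the others. Shares below are illustrative.
LINK_GBPS = 10
shares = {"storage (iSCSI)": 0.50, "vMotion": 0.20, "data LAN": 0.30}

def guaranteed_gbps(shares: dict, link_gbps: float) -> dict:
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {tc: pct * link_gbps for tc, pct in shares.items()}

for tc, gbps in guaranteed_gbps(shares, LINK_GBPS).items():
    print(f"{tc:>15}: {gbps:.1f} Gbps guaranteed")
```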
LEVERAGING OPENFLOW TO SIMPLIFY
CLOUD-SCALE PROVISIONING
OpenFlow is a relatively new industry-backed technology that
centralizes the intelligence in the network while keeping the
data path distributed. By centralizing the intelligence, OpenFlow
provides a platform upon which a diverse set of applications
can be built and used to program, provision and manage the
network in a myriad of different ways. Within the context of
a data center fabric, OpenFlow holds the promise of taking
complex functions such as traffic provisioning in converged data
center networks, logical network partitioning in public and hybrid
cloud environments, as well as user and VM provisioning in highly
virtualized data centers, providing a centralized and simplied
approach to addressing all of these at scale.
OpenFlow is in its early stages in terms of applications and
adoption. It will take time for the technology to mature. But,
it holds a lot of promise as a platform upon which smart
applications can be built for the next generation data center
fabric. In effect, OpenFlow provides an open source platform
upon which users can customize, automate, and innovate to
provide new ways to address some of the challenges in the data
center. It is important to note that the benefits of OpenFlow
are not limited to data center and cloud scale networks. Indeed,
applications are being built on the enterprise and campus side
of the network as well using OpenFlow technology. But within
the context of the data center and the cloud infrastructure,
OpenFlow holds particular promise for both simplifying and
automating complex provisioning tasks and easing the burden
on network administrators.
There are two pieces to the OpenFlow solution. The first is the
availability of switching and fabric infrastructure that supports
the OpenFlow protocol to provision flow entries in the fabric. The
second is the OpenFlow controller, which controls and programs
the OpenFlow switching infrastructure. OpenFlow controllers are
being made available through open source initiatives, as well as
through commercial vendors who are building specific solutions
using their own OpenFlow controllers. Extreme Networks
is implementing OpenFlow technology in its data center
switching infrastructure and working with a variety of
different OpenFlow controllers to bring unique and differentiated
solutions to market.
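The sketch below illustrates this controller/switch split in the abstract: the
controller computes match-action flow entries and pushes them down to switches.
The data structures are illustrative stand-ins, not the API of any particular
OpenFlow controller.

```python
# A minimal sketch of the OpenFlow split: a centralized controller computes
# flow entries (match -> actions) and installs them on distributed switches,
# which then forward matching packets in hardware.
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict      # header fields to match, e.g. ingress port, VLAN
    actions: list    # e.g. output to a port, rewrite a VLAN tag
    priority: int = 100

class Controller:
    def __init__(self):
        self.switches = {}   # switch id -> installed flow entries

    def push_flow(self, dpid: str, entry: FlowEntry):
        """Install a flow entry on switch 'dpid' (stand-in for a FLOW_MOD)."""
        self.switches.setdefault(dpid, []).append(entry)

ctrl = Controller()
ctrl.push_flow("tor-1", FlowEntry(match={"in_port": 1, "dl_vlan": 100},
                                  actions=["output:48"]))
print(ctrl.switches["tor-1"])
```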
Industry Open Initiatives
With so many different technology inflection points coming
together in parallel, the acceptance and adoption of open,
interoperable technologies in the cloud is gaining momentum.
Not just within the networking space, but across servers,
networking and storage. While there have been several industry
driven open initiatives, two of them stand out as getting
significant mindshare.
OPEN NETWORKING FOUNDATION
The Open Networking Foundation (ONF) is a newly formed
organization that is focused on technology for software-defined
networking, of which OpenFlow is a key technology. The member
list of the ONF spans a broad spectrum of both the producers
(i.e., vendors) and the consumers of networking technology.
Software-defined networking and OpenFlow hold the promise of
revolutionizing the networking industry with an open-source,
industry-backed platform upon which innovative solutions that
address key industry problems can be built.
Extreme Networks is a member of ONF. Extreme Networks
currently plans to implement OpenFlow technology in its
operating system ExtremeXOS for its data center line of
switches and also participated in a live OpenFlow demonstration
at the 2011 Las Vegas Interop Conference. Extreme Networks
will work with a variety of OpenFlow controllers from open
source controllers to commercial controllers to bring to market
solutions that address a variety of needs such as virtual network
provisioning for cloud customers.
OPENSTACK
OpenStack is an open source software development community
that delivers a very scalable cloud operating system. The
OpenStack community has three parallel tracks:
OpenStack Compute: OpenStack Compute is open source
software designed to provision and manage large networks
of VMs, creating a redundant and scalable cloud computing
platform. It provides the software, control panels, and APIs
required to orchestrate a cloud, including running instances,
managing networks, and controlling access through users
and projects. OpenStack Compute strives to be both
hardware and hypervisor agnostic, currently supporting a
variety of standard hardware configurations and the major
hypervisors.
OpenStack Object Storage: OpenStack Object Storage
is open source software for creating redundant, scalable
object storage using clusters of standardized servers to
store petabytes of accessible data.
OpenStack Image Service: OpenStack Image Service
provides discovery, registration, and delivery services for
virtual disk images.
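As one concrete illustration of the Compute track, the sketch below boots an
instance through the OpenStack Compute (Nova) v2 REST API; the endpoint, token,
and image/flavor identifiers are placeholder assumptions, with real values
coming from Keystone authentication and the cloud's image and flavor catalogs.

```python
# A sketch of provisioning an instance via the OpenStack Compute (Nova)
# v2 API. Endpoint, token, and IDs below are placeholders.
import requests

NOVA = "http://cloud.example.com:8774/v2/TENANT_ID"   # placeholder endpoint
TOKEN = "AUTH_TOKEN"                                  # placeholder token

def boot_instance(name: str, image_ref: str, flavor_ref: str) -> dict:
    """POST /servers: ask Nova to provision and start a new VM."""
    body = {"server": {"name": name,
                       "imageRef": image_ref,
                       "flavorRef": flavor_ref}}
    resp = requests.post(f"{NOVA}/servers", json=body,
                         headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()
    return resp.json()["server"]

# server = boot_instance("web-01", "IMAGE_UUID", "FLAVOR_ID")
```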
Extreme Networks is part of the OpenStack community focusing on bringing
solutions to network provisioning in large cloud-scale networks. The challenge
with compute provisioning in large cloud-scale environments is that traditionally
the network has been missing from the provisioning piece. However, the network
fabric is increasingly becoming a key piece of the solution, since a user's quality of
experience depends largely on the network's ability to service that user, meet the
user's SLAs, and provide isolation, protection and security between users. As
a result, any solution to compute provisioning in the cloud now needs to include the
network fabric. Extreme Networks is working with the OpenStack community toward
an open approach to solving provisioning problems in the cloud infrastructure.
Conclusion
Several trends, from virtualization, to convergence, to power, are driving newer
architectures and technologies in the data center. Very scalable, efficient, and
high-performance fabric-based architectures can be built and deployed using
open, interoperable and industry accepted approaches. From high-density 10 GbE
to high-density 40 GbE, to support for virtualization and convergence, as well as
redundancy and multipath capability, open industry accepted solutions are gaining
a foothold in building cloud-scale architectures.
Various industry consortiums are driving the acceptance of open and interoperable
solutions. Extreme Networks' participation in these industry consortiums and its
commitment to furthering the cause of open networking technology finds further
validation in its newly announced products such as the BlackDiamond X series and
the Summit X670 which may help pave the way toward building open, fabric-based
network architectures for the cloud infrastructure.