
Sponsored by

RETHINKING THE DATA CENTER NETWORK

eGuide
At the heart of the enterprise, the data center network is the core of all corporate communications. But as
application environments mature, becoming more services-oriented, those flows are richer and more
compute-intensive and bandwidth-hungry than many legacy networks can handle efficiently. Top that challenge off
with server consolidation, server virtualization and the trend toward convergence of data and storage on a single
fabric. The pressure on the data center network is coming from all sides. Today, many enterprise IT professionals
are rethinking their traditional approaches to the network. In these articles, Network World and its sister publication
InfoWorld explore how to approach networking today, starting with the basics and moving on from there.

IN THIS eGUIDE

Everything You Need to Know About Building Solid, Reliable Networks: A networking primer on the fundamentals, from the right switches to the right network monitoring techniques

Four Trends Shape the New Data Center: IT execs adapt to a new reality as x86 virtualization transforms the data center forever

Emerging IEEE Ethernet Standards Could Soothe Data Center Headaches: Under development is a way to offload policy, security and management processing from virtual switches

10G Ethernet Shakes Net Design to the Core: Shift from three- to two-tier architectures driven by need for speed, server virtualization, unified switching fabrics

Seven Resolutions for Network Management: One analyst’s advice on how to keep your edge

Data Center Network Resources: Additional tools, tips and documentation


EVERYTHING YOU NEED TO KNOW ABOUT BUILDING SOLID, RELIABLE NETWORKS
By Paul Venezia • InfoWorld

A networking primer on the fundamentals, from the right switches to the right network monitoring techniques

While almost every part of a modern data center can be considered mission critical, the network is the absolute foundation of all communications. That’s why it must be designed and built right the first time. After all, the best servers and storage in the world can’t do anything without a solid network. To that end, here are a variety of design points and best practices to help tighten up the bottom end.

Core considerations

The term “network” applies to everything from LAN to SAN to WAN. All these variations require a network core, so let’s start there.

The size of the organization will determine the size and capacity of the core. In most infrastructures, the data center core is constructed differently from the LAN core. If we take a hypothetical network that has to serve the needs of a few hundred or a thousand users in a single building, with a data center in the middle, it’s not uncommon to find big switches in the middle and aggregation switches at the edges.

Ideally, the core is composed of two modular switching platforms that carry data from the edge over gigabit fiber, located in the same room as the server and storage infrastructure. Two gigabit fiber links to a closet of, say, 100 switch ports is sufficient for most business purposes. In the event that it’s not, you’re likely better off bonding multiple 1Gbit links rather than upgrading to 10G for those closets. As 10G drops in price, this will change, but for now, it’s far cheaper to bond several 1Gbit ports than to add 10G capability to both the core and the edge.

In the likely event that VoIP will be deployed, it may be beneficial to implement small modular switches at the edge as well, allowing Power over Ethernet (PoE) modules to be installed in the same switch as the non-PoE ports. Alternatively, deploying trunked PoE ports to each user is also a possibility. This allows a single port to be used for both VoIP and desktop access tasks.

In the familiar hub-and-spoke model, the core connects to the edge aggregation switches with at least two links, either connecting to the server infrastructure with direct copper runs or through server aggregation switches in each rack. This decision must be determined site by site,
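The closet-sizing advice above is easy to sanity-check with arithmetic. The short Python sketch below computes uplink oversubscription, total edge port capacity divided by total uplink capacity; the port and link counts come from the article’s example, but the helper function itself is just an illustration.

```python
# Uplink oversubscription for an edge closet: total edge capacity
# divided by total uplink capacity. Illustrative helper only.

def oversubscription(edge_ports: int, edge_speed_gbps: float,
                     uplinks: int, uplink_speed_gbps: float) -> float:
    """Ratio of worst-case edge demand to available uplink bandwidth."""
    return (edge_ports * edge_speed_gbps) / (uplinks * uplink_speed_gbps)

# A 100-port 1G closet with two 1G fiber uplinks: 50:1 oversubscribed,
# which is acceptable for typical office traffic.
two_links = oversubscription(100, 1.0, 2, 1.0)

# Bonding four 1G links halves the ratio without a 10G upgrade.
four_links = oversubscription(100, 1.0, 4, 1.0)

print(two_links)   # 50.0
print(four_links)  # 25.0
```

The point of the exercise: doubling the bonded uplinks halves the ratio, which is why bonding several 1Gbit ports is the cheap interim answer until 10G prices fall.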


due to the distance limitations of copper cabling. Either way, it’s cleaner to deploy server aggregation switches in each rack and run only a few fiber links back to the core than to try to shoehorn everything into a few huge switches. In addition, using server aggregation switches will allow redundant connections to redundant cores, which will eliminate the possibility of losing server communications in the event of a core switch failure. If you can afford it and your layout permits it, use server aggregation switches.

Regardless of the physical layout method, the core switches need to be redundant in every possible way: redundant power, redundant interconnections, and redundant routing protocols. Ideally, they should have redundant control modules as well, but you can make do without them if you can’t afford them.

Core switches will be responsible for switching nearly every packet in the infrastructure, so they need to be balanced accordingly. It’s a good idea to make ample use of Hot Standby Routing Protocol (HSRP) or Virtual Router Redundancy Protocol (VRRP). These allow two discrete switches to effectively share a single IP and MAC address, which is used as the default route for a VLAN. In the event that one core fails, those VLANs will still be accessible. Finally, proper use of Spanning Tree Protocol (STP) is essential to proper network operation. A full discussion of these two technologies is beyond the scope of this guide, but correct configuration of these two elements will have a significant effect on the resiliency and proper operation of any Layer-3 switched network.

Minding the storage

Once the core has been built, you can take on storage networking. Although other technologies are available, when you link servers to storage arrays, your practical choice will probably boil down to a familiar one: Fibre Channel or iSCSI?

Fibre Channel is generally faster and delivers lower latency than iSCSI, but it’s not truly necessary for most applications. Fibre Channel requires specific FC switches and costly FC HBAs in each server – ideally two for redundancy – while iSCSI can perform quite well with standard gigabit copper ports. Unless you have transaction-oriented applications such as large databases with thousands of users, you can probably choose iSCSI without affecting performance and save a bundle.

Fibre Channel networks are unrelated to the rest of the network. They exist all on their own, linked only to the main network via management links that do not carry any transactional traffic. iSCSI networks can be built using the same Ethernet switches that handle normal network traffic – although iSCSI networks should be confined to their own VLAN at the least, and possibly built on a specific set of Ethernet switches that separate this traffic for performance reasons.

Make sure to choose the switches used for an iSCSI storage network carefully. Some vendors sell switches that perform well with a normal network load but bog down with iSCSI traffic due to the internal structure of the switch itself. Generally, if a switch claims to be “enhanced for iSCSI,” it will perform well with an iSCSI load.

Either way, your storage network should mirror the main network and be as redundant as possible: redundant switches and redundant links from the servers (whether FC HBAs, standard Ethernet ports, or iSCSI accelerators). Servers do not appreciate having their storage suddenly disappear, so redundancy here is at least as important as it is for the network at large.

Going virtual

Speaking of storage networking, you’re going to need some form of it if you plan on running enterprise-level virtualization. The ability for virtualization hosts to migrate virtual servers across a virtualization farm absolutely requires stable and fast central storage. This can be FC, iSCSI, or even NFS in most cases, but the key is that all the host servers can access a reliable central storage network.

Networking virtualization hosts isn’t like networking a normal server, however. While a server might have a front-end and a back-end link, a virtualization host might have six or more Ethernet interfaces. One reason is performance: A virtualization host pushes more traffic than a normal server due to the simple fact that as many as dozens of virtual machines are running on a single host. The other reason is redundancy: With so many VMs on one physical machine, you don’t want one failed NIC to take a whole bunch of virtual servers offline at once.

To combat this problem, virtualization hosts should be constructed with at least two dedicated front-end links, two back-end links, and, ideally, a single management link. If this infrastructure will service hosts that live in semi-secure networks (such as a DMZ), then it may be reasonable to add physical links for those networks as well, unless you’re comfortable passing semi-trusted packets through the core as a VLAN. Physical separation is still the safest bet and less prone to human error. If you can physically separate that traffic by adding interfaces to the virtualization hosts, then do so.

Each pair of interfaces should be bonded using some form of link aggregation, such as Link Aggregation Control Protocol (LACP) or 802.3ad. Either should suffice, though your switch may support only one form or the other. Bonding these links establishes load-balancing as well as failover protection at the link level and is an absolute requirement, especially since you’d be hard-pressed to find a switch that doesn’t support it.

In addition to bonding these links, the front-end bundle should be trunked with 802.1q. This allows multiple VLANs to exist on a single logical interface and makes deploying and managing virtualization farms significantly simpler. You can then deploy virtual servers on any VLAN or mix of VLANs on any host without worrying about virtual interface configuration. You also don’t need to add physical interfaces to the hosts just to connect to a different VLAN.

The virtualization host storage links don’t necessarily need to be either bonded or trunked unless your virtual servers will be communicating with a variety of back-end storage arrays. In most cases, a single storage array will be used, and bonding these interfaces will not necessarily result in performance improvements on a per-server basis. However, if you require significant back-end server-to-server communication, such as front-end Web servers and back-end database servers, it’s advisable to dedicate that traffic to a specific set of bonded links. They will likely not need to be trunked, but bonding those links will again provide load-balancing and redundancy on a host-by-host basis.

While a dedicated management interface isn’t truly a requirement, it can certainly make managing virtualization hosts far simpler, especially when modifying network parameters. Modifying links that also carry the management traffic can easily result in a loss of communication to the virtualization host.

So if you’re keeping count, you can see how you might have seven or more interfaces in a busy virtualization host. Obviously, this increases the number of switch ports required for a virtualization implementation, so plan accordingly. The increasing popularity of 10G networking – and the dropping cost of 10G interfaces – may enable you to drastically reduce the cabling requirements so that you can simply use a pair of trunked and bonded 10G interfaces per host with a management interface. If you can afford it, do it.•
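The interface count the author asks you to keep can be tallied directly. The Python sketch below uses the link roles named in the article as labels (they are labels for this illustration, not vendor terminology) to show where “seven or more interfaces” comes from, and what a 10G consolidation reduces it to.

```python
# NIC count for a virtualization host as described above: bonded
# front-end and back-end pairs, storage links, and a management
# port. Role names are labels taken from the text, not standards.

host_links = {
    "front-end (bonded, 802.1q trunked)": 2,
    "back-end (bonded)": 2,
    "storage": 2,
    "management": 1,
}

total = sum(host_links.values())
print(total)  # 7 -- hence "seven or more interfaces" per busy host

# With 10G, a pair of trunked and bonded 10G links plus a separate
# management interface can replace the 1G bundles entirely:
ten_gig_host = {"10G (bonded + trunked)": 2, "management": 1}
print(sum(ten_gig_host.values()))  # 3
```

Add a pair of DMZ links for semi-trusted traffic and the per-host total climbs to nine, which is why switch-port budgeting deserves attention before the first host is racked.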


FOUR TRENDS SHAPE THE NEW DATA CENTER


By Beth Schultz • Network World

IT execs adapt to a new reality as x86 virtualization transforms the data center forever

Thanks to x86 server virtualization and its follow-on technologies, the state-of-the-art enterprise data center looks vastly different than it did even a year ago.

And moving from old school to next-generation isn’t just about hardware and software – it’s a call for a new way of thinking about the data center, as well.

“Some people are so accustomed to one application, one server and a methodology that locks you in to one way of thinking that they’re having a hard time fully understanding the new data center,” says Bill Fife, director of technology for Wholesale Electric Supply Co., in Houston. “But now with thin replication and replays and synchronization to disaster recovery sites, and virtual machines being able to move files from data store to data store and having multiple data stores on the server, and adding network adapters, you really have to sit back and think about how you want to run your operations and remember that you have options. You’re not tied down to any one path. You can go down one road today and change directions tomorrow,” Fife says.

Here are four of the major trends in today’s data center:

TREND NO. 1: I/O virtualization

At Wholesale Electric Supply, Fife is capitalizing on the ability to virtualize I/O, one of the latest of several significant technology trends shaping the new data center.

I/O virtualization, also known as I/O aggregation, splits interconnections across either 10-gigabit InfiniBand or Ethernet links. Xsigo Systems’ virtual I/O Director uses the former and Cisco’s Nexus 5000 and 7000 switches the latter, for example.

“In either case, you connect this pipe and then you can get as many virtual Ethernet and Fibre Channel connections as you want out of it,” says Logan Harbaugh, an independent analyst and member of the Network World Lab Alliance. “The architectures are similar, as there’s a limit to how much they can vary and still provide some level of functionality.”

I/O virtualization simplifies the hardware scenario in the data center rather considerably, reducing the number of connections running to each device while increasing flexibility. Take into consideration VMware’s best-practices recommendation that you assign a 1G port per virtual machine (VM). With newer 24-core servers, you could theoretically run at least 24 and maybe as many as 50 VMs on a single piece of hardware, which in turn would mean needing 50 1G ports, Harbaugh says.

Realistically, even if you could get six four-port Ethernet boards, you’d still only be able to support 24 VMs. “The nice thing about I/O virtualization is that everything shares the one InfiniBand or 10G Ethernet connection as lots of 1G pipes.”

At Wholesale Electric, Fife is using Xsigo’s virtual I/O Director to decouple processing, storage and I/O. “By doing so we’ve essentially built our own cloud because we can assign processor, RAM, disk and I/O on an as-needed basis, and then, when they’re no longer needed, get rid of it all and do something else,” he says. “There are no rigid guidelines within which we have to operate. We can be extremely flexible.”

TREND NO. 2: Data and storage convergence

Today’s data centers typically have distinct data and storage networks, and nobody much likes that situation. “As soon as people can recombine those two networks, that’s what they’re going to do,” says Joel Snyder, senior partner with consulting firm Opus One and another member of the Network World Lab Alliance.

“My belief and, yes, hope is that we’ll get rid of pure Fibre Channel and go to Fibre Channel over Ethernet [FCoE] – but I still see people buying a lot of Fibre Channel because they’re told it’s the way to go, even though our tests actually show that the network often isn’t the bottleneck,” he says. “What you can do with Fibre Channel you can do with 10G Ethernet and get equivalent or better performance, even if that’s not the belief of SAN buyers and vendors.”

These are early days for FCoE, but plenty of folks are looking at the technology, says David Newman, president of Network Test, an independent test firm, and Network World Lab Alliance member. If nothing more, the technology has cost in its favor, he says.

“Besides the capital cost of the equipment, there’s the operational expense issue. People who run plain old Ethernet cost less than people who know Fibre Channel,” Newman says. “On economic grounds, it’ll be cheaper to provision FCoE than running separate infrastructures.”

Today, Brocade and Cisco have FCoE-capable switches that fully support all prioritizations and new mechanisms on Ethernet for delivering Fibre Channel-like service levels, and other vendors are coming into the fray, as well. So building a working, end-to-end FCoE network that handles data and storage is possible today – at least using the same vendor’s gear, Newman says. Interoperability is unproven as yet.

Scott Engel, director of IT infrastructure at Transplace, a third-party logistics provider in Dallas, identifies FCoE as one of the two biggest networking and infrastructure changes coming to the company’s data center over the next year. The other is 10G to the servers, he says.

Indeed, Newman says, the real tipping point in the data center will happen over the next 12 to 18 months when 10G replaces 1G Ethernet on server motherboards. “That’ll have all sorts of follow-on effects; enabling data-storage convergence is just one,” he says.

Watch for this year to be the first with “appreciable numbers” of 40G switch ports shipping, Newman says. Fatter network pipes will be needed to accommodate the higher-speed server connections.
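The port arithmetic Harbaugh walks through under Trend No. 1 can be checked directly. The quick Python sketch below uses the figures quoted in the article to show why the physical port budget, not the CPU, becomes the limit under a 1G-port-per-VM rule; the helper function is an invented illustration.

```python
# Port math behind the I/O virtualization argument: a 1G port
# per VM (a VMware best-practices guideline cited in the article)
# vs. how many 1G ports a server can realistically hold.

def max_vms_by_ports(boards: int, ports_per_board: int) -> int:
    """VMs supportable at one dedicated 1G port per VM."""
    return boards * ports_per_board

print(max_vms_by_ports(6, 4))   # 24 -- the realistic ceiling

# A 24-core server might host 24-50 VMs, so the port budget runs
# out long before the CPU does -- hence sharing one 10G Ethernet
# or InfiniBand pipe carved into virtual NICs.
vms_wanted = 50
ports_short = vms_wanted - max_vms_by_ports(6, 4)
print(ports_short)              # 26 ports short of the guideline
```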


TREND NO. 3: Faster processors, greater consolidation

By now, most enterprises have server consolidation stories to share, spun around a virtualization theme. They tell of impressive physical-to-virtual server ratios, often in the double digits. But consolidation in the data center is just beginning, some say.

The maturity and comfort levels around virtualization are growing, which means enterprises are showing the willingness to put more and more VMs on a single system, says Steve Sibley, an IBM Power Systems manager. Within the year, he adds, the Power 750 will support up to 320 VMs on a single server, the Power 770 and 780 up to 640 VMs, with plans for up to 1,000 VMs.

The ability to support higher numbers of VMs per physical server comes on the back of faster processors, of course. In IBM’s case, the company recently introduced the Power7, an eight-core chip that delivers four times the virtualization capability, scalability and performance of its predecessor, Sibley says. The high-end Power7-based Power 780 and 770 servers will come with up to 64 Power7 cores, for example.

Intel, too, is readying an eight-core chip, code-named Nehalem-EX. That chip is expected out by mid-year.

“If you start at the chip level, the ability to deliver more performance per processor core but also pack four times as many cores onto a single chip gives a vast amount of new capacity and capability to put more virtual servers onto a single platform without sacrificing performance or capability of the overall system,” Sibley says. “That design point is enabling systems or offerings that give clients the ability to consolidate even more than they used to on a single platform at much cheaper prices than ever before.”

TREND NO. 4: Infrastructure optimization

Will your data center strategy one day include a semi tractor-trailer full of hands-off gear parked in some spot selected for optimal cooling and power supply?

Dan Kusnetzky, vice president of research operations at The 451 Group, says he can imagine so – at least as one potential alternative to building out new or extending existing data centers. “Software routes around failures, and maybe you’d replace that truck with a new one every three years or so,” he says.

The data-center-in-a-box concept is one that bears watching, agrees Doug Oathout, vice president of converged infrastructure at HP. Companies already are using data centers like pods or trailers outside their facilities, optimizing server, storage, networking, cooling and power distribution resources for that size container, he says. “Now we see the performance-optimization trend moving inside the data center.”

This is not to say the data center is going to turn into a parking lot full of semis. But enterprises that run out of space, electricity, cooling and capacity today can take the container concept and move that type of asset inside the data center, Oathout says. “We’re not talking about the container itself, but the concept, being able to say ‘I need eight racks of servers, four racks of storage, a rack and a half of networking, and here’s the power and cooling it will consume,’ and optimize that way.”

Piecing together a data center section by section is far less costly than the traditional go-for-broke approach, and delivering power and cooling a section at a time is far more efficient than moving it across a long distance, Oathout says.

“There’s so much more waste when you build a data center to the ultimate capacity vs. building it to what it needs to do, so you could almost call this a retrofitting trend,” Oathout adds. “I’m going to optimize what I’ve got, doing it with localized power, cooling and energy for the specific work I want to get done in this environment. Then I take the next step, with multiple pods, instantiations or building blocks within the data center. It’s mind-boggling how much more efficient that is compared to building a monolithic data center that has megawatts and 100,000 square feet of space yet is incapable of supporting the equipment you need for your next workload.”

Schultz is a freelance IT writer in Chicago. You can reach her at bschultz5824@gmail.com.


EMERGING IEEE ETHERNET STANDARDS COULD SOOTHE DATA CENTER HEADACHES
By Jim Duffy • Network World

Under development is a way to offload policy, security and management processing from virtual switches

Even as Cisco, HP and others are increasingly invading each other’s turf in the data center, they are also joining forces to push through new Ethernet standards that could greatly ease management of those increasingly virtualized IT nerve centers.

The IEEE 802.1Qbg and 802.1Qbh specifications are designed to address serious management issues raised by the explosion of virtual machines in data centers that traditionally have been the purview of physical servers and switches. In a nutshell, the emerging standards would offload significant amounts of policy, security and management processing from virtual switches on network interface cards (NICs) and blade servers and put it back onto the physical Ethernet switches connecting storage and compute resources.

The IEEE draft standards boast a feature called Virtual Ethernet Port Aggregation (VEPA), an extension to physical and virtual switching designed to eliminate the large number of switching elements that need to be managed in a data center. Adoption of the specs would make management easier for server and network administrators by requiring fewer elements to manage, and fewer instances of element characteristics – such as switch address tables, security and service attribute policies, and configurations – to manage.

“There needed to be a way to communicate between the hypervisor and the network,” says Jon Oltsik, an analyst at Enterprise Strategy Group. “When you start thinking about the complexities associated with running dozens of VMs on a physical server, the sophistication of data center switching has to be there.”

But adding this intelligence to the hypervisor or host would add a significant amount of network processing overhead to the server, Oltsik says. It would also duplicate the task of managing media access control address tables, aligning policies and filters to ports and/or VMs, and so forth. “If switches already have all this intelligence in them, why would we want to do this in a different place?” Oltsik notes.


VEPA does its part by allowing a physical end station to collaborate with an external switch to provide bridging support between multiple virtual end stations and VMs, and external networks. This would alleviate the need for virtual switches on blade servers to store and process every feature – such as security, policy and access control lists (ACLs) – resident on the external data center switch.

Diving into IEEE draft standard details

Together, the 802.1Qbg and bh specifications are designed to extend the capabilities of switches and end station NICs in a virtual data center, especially with the proliferation and movement of VMs. Citing data from Gartner, officials involved in the IEEE’s work on bg and bh say 50% of all data center workloads will be virtualized by 2012.

Some of the other vendors involved in the bg and bh work include 3Com (now HP), Blade Network Technologies, Brocade, Dell, Extreme Networks, IBM, Intel, Juniper Networks and QLogic. While not the first IEEE specifications to address virtual data centers, bg and bh are amendments to the IEEE 802.1Q specification for virtual LANs and are under the purview of the organization’s 802.1 Data Center Bridging and Interworking task groups.

The bg and bh standards are expected to be ratified around mid-2011, according to those involved in the IEEE effort, but pre-standard products could emerge in late 2010. Specifically, bg addresses edge virtual bridging: an environment where a physical end station contains multiple virtual end stations participating in a bridged LAN. VEPA allows an external bridge – or switch – to perform inter-VM hairpin forwarding of frames, something standard 802.1Q bridges or switches are not designed to do.

GETTING VIRTUALIZED DATA CENTERS UNDER CONTROL

The IEEE’s emerging 802.1Qbg and bh standards are designed to address manageability of the growing population of virtual machines in data centers. They are intended to better align the capabilities of physical Ethernet switches in the edge and core of data center networks with virtual switches in the server so that operations and management of these elements do not overwhelm server and network administrators. Here’s a look at key capabilities of the emerging standards:

Virtual Ethernet Port Aggregation (VEPA): Enables VMs to use an external switch to access features such as ACLs, policies, VLAN assignments, security, etc. Allows hairpin turns on the same switch port for inter-VM communications.

Multichannel: Creates virtual switch ports for simultaneous switching of traffic from multiple VMs. Adjacent switches use tags to replicate frames for multicast applications.

Remote Replication: Defines a new tag format and uses port extenders for replicating packets to a remote switch for control and feature access/assignment.
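The multichannel idea summarized above, carrying several virtual end stations’ traffic over one physical port by tagging each frame, can be sketched as a toy. The Python below is an illustration of the concept only; the channel IDs and frame structure are invented and bear no relation to the actual VN-Tag wire format.

```python
# Toy model of multichannel: frames from several VMs share one
# physical uplink; a per-VM channel tag lets the controlling
# switch sort them back onto virtual ports. Tag values and the
# frame layout here are invented for illustration.

CHANNELS = {"vm1": 101, "vm2": 102, "vm3": 103}  # VM -> channel tag

def tag_frame(src_vm: str, payload: str) -> dict:
    """End station attaches the channel tag before transmit."""
    return {"tag": CHANNELS[src_vm], "payload": payload}

def demux(frames):
    """Controlling switch: group frames by channel tag (virtual port)."""
    by_channel = {}
    for f in frames:
        by_channel.setdefault(f["tag"], []).append(f["payload"])
    return by_channel

uplink = [tag_frame("vm1", "a"), tag_frame("vm2", "b"), tag_frame("vm1", "c")]
print(demux(uplink))  # {101: ['a', 'c'], 102: ['b']}
```

Because the tag identifies the originating VM, per-VM policies, ACLs and counters can live on the external switch rather than in the hypervisor, which is the whole point of the offload.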



“On a bridge, if the port it needs to send a frame on is the same one it came in on, normally a switch will drop that packet,” says Paul Congdon, CTO at HP ProCurve, vice chair of the IEEE 802.1 group and a VEPA author. “But VEPA enables a hairpin mode to allow the frame to be forwarded out the port it came in on. It allows it to turn around and go back.”

VEPA does not modify the Ethernet frame format, only the forwarding behavior of switches, Congdon says. But VEPA by itself was limited in its capabilities. So HP combined its VEPA proposal with Cisco’s VN-Tag proposal for server/switch forwarding, management and administration to support the ability to run multiple virtual switches and multiple VEPAs simultaneously on the endpoint.

This required a channeling scheme for bg, which is based on the VN-Tag specification created by Cisco and VMware to have a policy follow a VM as it moves. This multichannel capability attaches a tag to the frame that identifies which VM the frame came in on.

But another extension was required to allow users to deploy remote switches – instead of those adjacent to the server rack – as the policy-controlling switches for the virtual environment. This is where 802.1Qbh comes in: It allows edge virtual bridges to replicate frames over multiple virtual channels to a group of remote ports. This will enable users to cascade ports for flexible network design, and make more efficient use of bandwidth for multicast, broadcast and unicast frames.

The port extension capability of bh lets administrators choose the switch to which they want to delegate policies, ACLs, filters, QoS and other parameters for VMs. Port extenders will reside in the back of a blade rack or on individual blades and act as a line card of the controlling switch, says Joe Pelissier, technical lead at Cisco.

“It greatly reduces the number of things you have to manage and simplifies management because the controlling switch is doing all of the work,” Pelissier says.

Cisco, HP say they’re in synch

What’s still missing from bg and bh is a discovery protocol for autoconfiguration, Pelissier says. Some in the 802.1 group are leaning toward using the existing Link Layer Discovery Protocol (LLDP), while others, including Cisco and HP, are inclined to define a new protocol for the task.

“LLDP is limited in the amount of data it can carry and how quickly it can carry that data,” Pelissier says. “We need something that carries data in the range of 10s to 100s of kilobytes and is able to send the data faster rather than one 1,500-byte frame a second. LLDP doesn’t have fragmentation capability either. We want to have the capability to split the data among multiple frames.”

11 of 19

Building Solid, Four Trends Shape the Soothing Data Center 10G Ethernet Shakes Net Seven Resolutions for
Resources
Reliable Networks New Data Center Headaches Design to the Core Network Management
RETHINKING THE DATA CENTER NETWORK Sponsored by

capability to split the data among multiple frames.” the same thing: reducing the number of managed data VEPA form the lowest layer of implementation, and you
Cisco and HP are leading proponents of the IEEE effort center elements and defining a clear line of demarcation can move all the way to more complex solutions such as
despite the fact that Cisco is charging hard into HP’s tradi- between NIC, server and switch administrators when mon- Cisco’s VN-Tag.”
tional server territory while HP is ramping up its networking itoring VM communications. And the proposals seem to have broad industry support.
efforts in an attempt to gain control of data centers that have “This isn’t the battle it’s been made out to be,” Pelissier says. “We do believe this is the right way to go,” says Dhriti-
been turned on their heads by virtualization technology. Though Congdon acknowledges he initially proposed man Dasgupta, senior manager of data center marketing
Cisco and HP say their VEPA and VN-Tag/multichannel VEPA as an alternative to Cisco’s VN-Tag technique, the at Juniper. “This is putting networking where it belongs,
and port extension proposals are complementary despite two together present “a nice layered architecture that which is on networking devices. The network needs to
reports that they are competing techniques to accomplish builds upon one another where virtual switches and know what’s going on.”•
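The hairpin behavior Congdon describes amounts to relaxing one rule in the classic bridge forwarding decision. As a rough illustration (not vendor code; the forwarding-table layout and names here are invented), the change can be sketched in a few lines:

```python
# Illustrative sketch of how VEPA's "hairpin mode" changes the classic
# bridge forwarding rule. In standard 802.1D behavior a frame is never
# sent back out the port it arrived on; a VEPA-aware switch relaxes
# exactly that check, so two VMs sharing one physical uplink can reach
# each other through the adjacent switch's policy engine.

def forward(fdb, in_port, dst_mac, hairpin=False):
    """Return the egress port for a frame, or None if it must be dropped.

    fdb      -- forwarding database mapping MAC -> port
    in_port  -- port the frame arrived on
    dst_mac  -- destination MAC of the frame
    hairpin  -- True if the ingress port is a VEPA hairpin port
    """
    out_port = fdb.get(dst_mac)
    if out_port is None:
        return "flood"                    # unknown unicast: flood (simplified)
    if out_port == in_port and not hairpin:
        return None                       # classic rule: drop reflected frames
    return out_port                       # hairpin mode: reflection is allowed


# Two VMs ("vm-a", "vm-b") share physical switch port 1 via a VEPA uplink.
fdb = {"vm-a": 1, "vm-b": 1, "server-x": 2}

print(forward(fdb, in_port=1, dst_mac="vm-b"))                # None: dropped
print(forward(fdb, in_port=1, dst_mac="vm-b", hairpin=True))  # 1: hairpinned
```

The point of the sketch is that the frame format is untouched; only the drop decision at the ingress port changes, which is why VEPA can be a firmware-level upgrade on existing switches.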


10G ETHERNET SHAKES NET DESIGN TO THE CORE


By Jim Duffy • Network World

Shift from three- to two-tier architectures driven by need for speed, server virtualization, unified switching fabrics

The emergence of 10 Gigabit Ethernet, virtualization and unified switching fabrics is ushering in a major shift in data center network design: three-tier switching architectures are being collapsed into two-tier ones.

Higher, non-blocking throughput from 10G Ethernet switches allows users to connect server racks and top-of-rack switches directly to the core network, obviating the need for an aggregation layer. Also, server virtualization is putting more application load on fewer servers due to the ability to decouple applications and operating systems from physical hardware. More application load on less server hardware requires a higher-performance network.

Moreover, the migration to a unified fabric that converges storage protocols onto Ethernet also requires a very low-latency, lossless architecture that lends itself to a two-tier approach. Storage traffic cannot tolerate the buffering and latency of extra switch hops through a three-tier architecture that includes a layer of aggregation switching, industry experts say.

All of this necessitates a new breed of high-performance, low-latency, non-blocking 10G Ethernet switches now hitting the market. And it won't be long before these 10G switches are upgraded to 40G and 100G Ethernet switches now that the IEEE has ratified those standards.

"Over the next few years, the old switching equipment needs to be replaced with faster and more flexible switches," says Robin Layland of Layland Consulting, an adviser to IT users and vendors. "This time, speed needs to be coupled with lower latency, abandoning spanning tree and support for the new storage protocols. Networking in the data center must evolve to a unified switching fabric."

A three-tier architecture of access, aggregation and core switches has been common in enterprise networks for the past decade or so. Desktops, printers, servers and LAN-attached devices are connected to access switches, which are then collected into aggregation switches to manage flows and building wiring. Aggregation switches then connect to core routers/switches that provide routing, connectivity to wide-area network services, segmentation and congestion management. Legacy three-tier architectures naturally have a large Cisco component – specifically, the 10-year-old Catalyst 6500 switch – given the company's dominance in enterprise and data center switching.

Cisco says a three-tier approach is optimal for segmentation and scale. But the company also supports two-tier architectures should customers demand it.

"We are offering both," says Senior Product Manager Thomas Scheibe. "It boils down to what the customer tries to achieve in the network. Each tier adds another two hops, which adds latency; on the flipside it comes down to what domain size you want and how big of a switch fabric you have in your aggregation layer. If the customer wants to have 1,000 10G ports aggregated, you need a two-tier design big enough to do that. If you don't, you need another tier to do that."

Blade Network Technologies agrees: "Two-tier vs. three-tier is in large part driven by scale," says Dan Tuchler, vice president of strategy and product management at Blade Network Technologies, a maker of blade server switches for the data center. "At a certain scale you need to start adding tiers to add aggregation."

FORK IN THE ROAD
Virtualization, inexpensive 10G links and unified Ethernet switching fabrics are catalyzing a migration from three-tier Layer 3 data center switching architectures to flatter two-tier Layer 2 designs, which subsume the aggregation layer into the access layer. Proponents say this will decrease cost, optimize operational efficiency and simplify management. [Diagram: a three-tier core/aggregation/access design alongside a two-tier design in which the access and aggregation layers are combined.]

But the latency inherent in a three-tier approach is inadequate for new data center and cloud computing environments that incorporate server virtualization and unified switching fabrics that converge LAN and storage traffic, experts say.

Applications such as storage connectivity, high-performance computing, video, extreme Web 2.0 volumes and the like require unique network attributes, according to Nick Lippis, an adviser to network equipment buyers, suppliers and service providers. Network performance has to be non-blocking, highly reliable and faultless, with low and predictable latency for broadcast, multicast and unicast traffic types.

"New applications are demanding predictable performance and latency," says Jayshree Ullal, CEO of Arista Networks, a privately held maker of low-latency 10G Ethernet top-of-rack switches for the data center. "That's why the legacy three-tier model doesn't work. Most of the switches are 10:1, 50:1 oversubscribed," meaning different applications are contending for limited bandwidth, which can degrade response time.

This oversubscription plays a role in the latency of today's switches in a three-tier data center architecture, which is 50 to 100 microseconds for an application request across the network, Layland says. Cloud and virtualized data center computing with a unified switching fabric requires less than 10 microseconds of latency to function properly, he says. Part of that requires eliminating the aggregation tier in a data center network, Layland says. But the switches themselves must use less packet buffering and oversubscription, he says.

Most current switches are store-and-forward devices that store data in large buffer queues and then forward it to the destination when it reaches the top of the queue. "The result of all the queues is that it can take 80 microseconds or more to cross a three-tier data center," he says.

New data centers require cut-through switching – which is not a new concept – to significantly reduce or even eliminate buffering within the switch, Layland says. Cut-through switches can reduce switch-to-switch latency from 15 to 50 microseconds to 2 to 4, he says.

Another factor negating the three-tier approach to data center switching is server virtualization. Adding virtualization to blade or rack-mount servers means that the servers themselves take on the role of access switching in the network. Virtual switching inside servers takes place in a hypervisor, and in other cases the network fabric is stretched to the rack level using fabric extenders. The result is that the access switching layer has been subsumed into the servers themselves, Lippis notes.

"In this model there is no third tier where traffic has to flow to accommodate server-to-server flows; traffic is either switched at access or in the core at less than 10 microseconds," he says.

Because of increased I/O associated with virtual switching in the server, there is no room for a blocking switch in between the access and the core, says Asaf Somekh, vice president of marketing for Voltaire, a maker of InfiniBand and Ethernet switches for the data center. "It's problematic to have so many layers."

Another requirement of new data center switches is to eliminate the Ethernet spanning tree algorithm, Layland says. Currently all Layer 2 switches determine the best path from one endpoint to another using the spanning tree algorithm.
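Layland's store-and-forward versus cut-through numbers come down to how much of a frame a switch must receive before it can begin transmitting. A back-of-envelope sketch (serialization delay only; queuing delay, which dominates in oversubscribed designs, is ignored, and the hop count and frame sizes are illustrative, not the article's measured figures):

```python
# Back-of-envelope sketch of why cut-through switching cuts latency: a
# store-and-forward switch must buffer the whole frame at every hop
# before transmitting, while a cut-through switch begins forwarding once
# it has read enough of the header to make a decision.

def path_latency_us(hops, frame_bytes, link_gbps, cut_through=False,
                    lookup_bytes=64):
    """Serialization latency in microseconds across `hops` switches."""
    per_hop_bytes = lookup_bytes if cut_through else frame_bytes
    per_hop_us = per_hop_bytes * 8 / (link_gbps * 1000)  # bits / Gbit/s -> us
    # The final link always serializes the full frame onto the wire.
    return hops * per_hop_us + frame_bytes * 8 / (link_gbps * 1000)

# A 1,500-byte frame crossing three switch hops (access-agg-core) at 10G:
print(round(path_latency_us(3, 1500, 10), 2))                    # 4.8
print(round(path_latency_us(3, 1500, 10, cut_through=True), 2))  # 1.35
```

Even in this simplified model, dropping a tier and switching cut-through multiplies down the per-request delay, which is the direction the article's real-world numbers (50 to 100 microseconds today versus a sub-10-microsecond target) point as well.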


Only one path is active; the other paths through the fabric to the destination are only used if the best path fails. The lossless, low-latency requirements of unified fabrics in virtualized data centers require switches using multiple paths to get traffic to its destination, Layland says. These switches continually monitor potential congestion points and pick the fastest and best path at the time the packet is being sent.

"Spanning tree has worked well since the beginning of Layer 2 networking but the 'only one path' [approach] is not good enough in a non-queuing and non-discarding world," Layland says.

Finally, cost is a key factor in driving two-tier architectures. Ten gigabit Ethernet ports are inexpensive – about $500, or twice that of Gigabit Ethernet ports, yet with 10 times the bandwidth. Virtualization allows fewer servers to process more applications, thereby eliminating the need to acquire more servers.

And a unified fabric means a server does not need separate adapters and interfaces for LAN and storage traffic. Combining both on the same network can reduce the number and cost of interface adapters by half, Layland notes. And by eliminating the need for an aggregation layer of switching, there are fewer switches to operate, support, maintain and manage.

"If you have switches with adequate capacity and you've got the right ratio of input ports to trunks, you don't need the aggregation layer," says Joe Skorupa, a Gartner analyst. "What you're doing is adding a lot of complexity and a lot of cost, extra heat and harder troubleshooting for marginal value at best." •
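The difference between spanning tree's single active path and a multipath fabric can be shown with a toy topology. This is a counting exercise, not a protocol implementation; the leaf/spine names are invented for illustration:

```python
# Toy illustration of the "only one path" limitation: spanning tree
# prunes a Layer 2 mesh to a loop-free tree, so redundant links sit
# idle, while a multipath (ECMP-style) fabric can spread flows across
# every equal-cost path between the same two switches.

# A small leaf/spine mesh: each leaf connects to both spines.
links = {("leaf1", "spine1"), ("leaf1", "spine2"),
         ("leaf2", "spine1"), ("leaf2", "spine2")}

def paths(links, src, dst):
    """All loop-free paths from src to dst in an undirected graph."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    found, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            found.append(path)
            continue
        for nxt in adj[node]:
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return found

# Full mesh: two equal-cost leaf-spine-leaf paths are available.
print(len(paths(links, "leaf1", "leaf2")))          # 2

# Spanning tree logically blocks leaf2's link to spine2; one path remains.
stp_links = links - {("leaf2", "spine2")}
print(len(paths(stp_links, "leaf1", "leaf2")))      # 1
```

In a real fabric the multipath switches Layland describes go a step further, choosing among those equal-cost paths per packet or per flow based on observed congestion rather than a static tree.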


SEVEN RESOLUTIONS FOR NETWORK MANAGEMENT


By Jim Frey • Network World

One analyst's advice on how to keep your edge

Here are some suggestions for "resolutions" [as you look] forward to the road ahead:

1. Build an understanding of IP videoconferencing.
It's big, it's bad, and it's going to change your life, especially when desktop videoconferencing starts to catch on. Videoconferencing is real-time, requires priority QoS, low latency and many times more bandwidth than VoIP. If you had to shake a few skeletons out of the wiring closet when you rolled out VoIP, you better be ready for a lot more skeletons. Start by finding out what type of videoconferencing is being used or is planned for your workforce, and figure out how much load this will create on your network before it starts a viral ramp-up.

2. Become more application-aware.
How can you really be in tune with the business or organization you are supporting if you don't know where and how well the really important apps and services are running? And conversely, how can you understand if the loads your network is carrying are even relevant or just so much streaming audio keeping remote office workers entertained during the business day? Look to NetFlow (or similar) data or packet-based monitoring tools to give you this perspective.

3. Start tracking user experience.
Even if you love the thrill of firefighting and troubleshooting gnarly performance issues across distributed, n-tier architectures, the greatest satisfaction (and kudos) can be gained by recognizing a problem before calls start coming in to the help desk. And the first line of defense here is understanding what motivates users to call the help desk – their experience in using (or trying to use) the applications and services which IT provides. User quality-of-experience data can be gained via on-client agents, synthetic traffic generators (whether internally managed or externally subscribed), or by passively monitoring traffic and comparing request/response patterns. Best practices employ a mix of these, but any one is better than none.

4. Think proactive/preventative.
Similar to #3, but more broadly speaking, an ounce of problem prevention is worth at least a pound of frantic troubleshooting cure. And there are lots of options here. One of the most effective is to get better change control in place, thus preventing the "oops" moments when the upgrade you roll out breaks something else (or a lot of other things). Others include using service mapping and assessing health and risk on a sustained basis, or using predictive analytics tools to help you sniff out the important early warning signs of pending issues hidden in all of that performance monitoring data you've been collecting.

5. Make friends with the system admins and app support guys.
OK, maybe that's two resolutions, but it's all about getting along better. Unless you are in the minority, your cross-organization working relationships usually look more like a Big Fat Greek Wedding than one big happy family. Take advantage of the fact that you can help to measure IT service delivery in a way that the other guys can't – in context with everything else that is going across the wire – and share that data openly and freely. Many times, network-facing data can be the most effective place to start the triage process when no one else is able to get to the root of a problem.

6. Embrace automation.
With the onslaught of virtualization (a.k.a. "server hide and seek"), mobility (a.k.a. "client hide and seek") and composite Web applications (a.k.a. – you guessed it – "application hide and seek"), you won't be able to keep up with all of the moving parts without automating discovery and upkeep of relationship recognition and modeling. Automation is also available for responding to well-known event scenarios with pre-scripted actions, change management for configuration roll-backs, compliance auditing, and predictive analytics.

7. Figure out how to leverage virtualization.
One of the more interesting evolutions of management technology is the growth in the number of hypervisor platforms in place around your network. What started purely as a computing system concept has rapidly spread to network equipment, so you can now deploy management tools to new places as virtual images or virtual appliances quickly and easily. Keep these in mind when you are trying to work out how to achieve better distribution of management tools and instrumentation. •

Frey is a senior analyst with Enterprise Management Associates.
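The application-awareness in resolution #2 boils down to aggregating flow records by application and seeing what actually dominates the wire. A minimal sketch, assuming a NetFlow-like record layout that is invented here for illustration:

```python
# Minimal sketch of the "application-aware" idea: roll up flow records
# (NetFlow-style; the record fields here are invented for illustration)
# by destination port to see which applications carry the most traffic.
from collections import Counter

PORT_NAMES = {80: "http", 443: "https", 554: "rtsp-streaming", 5060: "sip-voip"}

def top_applications(flows, n=3):
    """Return the n (application, bytes) pairs carrying the most traffic."""
    usage = Counter()
    for flow in flows:
        app = PORT_NAMES.get(flow["dst_port"], f"port-{flow['dst_port']}")
        usage[app] += flow["bytes"]
    return usage.most_common(n)

flows = [
    {"dst_port": 443, "bytes": 120_000},
    {"dst_port": 554, "bytes": 900_000},   # streaming media to a remote office
    {"dst_port": 5060, "bytes": 40_000},
    {"dst_port": 443, "bytes": 310_000},
]
print(top_applications(flows))
# [('rtsp-streaming', 900000), ('https', 430000), ('sip-voip', 40000)]
```

Even a crude summary like this answers the question the resolution poses: whether the network's load is business-relevant traffic or "so much streaming audio." Production tools classify far more accurately than a port lookup, but the aggregation step is the same.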


DATA CENTER NETWORK RESOURCES



White Paper: Freeing Your Network Infrastructure featuring Gartner
The evolution of networks has transformed business while simultaneously creating legacy systems that have become difficult to manage. Learn about the benefits of converged infrastructure and revised policies to take systems into the next generation. Read more >>

Research Report: IDC: ROI of Switched Ethernet Networking Solutions
IDC interviewed several organizations to determine their future networking strategies. Learn how they built a foundation to meet network switch demands with an average 5.7-month payback and zero annual maintenance fees. Learn more >>

White Paper: Redefining the Economics of Networking
See why Gartner and the network industry have positioned HP as a leader and the fastest-growing enterprise Ethernet LAN networking vendor. HP's commitment to industry standards can help organizations optimize their networks. Read more >>

White Paper: Interconnecting the Intelligent EDGE
Learn how building upon the ProCurve Adaptive EDGE Architecture – moving intelligence and functionality to the edge of the network and interconnecting all devices with Interconnect Fabric offerings – can help companies effectively establish a secure, mobile, multi-service infrastructure. Minimize investment risk and ensure maximum value, immediately and well into the future. Learn more >>

Solution Brief: Innovation Through End-to-End Unified Networking Solutions
Through a broad portfolio of secure unified networking products and solutions, HP helps businesses to reduce complexity, enhance business agility, and manage costs. Review this Solution Brief for midsize companies. Read more >>
