
SAN WHITE PAPER

STORAGE AREA NETWORK

Achieving Enterprise SAN Performance with the Brocade 48000 Director

A best-in-class architecture enables the widest range of efficiency, performance, and flexibility advantages.

The Brocade 48000 Director is the industry's highest-performing platform for supporting enterprise-class Storage Area Network (SAN) operations. With its intelligent fifth-generation ASICs and new hardware and software capabilities, the Brocade 48000 provides a reliable foundation for fully connected multiprotocol SAN fabrics, FICON solutions, and Meta SANs capable of supporting thousands of servers and storage devices. This paper describes how IT organizations can leverage the benefits of this SAN director to maximize performance, flexibility, and data availability in mission-critical environments. In addition to summarizing the architectural advantages of the Brocade 48000, this paper explains how the various blades used in the platform can help optimize performance to address specific requirements. For more information about SAN design or additional Brocade solutions, such as the Brocade Multiprotocol Router, visit the Brocade Bookshelf at www.brocade.com/products/sanadmin_bookshelf.

OVERVIEW
In May 2005, Brocade introduced the Brocade 48000 Director (see Figure 1), a third-generation SAN director and the first in the industry to provide 4 Gbit/sec Fibre Channel capabilities. Since that time, the Brocade 48000 has become a key component in thousands of data centers around the world.

Figure 1. The Brocade 48000 Director in a 384-port configuration.

Compared to competitive offerings introduced in 2006, the Brocade 48000 is the industry's fastest and most advanced SAN director, providing numerous advantages:

• The platform scales from as few as 16 to as many as 384 4 Gbit/sec ports in a single domain.
• The central memory architecture used in Brocade Application Specific Integrated Circuits (ASICs) is never subject to Head of Line Blocking (HoLB).
• The product design enables simultaneous uncongested operation on all ports as long as simple best practices are followed.
• The platform can provide over 1.5 Tbit/sec (3 Tbit/sec full duplex) of usable switching capacity in a chassis designed to support even higher port speeds in the future.

In addition to providing the highest levels of performance, the Brocade 48000 features a modular high-availability architecture that supports five-nines environments. Moreover, the platform's industry-leading power and cooling efficiency helps reduce ownership costs while maximizing rack density. The Brocade 48000 uses just 2.9 watts per port in its largest configuration (0.75 watts per gigabit), twice as efficient as its predecessor and up to six times more efficient than competitive products. This efficiency not only reduces data center electric bills; it also reduces cooling requirements and minimizes or eliminates the need for data center infrastructure upgrades, such as new PDUs, power circuits, and larger HVAC units. In addition, the highly integrated architecture uses fewer components per board, which improves key reliability metrics such as Mean Time Between Failure (MTBF).

How Is Fibre Channel Bandwidth Measured?


Fibre Channel is a full-duplex network technology, meaning that transmission can occur in both directions simultaneously. However, much like a highway speed limit sign, the name of the rated standard (for example, 4 Gbit/sec) refers only to the bandwidth going in one direction. While a 4 Gbit/sec link could be considered 8 Gbit/sec full duplex, this is an uncommon usage and potentially confusing. When considering the aggregate switching bandwidth of a SAN director, it is best to use the same point of reference as the Fibre Channel specification. Any bandwidth measurement doubled to reflect full-duplex capabilities should always be explicitly labeled as such. Full-duplex transmission speeds are included in this paper only to provide a point of comparison to other vendors that double aggregate measurements.
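
To make the labeling convention concrete, here is a minimal sketch in Python (the function name is illustrative, not from the paper) that states an aggregate figure both ways for a fully populated chassis:

```python
def aggregate_bandwidth(ports: int, gbit_per_port: float) -> dict:
    """Aggregate switching bandwidth, labeled both ways."""
    one_way = ports * gbit_per_port
    return {
        "rated (one direction)": f"{one_way:g} Gbit/sec",
        "full duplex (label explicitly)": f"{one_way * 2:g} Gbit/sec",
    }

# A fully populated Brocade 48000: 384 ports at 4 Gbit/sec.
print(aggregate_bandwidth(384, 4))
# {'rated (one direction)': '1536 Gbit/sec',
#  'full duplex (label explicitly)': '3072 Gbit/sec'}
```

The one-direction result, 1536 Gbit/sec, is the "over 1.5 Tbit/sec" figure cited in the overview; the doubled figure must carry the full-duplex label.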

The Brocade 48000 is also highly flexible, supporting Fibre Channel, FICON, FCIP with IPsec, and iSCSI today, and additional protocols in the future. IT organizations can easily mix various Fibre Channel blade options to build an architecture that has the optimal price/performance ratio for specific SAN environments. As of late 2006, the Brocade 48000 supports the following blades:

• Control processor: CPU plus 256 Gbit/sec (512 Gbit/sec full duplex) backplane switching module
• 16-port 4 Gbit/sec Fibre Channel blade (FC4-16, or 16-port blade)
• 32-port 4 Gbit/sec Fibre Channel blade (FC4-32, or 32-port blade)
• 48-port 4 Gbit/sec Fibre Channel blade (FC4-48, or 48-port blade)
• 16-port 4 Gbit/sec Fibre Channel routing blade with two FCIP ports and FICON support (FR4-18i, or router blade)
• 8-port 4 Gbit/sec Fibre Channel blade with eight iSCSI ports (FC4-16IP, or iSCSI blade)

Even though it provides all of these enterprise-class capabilities, the Brocade 48000 has plug-and-play setup characteristics, and IT organizations can follow a few simple guidelines to maximize its performance and availability. This paper describes the director's internal architecture and how to utilize the director and its blades to address particular business requirements.

BROCADE 48000 PLATFORM ASIC FEATURES


There are many different ways to build a director: the possibilities include shared memory architectures, crossbars, and bus designs. High-speed switches for both Ethernet and Fibre Channel use shared memory designs for the highest performance, while commodity Ethernet switches often use crossbars to lower development costs. Whatever the method, large modular switches need some kind of internal connectivity between discrete components (blades, modules, or linecards) over a midplane or backplane. The Brocade 48000 features an internal Channeled Central Memory Architecture (CCMA) fabric of Fibre Channel ASICs capable of switching at 256 Gbit/sec (512 Gbit/sec full duplex) per chip. Each Brocade Condor ASIC has thirty-two 4 Gbit/sec ports that can be combined into virtual interfaces of any size, up to the full capacity of the chip. The Brocade shared memory architecture uses the same protocol as the front-end ports, enabling back-end ports to avoid latency due to protocol conversion overhead. When a frame enters the ASIC, the destination address is read from the header, which enables routing decisions to be made even before the whole frame has been received. This allows the ASICs to perform cut-through routing: a frame can begin transmission out of the correct destination port on the ASIC even before the initiating device has finished transmitting it. Only Brocade offers a SAN architecture that can make these types of switching decisions at the port level, thereby enabling local switching and the ability to deliver 1.5 Tbit/sec of bandwidth in the system. Local latency is 0.8 µs and blade-to-blade latency is 2.4 µs, the fastest in the industry. As a result, the Brocade 48000 has the lowest delay and highest performance of any Fibre Channel product in the industry.
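
To illustrate why cut-through routing matters, here is a toy latency model, not a description of Brocade internals: the frame and header sizes are textbook Fibre Channel values, and the simple 8-bits-per-byte serialization deliberately ignores line encoding.

```python
# Toy model contrasting cut-through routing with store-and-forward.
# Assumed values: 24-byte FC frame header, ~2148-byte maximum frame.

FC_HEADER_BYTES = 24      # the destination ID arrives in the header
MAX_FRAME_BYTES = 2148    # full-size Fibre Channel frame (approximate)
LINE_RATE_BPS = 4e9       # 4 Gbit/sec, one direction

def serialization_delay(nbytes: int) -> float:
    """Seconds to receive nbytes at the 4 Gbit/sec line rate."""
    return nbytes * 8 / LINE_RATE_BPS

# Store-and-forward must buffer the entire frame before forwarding.
print(f"store-and-forward wait: {serialization_delay(MAX_FRAME_BYTES) * 1e6:.1f} us")  # ~4.3 us
# Cut-through can begin forwarding once the header, and thus the
# destination address, has been received.
print(f"cut-through wait:       {serialization_delay(FC_HEADER_BYTES) * 1e9:.0f} ns")   # ~48 ns
```

The two-orders-of-magnitude gap in forwarding wait time is why header-based routing decisions keep per-hop latency in the sub-microsecond range.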

Because port-blade Condor ASICs can act as independent switching engines, the Brocade 48000 can leverage localized switching within a port group in addition to switching over the backplane. On the 16- and 32-port blades, local switching is performed within 16-port groups; on the 48-port blade, local switching is performed within 24-port groups. Unlike competitive offerings, frames being switched within port groups do not need to traverse the backplane. This enables every port on high-density blades to communicate at full 4 Gbit/sec speed with port-to-port latency of just 800 ns, 25 times better than the next-fastest SAN director on the market. The Brocade 48000 also has 1024 buffer-to-buffer credits within each Condor ASIC to support longer-distance configurations. Similarly, hardware-enforced zoning resources provide more flexible hardware-enforced zone sets as well as increased security between the connected devices in a shared network. The Condor ASIC also enhances Brocade Inter-Switch Link (ISL) Trunking features with 32 Gbit/sec frame-level trunks (up to eight 4 Gbit/sec links in a trunk) and Dynamic Path Selection (DPS) for exchange-level and device-level balancing between trunk groups. Up to eight trunks can be balanced for 256 Gbit/sec (512 Gbit/sec full duplex). (A Fibre Channel exchange is generally equivalent to a SCSI operation.) Furthermore, Brocade has significantly improved frame-level trunking: trunks are now masterless. If any trunk member drops, the trunk does not have to be rebuilt; the trunk bandwidth drops proportionally, but the trunk remains active.
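
The port-group boundaries determine which flows can be switched locally. Below is a minimal "do these two ports switch locally?" sketch: the 32-port layout (ports 0-7 plus 16-23 in one group) is taken from the 32-port blade section later in this paper, while the contiguous 48-port layout is an assumption for illustration only.

```python
# Local switching group membership per blade type.
GROUPS = {
    "FC4-16": [set(range(16))],                        # one ASIC, one group
    "FC4-32": [set(range(0, 8)) | set(range(16, 24)),  # layout per this paper
               set(range(8, 16)) | set(range(24, 32))],
    "FC4-48": [set(range(0, 24)), set(range(24, 48))], # assumed contiguous
}

def switches_locally(blade: str, port_a: int, port_b: int) -> bool:
    """True if both ports share a local switching group (no backplane hop)."""
    return any(port_a in g and port_b in g for g in GROUPS[blade])

print(switches_locally("FC4-32", 3, 18))   # True: same group, 800 ns path
print(switches_locally("FC4-32", 3, 9))    # False: crosses the backplane
```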

BROCADE 48000 PLATFORM ARCHITECTURE


In the Brocade 48000, each port blade has Condor ASICs that expose a certain number of ports for external connectivity and a certain number of ports to the control processors via the backplane. The director uses an ASIC layout analogous to a fat-tree core/edge topology. The fat-tree layout is symmetrical: all ports have equal access to all other ports. The director can switch frames locally if the destination port is on the same ASIC as the source. This is an important feature for high-density environments, because it allows over-subscribed blades to achieve full uncongested line-rate performance. No other director offers local switching: traffic must traverse the crossbar ASIC even when traveling to a neighboring port, a trait that ultimately degrades performance. The flexible Brocade 48000 architecture utilizes a wide variety of blades for increasing port density or introducing multiprotocol capabilities. IT organizations can easily mix the various blades in the Brocade 48000 to address unique business requirements and ensure an optimal price/performance ratio. The following blades are available (with more planned):

• 16-port Fibre Channel blade
• 32-port Fibre Channel blade
• 48-port Fibre Channel blade
• Fibre Channel routing and FCIP blade
• Fibre Channel and iSCSI blade

16-port Fibre Channel Blade


On the 16-port blade, all ports have 64 Gbit/sec (128 Gbit/sec full duplex) of possible external input and the same internal bandwidth available. In other words, the blade has a 1:1 subscription ratio. It is useful for extremely high-performance servers, supercomputing environments, high-performance shared storage subsystems, and SANs with unpredictable traffic patterns. The 16-port blade is highly integrated, with just one active switching component (the ASIC) and associated support components, a design that results in lower power and cooling requirements as well as a higher MTBF. Figure 2 shows a functional block diagram and photograph of the 16-port blade, illustrating the efficiency of the design.

Figure 2. 16-port blade design.

[Figure 2 labels: sixteen 4 Gbit/sec ports; 1:1 subscription ratio at 4 Gbit/sec; one ASIC; two 32 Gbit/sec pipes providing 64 Gbit/sec to the control processor/core.]

Figure 3 shows how the blade positions in the Brocade 48000 are connected to each other using 16-port blades in a 128-port configuration.

Figure 3. Overview of a 128-port configuration.

On the left is an abstract cable-side view of the director, showing the chassis slots populated with 16-port blades. On the right is a high-level diagram of how the slots interact with each other over the backplane. Each thick line represents 32 Gbit/sec (64 Gbit/sec full duplex) of internal links connecting the port blades with the control processor blades. The control processor blades contain the ASICs that switch between the port blades. Every port blade is connected to both control processors, and the aggregate bandwidth of these internal links is equal to the aggregate bandwidth available on external ports. Each port blade has sixteen 4 Gbit/sec ports (64 Gbit/sec, or 128 Gbit/sec full duplex) available externally, and sixteen 4 Gbit/sec ports (64 Gbit/sec, or 128 Gbit/sec full duplex) provided by internal links to the backplane. The mesh design, connecting every port blade to every control processor blade, makes it analogous to a core/edge network, and the 1:1 internal/external bandwidth ratio makes it a fat-tree, or non-over-subscribed, layout. It is possible to switch locally between ports on the blade, but with a 1:1 backplane subscription ratio, it is not necessary to do so. The primary utility of local switching on this blade is to reduce port-to-port latency for the highest possible performance: crossing the backplane incurs 2.4 µs of latency, whereas locally switched frames cross the switch in only 800 ns. However, even 2.4 µs is still 50 times faster than a read-cache hit on a disk array, and is much faster than any competing director product.
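
As a quick sanity check, the arithmetic below restates that comparison. The ~120 µs read-cache-hit time is not a measured value; it is simply implied by the "50 times faster" claim above.

```python
# Latency comparison from the text above.
LOCAL_SWITCH_S = 800e-9            # same-ASIC, locally switched path
BACKPLANE_S = 2.4e-6               # blade-to-blade across the backplane
CACHE_HIT_S = 50 * BACKPLANE_S     # ~120e-6, implied by "50 times faster"

print(BACKPLANE_S / LOCAL_SWITCH_S)   # 3.0: backplane path vs. local path
print(CACHE_HIT_S / BACKPLANE_S)      # 50.0: even the backplane path is
                                      # small next to an array cache hit
```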

Switching Speed Defined


When describing SAN switching speed, vendors typically use the following measurements:

• Milliseconds (ms): one thousandth of a second
• Microseconds (µs): one millionth of a second
• Nanoseconds (ns): one billionth of a second

32-port Fibre Channel Blade


The 32-port blade is designed with a 16:8 subscription ratio at 4 Gbit/sec for non-local traffic, and a 1:1 ratio at 2 Gbit/sec for any traffic pattern. If some or all of the attached servers and storage devices run at 2 Gbit/sec, or if I/O profiles are bursty, the 32-port blade typically provides the same performance as the 16-port blade. Figure 4 shows a functional block diagram and photograph of the 32-port blade.

Figure 4. 32-port blade design.

[Figure 4 labels: two ASICs, each serving a 16-port 4 Gbit/sec local switching group at 16:8 over-subscription; each ASIC has a 32 Gbit/sec pipe, for 64 Gbit/sec total to the control processors; power and control path.]
Figure 5 shows how the blade positions in the director are connected to each other using 32-port blades in a 256-port configuration.

Figure 5. Overview of a 256-port configuration.

When connecting a large number of devices that need sustained 4 Gbit/sec line rates, IT organizations can use locality to avoid congestion. The blade is divided into two 16-port groups for local switching. The physically lower 16 ports (ports 0 to 7 and ports 16 to 23) form one group and the upper ports (ports 8 to 15 and ports 24 to 31) form the other group. Figure 6 illustrates the internal connectivity between 32-port blades and the control processors.

Figure 6. 32-port blade internal connectivity.

[Figure 6 labels: port blade 1 with two Condor ASICs, each serving sixteen 4 Gbit/sec ports and a 32 Gbit/sec pipe (64 Gbit/sec full duplex) to the core; each line is a 16 Gbit/sec frame-balanced pipe (32 Gbit/sec full duplex) running to core CP 0 (slot 5) and core CP 1 (slot 6).]
There are two ASICs on each port blade, and each ASIC has a group of 16 outward-facing ports. For each group, there are two internal 8 Gbit/sec connections to each of the two control processors, for a total of 32 Gbit/sec (64 Gbit/sec full duplex) in backplane switching capacity. Traffic is balanced across the paths, such that the four 8 Gbit/sec connections form this virtual 32 Gbit/sec backplane pipe. Any combination of the 16 outward-facing ports in a group can use up to the full backplane bandwidth without congestion. This workload balancing and the resulting optimized performance represent the automatic behavior of the architecture and require no administration. If more than 32 Gbit/sec of total throughput is needed for a 16-port group, high-priority connections can be localized within the group, ensuring that up to 16 devices or ISLs have ample bandwidth to connect to devices on other blades. Such connections do not use the backplane bandwidth. Likewise, localized traffic does not count against the subscription ratio and cannot be impacted by traffic from other devices. Regardless of the number of devices communicating over the backplane, locally switched devices are guaranteed 4 Gbit/sec bandwidth. This Brocade-unique technology for local switching helps preserve bandwidth to reduce the possibility of congestion in higher-density configurations.
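
A sketch of this congestion rule follows. The 32 Gbit/sec group pipe is the figure given above; the flow rates are hypothetical.

```python
# Locally switched traffic never touches the backplane pipe, so only
# flows leaving the group count against its 32 Gbit/sec capacity.
GROUP_BACKPLANE_GBIT = 32   # per 16-port group, one direction

def group_congested(backplane_flows_gbit: list[float]) -> bool:
    """True if non-local demand exceeds the group's backplane pipe."""
    return sum(backplane_flows_gbit) > GROUP_BACKPLANE_GBIT

# Ten hosts sustaining 4 Gbit/sec to devices on other blades: 40 > 32.
print(group_congested([4.0] * 10))   # True: would congest
# Localize two of those flows within the group; the other eight fit.
print(group_congested([4.0] * 8))    # False: exactly 32 Gbit/sec fits
```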

48-port Fibre Channel Blade


At 24:8, the 48-port blade has a higher backplane over-subscription ratio, but it also has larger port groups to take advantage of locality. The backplane connectivity of this blade is identical to that of the 32-port blade. The only difference is that, rather than just 16 ports per ASIC, the 48-port blade exposes 24 outward-facing ports (96 Gbit/sec, or 192 Gbit/sec full duplex, of local switching per ASIC). This blade is especially useful for high-density SAN deployments where:

• Large numbers of servers need to be connected to the director
• Some or all hosts run below line rate much of the time
• Most traffic flows can potentially be localized

Figure 7 shows a functional block diagram and photograph of the 48-port blade.

Figure 7. 48-port blade design.

[Figure 7 labels: two ASICs, each serving a 24-port 4 Gbit/sec local switching group at 24:8 over-subscription; each ASIC has a 32 Gbit/sec pipe, for 64 Gbit/sec total to the control processors; power and control path.]


Fibre Channel Routing and FCIP Blade


The Brocade FR4-18i routing blade consists of sixteen 4 Gbit/sec Fibre Channel ports with enhanced routing processors and two Gigabit Ethernet ports for FCIP. The FCIP ports support up to eight virtual tunnels, and up to 32 virtual tunnels are supported in a Brocade 48000 with two blades. In addition, the Brocade FR4-18i routing blade supports line-rate performance, fast write, compression, encryption, tape pipelining, and FICON. The locality groups are ports 0 to 7 and ports 8 to 15. Figure 8 shows a functional block diagram and photograph of this blade.
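
The tunnel counts stated above reduce to simple multiplication. A sketch follows; it reads "up to eight virtual tunnels" as a per-GbE-port limit, an interpretation consistent with the 32-tunnel figure for two blades:

```python
# FCIP tunnel capacity of the FR4-18i, per the figures above.
TUNNELS_PER_GBE_PORT = 8
GBE_PORTS_PER_BLADE = 2

def max_fcip_tunnels(blades: int) -> int:
    return blades * GBE_PORTS_PER_BLADE * TUNNELS_PER_GBE_PORT

print(max_fcip_tunnels(1))   # 16 tunnels with one router blade
print(max_fcip_tunnels(2))   # 32 tunnels in a chassis with two blades
```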

Figure 8. Fibre Channel routing and FCIP blade design.

[Figure 8 labels: FCIP and routing block with two Gigabit Ethernet ports, compression, and encryption; Fibre Channel switching with two 8-port 4 Gbit/sec local switching groups and two 32 Gbit/sec pipes (64 Gbit/sec to the control processor); power and control path.]

Fibre Channel and iSCSI Blade


The Fibre Channel and iSCSI blade consists of eight 4 Gbit/sec Fibre Channel ports and eight iSCSI-over-Gigabit Ethernet ports. All ports switch locally within the 8-port group. The iSCSI ports act as a gateway, translating iSCSI hosts onto the Fibre Channel fabric. Because each port supports up to 64 iSCSI initiators, one blade can support up to 512 servers. Populated with four blades, a single Brocade 48000 can fan in 2048 servers. Figure 9 shows a functional block diagram and photograph of this blade.
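
The fan-in figures above follow directly from the per-port initiator limit; a short sketch:

```python
# iSCSI fan-in capacity of the FC4-16IP blade, per the limits above.
ISCSI_PORTS_PER_BLADE = 8
INITIATORS_PER_PORT = 64     # iSCSI hosts per Gigabit Ethernet port

def max_iscsi_hosts(blades: int) -> int:
    return blades * ISCSI_PORTS_PER_BLADE * INITIATORS_PER_PORT

print(max_iscsi_hosts(1))   # 512 servers on one blade
print(max_iscsi_hosts(4))   # 2048 servers in a single Brocade 48000
```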

Figure 9. iSCSI blade design.

[Figure 9 labels: iSCSI and Ethernet block with eight Gigabit Ethernet ports; Fibre Channel switching with eight 4 Gbit/sec Fibre Channel ports on one ASIC and two 32 Gbit/sec pipes (64 Gbit/sec to the control processor); power and control path.]


THE BENEFITS OF A CORE/EDGE NETWORK DESIGN


The core/edge network topology has emerged as the design of choice for large-scale, highly available, high-performance SANs constructed with multiple switches of any size, from any vendor. The Brocade 48000 uses an internal architecture analogous to a core/edge fat-tree topology, which is widely recognized as the highest-performance arrangement of switches. The Brocade 48000 is not actually a fat-tree network of discrete switches, but thinking of it this way provides a useful visualization. IT organizations could build a network of 32-port switches with performance characteristics similar to a 128-port Brocade 48000, but it would require twelve Brocade 4100 switches arranged in a fat-tree fashion. This network would require more complex cabling, management of twelve discrete switching elements, significant power and cooling, three times the number of SFPs to support the ISLs, and installation of a substantial cable plant. As a result, the switch-based design is difficult to cost-justify. In contrast, the Brocade 48000 delivers the same high level of performance without the associated disadvantages, bringing fat-tree performance to IT organizations that could not previously cost-justify the investment or overhead.
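
The twelve-switch figure can be checked with the standard fat-tree sizing rule, in which half of each edge switch's ports face devices and half form ISLs. A sketch of that arithmetic:

```python
# Sizing a discrete fat-tree of 32-port switches (e.g. Brocade 4100s)
# to match a 128-port director at a 1:1 subscription ratio.
SWITCH_PORTS = 32

def fat_tree(device_ports: int) -> dict:
    edge = device_ports // (SWITCH_PORTS // 2)   # 16 device ports per edge
    isls = edge * (SWITCH_PORTS // 2)            # 16 ISLs per edge switch
    core = isls // SWITCH_PORTS                  # core ports terminate ISLs
    sfps = device_ports + 2 * isls               # each ISL needs two SFPs
    return {"edge": edge, "core": core, "switches": edge + core, "sfps": sfps}

print(fat_tree(128))
# {'edge': 8, 'core': 4, 'switches': 12, 'sfps': 384}
# Twelve switches and 384 SFPs (three times the 128 device-facing SFPs),
# versus a single director with no ISLs at all.
```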


It is important to understand that the internal ASIC connections within a Brocade 48000 are not E_Ports connecting a network of switches. The shared memory architecture enables the entire director to be a single domain and a single hop in a Fibre Channel network. As a result, when a port blade is removed, no fabric reconfiguration is sent across the network, which simplifies operations.

However, unlike an actual fat-tree network, the Brocade 48000:

• Is easier to deploy and manage than the analogous network of switches
• Simplifies the cable plant by eliminating the ISLs and SFP media
• Is far more scalable, because it does not consist of a large number of independent domains
• Is less expensive in terms of both initial and ongoing costs
• Has far fewer active components and therefore much higher reliability
• Does not run switch-to-switch protocols (E_Port) between blades
• Provides multiprotocol support within a single chassis
• Is capable of achieving greater performance due to internal routing optimizations


PERFORMANCE IMPACT OF CONTROL PROCESSOR FAILURE MODES


Any type of failure on the Brocade 48000, whether of a control processor or a core ASIC, is extremely rare. According to reliability statistics from Brocade OEM Partners, Brocade 48000 control processors have a calculated Mean Time Between Replacement (MTBR) of 337,000 hours (more than 38 years) based on real-world field performance. However, even in the rare occurrence of a failure, the Brocade 48000 is designed for fast and easy control processor replacement. This section describes potential failure scenarios and how the Brocade 48000 is designed to minimize the impact on performance and provide the highest level of system availability. The Brocade 48000 has two control processor blades, each of which contains a CPU and a group of ASICs that provide the core switching capacity between port groups. The control processor functions are active-passive (hot-standby) redundant, while the switching functions are active-active. The blade with the active processor is known as the active control processor blade, but both active and standby control processors have active core ASIC elements. In some scenarios, such as failure handling, it is necessary to move routes from one control processor to the other. This section describes those scenarios and their impact on data traffic and applications. The ASIC and CPU blocks are separated in both hardware and software, except for a common DC power source. Figure 10 shows a functional block diagram and photograph of the control processor blade, illustrating the efficiency of the design and the separation between the ASIC and CPU blocks.

Figure 10. Control processor blade design.

[Figure 10 labels: control processor block with control processor power, modem, serial, and Ethernet management ports, and the control path to the blades; switching block with two ASICs, switching power, and 256 Gbit/sec (512 Gbit/sec full duplex) to the backplane blades.]

Failure in a Control Processor Blade


If the processor section of the active control processor blade fails, this affects only the management plane: the core ASICs are functionally separate and continue switching frames without interruption. It is possible for a control processor block to fail completely while the core ASICs continue to operate without degradation, or vice versa. A control processor failure has no effect on the data plane: the standby control processor automatically takes over, and the switch continues to operate without dropping any data frames. Only during the short service procedure in which the control processor is physically replaced would there be a temporary degradation of available bandwidth. In most real-world cases, even during this short service procedure, application performance would not be degraded. For example, the procedure would not affect locally switched flows, and if the traffic that needs to traverse the control processors is less than the remaining system-wide switching capacity, no congestion would occur. Given the very high MTBF of the blade and the fact that the outage can and should be scheduled at a time favorable to operations, this characteristic would not have a noticeable effect in real-world SANs.

ASIC Failure or Blade Removal


If either control processor blade has a core ASIC element failure, or if the blade is removed, the director no longer has access to one set of core ASICs. All conversations being handled by those ASICs would be moved to the other blade. What happens in a particular network depends on many factors. For example, the possibility of Out-of-Order Delivery (OOD) depends on the fabric-wide In-Order Delivery (IOD) flag: if the flag is set, no OOD occurs. If it is not set, the application impact of OOD depends on the HBA, target, SCSI layer, file system, and application characteristics. Generally, this flag is set during installation by the OEM or reseller responsible for supporting the SAN fabric and is optimized for the application environment. Most currently shipping applications can withstand these OOD behaviors.


Core Element Failure


The potential impact of a core element failure on overall system performance is straightforward. If half of the core elements go offline due to a hardware failure, half of the aggregate backplane switching capacity is offline until the condition is corrected. A Brocade 48000 with just one core element can still provide 256 Gbit/sec (512 Gbit/sec full duplex) of backplane switching bandwidth, or 32 Gbit/sec (64 Gbit/sec full duplex) to every director slot. Note that in best-case scenarios, one competitive SAN director has only 48 Gbit/sec (96 Gbit/sec full duplex) of bandwidth per slot. Data flows would not necessarily become congested in a Brocade 48000 with one core element failed. In the worst case, data flows might become congested, but this requires that the director already be running at or near 100 percent of capacity on a sustained basis. On systems with typical I/O patterns, aggregate usage of the available backplane bandwidth would not even reach 50 percent; in such environments there would be no impact, even if the problem persisted for an extended period of time. Very few environments have all ports running at 4 Gbit/sec, with a 100 percent load on all data flows all the time, and use no local switching for any data flows. Even if a control processor with an MTBR of 337,000 hours does fail, performance degradation lasts only until repairs are completed, and such repairs or replacement can be completed in as little as five minutes.
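
The per-slot numbers above follow from dividing core capacity across the eight port-blade slots; a sketch:

```python
# Backplane bandwidth per port slot with one or two core elements online.
CORE_BANDWIDTH_GBIT = 256   # per control processor blade, one direction
PORT_SLOTS = 8              # port-blade slots in the chassis

def per_slot_backplane(cores_online: int) -> float:
    return cores_online * CORE_BANDWIDTH_GBIT / PORT_SLOTS

print(per_slot_backplane(2))   # 64.0 Gbit/sec per slot, normal operation
print(per_slot_backplane(1))   # 32.0 Gbit/sec per slot, degraded mode
```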


SUMMARY
With an aggregate chassis bandwidth (backplane plus local switching) nearly three times that of competitive products, the Brocade 48000 is congestion-free in real-world cases that reflect the vast majority of SAN-based applications. Although congestion might occur in unique situations, it would be infrequent, low-level, and unlikely to impact application performance. Only in the worst contrived cases would congestion be noticeable at the application level. Even in those cases, congestion could be eliminated with very little effort by using local switching. The Brocade 48000 is designed to meet the most demanding performance requirements of a director-class SAN solution. As demonstrated by Brocade testing, the Brocade 48000:

• Delivers 4 Gbit/sec line-rate performance in full-duplex operation on all ports simultaneously
• Does not suffer from Head of Line Blocking (HoLB)
• Supports local switching for the highest-performance applications
• Is designed for maximum performance with real-world SAN traffic patterns
• Supports multiprotocol blades and applications
• Is designed to support future speeds and protocols

For more information about the Brocade 48000, visit www.brocade.com. For more information about SAN design or other Brocade solutions, such as the Brocade Multiprotocol Router, visit the Brocade Bookshelf at www.brocade.com/products/sanadmin_bookshelf.



Corporate Headquarters San Jose, CA USA T: (408) 333-8000 info@brocade.com

European Headquarters Geneva, Switzerland T: +41 22 799 56 40 emea-info@brocade.com

Asia Pacific Headquarters Singapore T: +65-6538-4700 apac-info@brocade.com

© 2007 Brocade Communications Systems, Inc. All Rights Reserved. 01/07 GA-WP-879-01 Brocade, the Brocade B-weave logo, Fabric OS, File Lifecycle Manager, MyView, Secure Fabric OS, SilkWorm, and StorageX are registered trademarks and the Brocade B-wing symbol and Tapestry are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. FICON is a registered trademark of IBM Corporation in the U.S. and other countries. All other brands, products, or service names are or may be trademarks or service marks of, and are used to identify, products or services of their respective owners. Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.
