Storage Area Networks Fundamentals
Chapter · December 2007
Vladimir Riabov, Rivier University


Storage Area Network Fundamentals
Vladimir V. Riabov, Rivier College

Introduction
SAN Overview
    What Is a SAN?
    Benefits of SANs
    SAN Applications
    SAN Architecture and Infrastructure
    SAN Design and Management Issues
    SAN Operating System Software Components
    SAN Security Issues
SAN Technologies and Solutions
    Fibre-Channel Topologies
    InfiniBand Solutions
    Crossroads Systems with a Storage Router
    Brocade's Configurations
Other Storage Networking Technologies
    VI (Virtual Interface) Architecture
    Direct Access File System
    IP Storage Technologies
    SANs over IP
    Storage over IP (SoIP)
    Fabric Shortest Path First (FSPF)
    Adaptive Network Storage Architecture (ANSA)
    Storage Resource Management (SRM)
Standards
    American National Standards Institute (ANSI)
    Distributed Management Task Force (DMTF)
    Storage Systems Standards Working Group (SSSWG)
    Internet Engineering Task Force (IETF)
Storage Networking Associations, Initiatives, Forums, and Coalitions
    SNIA (Storage Networking Industry Association)
    Fibre Channel Industry Association (FCIA)
    Fibre Alliance (FA)
    Jiro
    National Storage Industry Consortium (NSIC)
The SAN Market, Vendors, and Service Providers
    Evolution of the SAN Market
    SAN Vendors and Service Providers
Conclusion
Glossary
References

INTRODUCTION

The volume and value of enterprise data have been growing faster than the speed at which traditional backup utilities' effectiveness has been increasing. Enterprises have become more dependent on their online systems and cannot ensure the availability of these data by relying only on traditional, network-bottlenecked, server-attached storage systems. Solutions to this problem have been offered by storage area network (SAN) technology, which provides enterprises with serverless, zero-time-window backup. In this new backup approach, backed-up data are moved to a secondary remote storage device, and the enterprise server is off-loaded, permitting high-performance continuous access to both applications and data. The objective of a SAN is to allow multiple servers access to a pool of data storage in which any server can potentially access any storage unit. In this environment, management plays a large role in providing security guarantees (with authorization to access particular storage devices) and sequencing or serialization guarantees (with authorization to access a particular device at a certain point in time) (Server Clusters 2003).

A SAN fabric is usually based on fibre-channel technology, which allows long-distance connections of up to 10 km. This feature is a significant advantage in a campus environment, where reliable backup resources can be shared among several divisions. For example, fibre-channel SANs for a health care enterprise allow 24/7 continuous operation, patient-record backup, and medical image archiving (Farley 2001).

The SAN has become a key element of the enterprise environment in which data availability, serviceability, and reliability are critical for a company's business. Many enterprise solutions (e.g., ATTO FibreBridge products, rack-mount solutions, the ATTO FibreCenter 3400R/D, host bus adapters, the ATTO Diamond array, Compaq StorageWorks products, EMC Connectrix solutions, and the LSI Logic E4600 Storage System) are available today. They can be effectively used in multiple-platform storage-infrastructure solutions for data-intensive applications such as e-commerce, online transaction processing, electronic vaulting, data warehousing, data mining, Internet/intranet browsing, multimedia audio/video editing, high-definition television (HDTV) streaming, and enterprise database management (Riabov 2004).

This chapter describes the fundamentals of storage area networks (SANs); their architectural elements (interfaces, interconnects, and fabrics); technologies (the fibre-channel arbitrated loop transport protocol, Brocade's configurations, the InfiniBand switched-fabric architecture, Crossroads systems with a storage router, the virtual interface architecture, the direct access file system, Internet protocol (IP) storage technologies, SANs over IP, fibre channel over IP, the Internet fibre-channel protocol, Internet SCSI, storage over IP, the fabric shortest path first protocol, storage resource management, and the adaptive network storage architecture); solutions; standards; associations, initiatives, forums, and coalitions; and vendors and service providers.

SAN OVERVIEW
What Is a SAN?
A SAN (storage area network) is a networked high-speed infrastructure (subnetwork) that establishes direct access

by servers to an interconnected group of heterogeneous data-storage devices such as optical disks, redundant arrays of inexpensive disks (RAIDs), and tape backups (or tape libraries). SANs are effective for storing large amounts of information and backing up data online in e-commerce, online transaction processing, electronic vaulting, data warehousing, data mining, multimedia Internet/intranet browsing, and enterprise database management applications. SANs provide additional capabilities (fault tolerance, remote management, clustering, and topological flexibility) to mission-critical, data-intensive applications (Troppens, Erkens, and Müller 2004). A SAN is typically a part of an enterprise network of computing resources (Sachdev and Arunkundram 2002). A simple model of the storage area network as a networked high-speed infrastructure is shown in Figure 1.

Figure 1: Storage area network as a networked high-speed enterprise infrastructure (NT = Windows NT). [Diagram: intranet/campus clients on a LAN/WAN reach UNIX servers and an NT cluster through load balancers; a fibre-channel switch and a hub form the SAN fabric, overseen by a SAN manager, connecting tape libraries, tape backups, disk arrays, a SAN appliance, and fibre-channel storage systems; an FC-to-IP link joins the fabric to the Internet.]

A SAN can be considered as an extended and shared storage bus within a data center, consisting of various storage devices and specific interfaces (e.g., fibre channel, enterprise system connection [ESCON], high-performance parallel interface [HIPPI], small computer system interface [SCSI], or serial storage architecture [SSA]) rather than Ethernet (Peterson 1998). To connect to the enterprise network, the SAN uses technologies similar to those of local area networks (LANs) and wide area networks (WANs): switches, routers, gateways, and hubs (see Figure 1). WAN carrier technologies, such as asynchronous transfer mode (ATM) or synchronous optical networks (SONET), can be used for remote archival data storing and backup. As an important element of modern distributed networking architectures of storage-centric enterprise information processing, SAN technology represents a significant step toward a fully networked, secure data storage infrastructure that is radically different from traditional server-attached storage (Clark 2003). In addition, SANs provide improved options for network storage, such as the creation of remote or local, dedicated or shared data storage networks with faster access than network-attached storage (NAS) (Preston 2002).

A SAN can be divided into four separate segments (SAN School 2006): (1) the hardware (e.g., switches, cables, disk arrays, and tape libraries), (2) the protocols (fibre-channel arbitrated loop [FC-AL] and fibre-channel switch fabric [FC-SW]) used for communication between the hardware parts, (3) hardware and software vendors, and (4) the various computer applications that benefit from using a SAN. A SAN can also be viewed as a composition of three layers (see Figure 1): (1) the host layer, which includes host bus adaptors (HBAs), drivers, host operating systems, and interface software; (2) the fabric layer (hubs, switches, fabric operating systems, and cabling); and (3) the storage layer (disks, tapes, and advanced storage software).

SANs are based on the storage-centric information-processing paradigm, which enables any-to-any connectivity of computers (servers) and storage devices over a high-speed enterprise network of interconnected fibre-channel switches that form the SAN fabric. The incidence of unconnected clusters of information is eliminated or significantly reduced by SANs. According to this concept, a SAN resides behind the server and provides any

users or devices on the enterprise network ("clients") with fast access to an array of data-storage devices. Multiple servers and workstations from various vendors, running different operating systems, can all be connected to a SAN by using fiber-optic cabling (either a single or a dual connection) and a special fibre-channel card called a "host bus adaptor" (HBA) (Anderson 2006).

A SAN can be viewed as multihost, connected-and-shared enterprise storage. Adding new storage devices and server elements resolves traditional network bottlenecks and small-scale limitations of interfaces such as SCSI and NAS, and easily expands the scale of the SAN (Thornburgh and Schoenborn 2001). Another advantage of SAN technology is that backups can be made over the SAN fibre-channel subnet; in this case, backup traffic is removed entirely from the enterprise network.

SANs can offer high availability of operations because of a fully redundant architecture supported by three building blocks: server clustering, multipathing, and storage replication (Anderson 2006). Server clustering, load balancing, and automatic recovery after a primary server failure are the principal elements of high-availability computing with a SAN. Multipathing uses redundant components to guard against the failure of any connection component (HBA, switch, cable, or array controller) by rerouting traffic from one component to another. Small enterprises widely use storage replication between servers connected to a SAN over an IP network. This enables one-to-one, one-to-many, and many-to-many replication, and data are synchronized in incremental blocks to minimize network traffic.

The SAN represents a new segment of the information services industry called storage solution providers (SSPs). However, isolated SANs cannot realize all SSP services, such as real-time data replication, failover, storage hosting, and remote vaulting.

Benefits of SANs
A SAN makes physical storage capacity a single, scalable resource and allows the flexible allocation of virtualized storage volumes (e.g., RAIDs, just a bunch of disks/drives [JBODs], and EMC, Sun, and Dell storage devices). Centralizing data storage into a single pool allows storage resources and server resources to grow independently and allows storage to be dynamically assigned from the pool. The SAN can manage backup tasks that were a huge administrative and computer-resource burden under old storage architectures. A common infrastructure for attaching storage allows a single common management model for configuration and deployment (Server Clusters 2003). The storage-management cost savings can be higher than 80%. A cost-effective, scalable SAN enhances overall system performance. It can integrate legacy SCSI devices, which can increase their systemwide effective usable capacity by up to 30% (InfraStor 2002).

SANs are an integral part of large financial-services enterprises, Internet service providers (ISPs), government organizations, research laboratories, electronic publishers, digital video production groups, TV-broadcasting stations moving to digital services, educational institutions, and any other organization with increasing data storage needs.

There are several key reasons for implementing a SAN (InfraStor 2002). The first three concern business issues of return on the investment in data storage, as well as the protection of existing investments:

• SANs are cost-effective (reduced cost of storage management, including backup and recovery; increased user productivity; cost-effective implementations of high-availability disaster protection, using remote clusters and remote mirrored arrays).
• SANs reduce business risk (faster disaster recovery; reduced revenue loss from down-time; reduced lost-opportunity costs).
• Legacy investments are protected (SANs can be implemented without abandoning existing storage infrastructures such as devices using SCSI connections).

The next four reasons for using SANs address critical technical issues that face data-center managers at a time when the volume of data to be managed and made available in many organizations is increasing at an annual rate of 60% (InfraStor 2002):

• SANs provide scalability (add servers and storage independently).
• SANs allow flexibility (reconfigure storage and servers dynamically without interrupting their services; load sharing and redistribution); additional tape or disk storage subsystems can be added to the SAN fabric dynamically.
• SANs enhance overall system performance (more effective use of existing server compute cycles; real-time backup without impacting the LAN/WAN; multiple server-to-storage paths; networked storage arrays that can outperform bus-attached storage; compatibility with parallelized database applications).
• SANs are an integral part of any high-availability plan (facilitation of shared online spares and remote backup or mirroring; reduced down-time requirements; storage independent of the application and accessible through alternative data paths such as those found in clustered systems).

With modern SAN architectures, enterprise data are available from 99.99% to 99.999% of the time, which translates to down-time ranging from 53 minutes per year to 5 minutes per year. The SAN implementation also allows data replication to remote sites for significantly faster recovery in the event of a disaster. With SANs, data replication can be applied to storage at the controller, switch, or operating-system level; this provides the capacity to create multiple copies of critical data and then move those copies to other parts of the SAN, or over a wide area network for remote protection (Storage Area Networks 2006).

The implementation of a SAN can realize significant overall cost savings in data-center operations and can increase user productivity. The opportunity to avoid escalating costs depends on decentralization of data and applications. A key element in the consolidation of data

storage must include the implementation of a basic SAN infrastructure in order to provide the following (InfraStor 2002):

• bandwidth to service clients;
• maintenance of data availability without impacting LAN bandwidth;
• scalability for long-term, rapid growth with protection of legacy investments;
• flexibility to provide an optimal balance of server and storage capacity;
• manageability for ease of installation and maintainability;
• shared access to data resources for real-time backup and recovery.

In addition, administrators can support over 40% more storage in a SAN environment than in direct-attached environments (Storage Area Networks 2006). Distributed environments require high-cost maintenance in terms of staff resources. The consolidation of distributed NT-based storage to a virtualized SAN-based resource can save 80% or more of the costs of management (InfraStor 2002).

SAN Applications
SAN applications cover the following areas of data transfer (Peterson 1998): (1) externalization of data storage out of the server—the SAN-attached storage (SAS) and NAS-with-SAN-interconnects network architectures; (2) clustering, a redundant process that provides failover, high availability, performance, and scalability through the use of multiple servers as a data pipe and allows data storage resources to be shared; (3) data protection solutions for backup, remote clustering, file mirroring, and replicating and journaling file systems by creating data storage redundancy on a dynamic basis; (4) data vaulting, the process of transferring archived data to less expensive media; (5) data interchange from one storage system to another or between different environments; and (6) disaster recovery, which is similar to data interchange, moves copies of data offsite, and is built on remote vaulting (backup) processes or on remote array mirroring or clustering.

Several new applications benefit from 2-Gb/s fibre-channel SANs (Hammond-Doel 2001): multimedia audio/video servers that can stream higher-resolution files, medical imaging, prepress work that speeds up design and file preparation, and video editing of uncompressed HDTV data.

The first effective application of SANs has been serverless backup, which provides enterprises with full-time information availability. All backup-related tasks are relegated to the SAN. Large enterprises can store and manage huge amounts of information (several terabytes or more) in the SAN's high-performance environment. Enterprise servers are connected to storage devices (e.g., RAIDs) via a high-speed interconnection, such as fibre channel. The SAN any-to-any communication principle provides the ability to share storage resources and alternative paths from server to data-storage device. A SAN is also able to share these resources among several consolidated servers (Riabov 2004).

A cluster of interconnected servers may be connected to common storage devices in the SAN environment and be accessible to all clients. Modern enterprises use this clustering technology to resolve several challenging application problems (Barker and Massiglia 2001, 244)—i.e., providing customers, partners, and employees with continuous application service, even if the enterprise systems fail, and supporting application performance growth as demand grows, without service disruption to customers. Clusters provide load balancing, high availability, and fault tolerance, and they support application scaling. In some implementations, the clustered servers can be managed from a single console. Clustering methods are effectively used in e-commerce, online transaction processing, and other Web applications that handle a high volume of requests.

SAN methods have their roots in two low-cost technologies: SCSI-based storage and the NAS-based concept. Both successfully implement storage-network links but are limited to low data volumes and rates. SCSI still remains the most popular "bus-attached" server-storage connection in SAS systems, especially at the stage of transition from SCSI bus devices to fibre-channel switches using a SCSI-to-fibre protocol converter in a new enterprise storage ("data center") environment. In a NAS system, storage elements (i.e., a disk array) are attached directly to any type of network via a LAN interface (e.g., Ethernet) and provide file-access services to computer systems. If the NAS elements are connected to SANs, they can be considered members of the SAS system. The stored data may be accessed by a host computer system using file-access protocols such as the network file system (NFS) or the common Internet file system (CIFS).

SANs provide high-bandwidth block storage access over long distances via extended fibre-channel links. However, such links are generally restricted to connections between data centers. NAS access is less restricted by physical distance because communications are via TCP/IP (InfraStor 2001). NAS controls simple access to files via a standard TCP/IP link. A SAN provides storage access to client devices but does not impose any inherent restrictions on the operating system or file system that may be used. For this reason, SANs are well suited to high-bandwidth storage access by transaction-processing and database-management-system (DBMS) applications that manage storage access by themselves. NAS, which has the inherent ability to provide shared file-level access to multiple operating-system (OS) environments, is well suited for requirements such as Web file services; computer-aided design (CAD) file access by combined WinNT/2000, UNIX, and Linux devices; and wide-area streaming video distribution (InfraStor 2001). A balanced combination of these approaches will dominate in the future. See more details in Chapter 149. Also see Chapter XXX.

SAN Architecture and Infrastructure
SAN architectures have changed evolutionarily, adapting to new application demands and expanding capacities. The original fibre-channel-based SANs were

simple loop configurations based on the fibre-channel arbitrated loop (FC-AL) standard. Low-cost loops are easily expanded and can combine up to 126 hosts and devices, but they are difficult to deploy and have significant limitations on overall bandwidth. Requirements of scalability and new functionality transformed SANs into fabric-based switching systems (fibre-channel switch fabric [FC-SW]) that could support up to 16 million hosts and devices. Numerous vendors offered different proprietary fabric-switching solutions. As a result, immature and competing standards created various interoperability problems, and homogeneous, high-cost SANs were developed. Ottem (2001) refers to this phase as the "legacy proprietary fabric switch phase." The modern established architectural approach is associated with a standards-based "open" 2-Gb fabric switch that provides all the benefits of fabric switching but is based on new industry standards (FC-SW-2) and an interoperability architecture that runs at twice the speed of legacy fabrics. The standards-based switches provide heterogeneous capability, fault isolation, and rerouting. The introduction of industry standards as a substitute for proprietary standards has reduced the price of SAN components and the management costs of running a SAN.

The open 2-Gb fibre channel doubles SAN speeds, enables greater flexibility in configuring SANs for a wide range of applications, and is especially useful for managing 1.5-Gb high-definition video data. In HDTV applications, a single fiber can carry a full high-definition video stream without having to cache, buffer, or compress the data. Other examples (Ottem 2001) include storage service providers that must deliver block data from and to users at the highest possible speeds and e-commerce companies that have to minimize transaction times. The 2-Gb fibre channel provides the high-speed backbone capability for fibre-channel networks, which can be used to interconnect two SAN switches. This configuration increases overall data throughput across the SAN even if servers and disk subsystems continue to operate via 1-Gb channels.

The fibre-channel (FC) data-storage industry has begun the transition to the 10-Gbps interface standard. Technical Committee T11 of the International Committee for Information Technology Standards (INCITS) has developed the 10GFC project proposal (also known as FC-PI-2), which describes the fibre-channel physical layer for the transport of data at a rate of approximately 10 Gbps (Technical Committee T11 2006).

New 4-Gbps technology for storage arrays has also been announced (Lyon and Sturgeon 2006). It will significantly improve sequential operations, such as database backups, long database loads, and streaming media. SAN implementations that make extensive use of interswitch links (ISLs) will also benefit from 4-Gbps technology (Business Brief 2006), which allows reducing the number of ISLs required and freeing up ports otherwise used for bandwidth aggregation.

Characteristics of four generations of SANs are summarized in Table 1.

Table 1: Four Generations of SANs

First-generation SANs
    Fabric: 1-Gb loop
    Main characteristics: FC-AL protocol; 1-Gb speed; enabled the first SANs
    Applications: SCSI replacement

Second-generation SANs
    Fabric: 1-Gb proprietary legacy fabric
    Main characteristics: FC-SW protocol; 1-Gb speed; proprietary switch-to-switch connections; expensive
    Applications: LAN-free backup; high-availability clustering

Third-generation SANs
    Fabric: 2-Gb open fabric
    Main characteristics: open FC-SW-2 protocol; 2-Gb speed; standards-based switch-to-switch connections; competition-driven price reductions
    Applications: serverless backup; heterogeneous storage consolidation; high-definition video data; virtualization

Fourth-generation SANs
    Fabric: 4–10-Gb fabric
    Main characteristics: 10GFC (FC-PI-2) protocol; 4–10-Gb speed
    Applications: medical imaging; streaming video; data mining; database backups; long database loads

A SAN system consists of software and hardware components that establish logical and physical paths between stored data and the applications that request them (Sheldon 2001). The data transforms, located on the paths from storage device to application, are the four main abstract components (Barker and Massiglia 2001, 128): the disks (viewed through ESCON, FCP, HIPPI, SCSI, and SSA interfaces as abstract entities), volumes (logical/virtual disk-like storage entities that provide their clients with identified storage blocks of persistent/retrieved data), file systems, and application-independent

database-management systems. In a system with a stor- typically support 16, 32, 64, or even 128 ports. Switches
age area network, five different combinations (Barker and can detect failed or congested connections and reroute
Massiglia 2001) of these data transforms and correspond- data to the correct device. Linked (cascaded) switches
ing transformation paths serve different applications increase the number of available SAN connections. This
and system architectures by various physical system ele- provides greater performance and resilience against indi-
ments. The disk abstraction is actually the physical disk vidual connection failures (Anderson 2006).
drive. The abstract volume entity is realized as an exter- Routers and bridges perform protocol transformations
nal or embedded RAID controller, as an out-of-band or in SANs that allow different interconnect technologies to
in-band SAN appliance, or as a volume manager serving a interoperate. The most common SAN fabrics are switched
database or an application. Storage servers (such as NAS fibre channel, switched SCSI, and switched SSA, all of
devices), database servers, and application servers may which physically link the interconnects and determine
contain the abstract file systems. These devices and servers the SAN’s performance and scalability (Simitci 2003).
can be clustered to increase scaling and application avail- Some fabrics embed operating systems that provide for
ability. In that case, their volume file and management SAN security, monitoring, and management. Hosts are
systems should be cluster-aware (Barker and Massiglia connected to the fibre-channel SAN through HBAs, which
2001). consist of hardware and interface drivers. Fibre-channel
Any SAN-based client–server system consists of three HBAs support negotiation with network-attached devices
architectural components: interfaces, interconnects or and switches and allow the host to minimize its CPU
network infrastructure components (switches, hubs, rout- overhead.
ers, bridges, gateways, multiplexers, extenders, and direc- SAN architectures are of two types: (1) “proprietary,”
tors), and fabrics. The SAN interfaces are fibre channel, those that are sourced from a single manufacturer who
ESCON, HIPPI, SCSI, and SSA. The SAN interconnects serves as the single point of contact for SAN hardware,
link these storage interfaces together, making various net- software, and services, and (2) “open” architectures that
work configurations. consist of hardware and software from more than one
The SAN infrastructure (fabric) includes the hard- manufacturer (Storage Area Networks 2003). Proprietary
ware, cabling, and software components that transmit SAN architectures are usually limited in flexibility, because
data within the SAN. Fibre-channel switches and host a hardware manufacturer could have limited experience in
bus adapters (HBAs) form the foundation, enabling serv- some specialized areas, such as tape library, software con-
ers and other storage devices to connect to one another figuration, or storage management. Open-SAN architec-
(Anderson 2006). Since 1999, most HBAs support both FC- tures are more flexible. The issue of SAN integration with
AL and FC-SW standards. However, HBA’s configuration multivendor technologies is critical to ensure functional-
is not automatic in many cases, and it is important to ver- ity, interoperability, and optimal performance. Ensuring
ify from the switch side that the hosts are operating in the data integrity guarantees and enforcing security policies
appropriate mode (Server Clusters 2003). for access rights to a particular storage device is a core
The fundamentally different fibre-channel topologies part of the infrastructure (Server Clusters 2003).
(point-to-point, FC-AL, and FC-SW) use different compo- See more details in Chapter 149.
nents to provide the infrastructure: (1) hubs, (2) switches,
and (3) bridges and routers. Hubs are the simplest form
of fibre-channel devices and are used to connect devices and
hosts into arbitrated loop configurations. Hubs are typically
half-duplex and have four, eight, twelve, or sixteen ports,
allowing up to sixteen devices and hosts to be attached.
However, the bandwidth on a hub is shared by all devices on
the hub (Server Clusters 2003). Because of these performance
constraints, hubs are used in small and/or low-bandwidth
configurations.
Switches increase the overall SAN bandwidth by connecting
system elements for data transmission and provide the
advantages of the centralized storage repositories with shared
applications and central management (IP Storage Area
Networks 2006). A fabric switch provides the full fibre-channel
bandwidth to each port independently. Typical switches allow
ports to be configured in either an arbitrated-loop or a
switched-mode fabric. In an arbitrated-loop configuration, the
ports are typically full bandwidth and bidirectional, allowing
devices and hosts to communicate at full fibre-channel speed
in both directions simultaneously. Switches are the basic
infrastructure used for large, point-to-point, switched fabrics.
In this mode, a switch allows any device to communicate
directly with any other device at full fibre-channel speed
(1 Gbps or 2 Gbps) (Server Clusters 2003).

SAN Design and Management Issues
The main benefit of SANs is that storage can be managed as a
centralized pool of resources. The data storage infrastructure
must be designed to provide highly available access so that the
loss of one or more components in the storage fabric does not
lead to servers being unable to access the application data.
This principle is implemented in the following ways (Server
Clusters 2003):

• There is no single point of failure of cables or components
  such as HBAs, switches, or storage controllers.
• Transparent and dynamic path detection and failover are
  implemented at the host. These features are supported by
  multipath drivers running on the host to present a single
  storage view to the application across multiple, independent
  HBAs.
• Almost all components (HBAs, switches, and controllers)
  have built-in hot-swap and hot-plug utilities for adopting
  interface cards, memory, CPUs, and disk drives.

Different SAN designs are based on the following topologies:
(1) multiple independent fabrics, (2) federated fabrics, and
(3) core backbone (Server Clusters 2003).
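The multipath behavior described above — several independent paths presented to the application as a single storage view, with transparent failover when a path is lost — can be sketched as follows. This is an illustrative sketch only, not an actual multipath driver API; the class, path names, and failure-marking helper are all hypothetical.

```python
# Illustrative sketch (hypothetical names): a multipath driver presents one
# storage view and fails over transparently when the active path is lost.
class MultipathDevice:
    def __init__(self, paths):
        self.paths = list(paths)          # e.g., one path per HBA/fabric
        self.healthy = set(self.paths)

    def mark_failed(self, path):
        self.healthy.discard(path)        # e.g., HBA, cable, or switch loss

    def read(self, block):
        for path in self.paths:           # try paths in priority order
            if path in self.healthy:
                return f"block {block} via {path}"
        raise IOError("all paths to the storage device have failed")

lun = MultipathDevice(["hba0/fabricA", "hba1/fabricB"])
print(lun.read(7))            # served via the first healthy path, hba0/fabricA
lun.mark_failed("hba0/fabricA")
print(lun.read(7))            # transparently fails over to hba1/fabricB
```

The application-facing call (`read`) never changes; only the path chosen underneath does, which is the point of the "single storage view" requirement.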

bid44609_ch147.indd 6 5/11/07 8:56:36 PM



In the case of a multiple fabric configuration, each host or
device is connected to multiple fabrics, but switches are not
linked. In the event of the failure of one fabric, hosts and
devices can communicate using the remaining fabric. In this
configuration, each fabric should have the same zoning and
security information to ensure a consistent view of the fabric
regardless of the chosen communication port. Also, hosts and
devices must have multiple adapters, which are treated as
different storage buses. Additional multipathing software
(e.g., EMC PowerPath™ or Compaq SecurePath™) is required
to ensure that the host gets a single view of the devices across
the two HBAs.
In the case of a federated fabric, individual hosts and devices
are connected to at least two switches, which are linked
together. This configuration has many management
advantages (e.g., only one set of zoning information and one
set of security information is managed; the fabric itself can
route around various link failures and switch failures, etc.).
However, a disadvantage of this approach is that management
errors can propagate to the entire fabric.
A core backbone configuration is frequently used (Server
Clusters 2003). The core of the fabric is built using highly
scalable, high-performance switches where the interswitch
connections provide high-performance communication (e.g.,
8 to 10 Gbps). Redundant edge switches can be cascaded from
the core infrastructure to provide high numbers of ports for
storage and host devices. As in the two previous configuration
cases, additional multipathing software is required to ensure
that the host gets a single view of the devices across the two
HBAs. Also, management errors can propagate to the entire
fabric.

SAN Operating System Software Components
The SAN software plays an important role in providing an
environment for various business and management
applications, called “system applications” (Barker and
Massiglia 2001, 13). These include clustering, data
replication, and data copying. The management applications
(zoning, device discovery, allocation, RAID subsystems, and
others) manage the complex environment of distributed
systems. These applications can significantly reduce the cost
and improve the quality of enterprise information services.
In particular, SAN management software provides the
following services (Anderson 2006):

• Configuration and optimization of individual components
  for the best setup
• Monitoring of the entire network for performance
  bottlenecks and areas of potential failure
• Automation of time-consuming tasks such as data backup
• Management of all available storage system components as
  one virtual pool
• Statistics overview

Microsoft Corp. has launched the Simple SAN Program
(Storage Management 2006) to ensure that customers
deploying Windows Server 2003–based SANs have the
appropriate tools for a simplified installation and
management experience.

SAN Security Issues
The specific security issue of technology containment
associated with a SAN also has to be addressed (Storage Area
Networks 2003). For example, Windows NT servers would
claim every available logical unit number (LUN) visible to
them. Technology containment keeps servers from gaining
unauthorized or accidental access to undesignated areas, such
as data access and fabric management security, within the
SAN.
The data access and security methodologies include: (1) fabric
zoning, which provides a fabric port-and-host/storage-level
point of logical partitioning and can help ensure that different
OS types or applications are partitioned on the SAN (Server
Clusters 2003); (2) LUN masking, which is configured at the
RAID storage subsystem level and helps ensure that only
designated hosts assigned to that single storage port could
access the specified RAID LUN; and (3) persistent binding,
which forces a host to see a specific storage-subsystem port as
a particular SCSI target and helps ensure that a specific
storage-subsystem port on the SAN is always seen as the same
SCSI target ID on the host, across the host and fabric, and
throughout storage configuration changes. The advantages
and disadvantages of these methods are described in Storage
Area Network Security (2003).
The zoning security mechanism and access control are
implemented at the switch level in the fabric. A port (either a
host adapter or a storage controller port) can be configured as
part of a zone. Only ports in a given zone can communicate
with other ports in that zone. At the same time, the zoning
mechanism also limits the traffic flow within a given SAN
environment (Server Clusters 2003).
The fabric protection and management security technologies
include: (1) fabric-to-fabric security technologies, which
allow access control lists (ACLs) to permit or deny the
addition of new switches to the fabric. The identity of the new
switch may be validated by public key infrastructure (PKI)
methods; (2) host-to-fabric security technologies, which apply
ACLs at the port level on the fabric, permit or deny a
particular host’s FC HBA to attach to that port, and prevent
an unauthorized intruder host from attaching to the fabric via
any port; (3) management-to-fabric technologies, which use
PKI and other encryption methods and ensure secure
management in console-to-fabric communication; and
(4) configuration integrity technologies, which ensure that
propagated fabric configuration changes only come from one
location at a time and are correctly propagated to all switches
on the SAN fabric with integrity. SAN security issues are
discussed in Storage Area Network Security (2003) in detail.

SAN TECHNOLOGIES AND SOLUTIONS
The SAN infrastructures support multiple protocols, such as
SCSI, simple network management protocol (SNMP), virtual
interface (VI), enterprise system connection/fiber connection
(ESCON/FICON), TCP/IP, and session security




authentication and initialization protocol (SSAIP), over a
single physical connection. This unique capability provides
the SAN system with the coupled functionality of an interface
to storage devices and a server interconnect.
In the early 1990s, fibre channel was developed by the Fibre
Channel Systems Initiative (FCSI) and adopted later by the
American National Standards Institute (ANSI) X3T11
Committee as a high-speed interface for connecting storage
devices to servers and other network configurations. These
interconnect standards (FC-PH 2001) provide SANs with the
vital properties of connectivity, bandwidth, interconnectivity,
protocol efficiency, distance range, recoverability, failure
tolerance, and cost options. The fibre-channel standards
specify electrical and optical transmission media as well as
conventions for signaling and transmission/functional
protocols. Optical media (with SC, LC, and mechanical
transfer registered jack [MT-RJ] connectors) support reliable
signaling over long distances. Fibre channel provides data
rates in the range from 133 Mbps to 4 Gbps over low-cost
copper cabling (shielded twisted-pair wire or coaxial cable
with serial communications d-shell connector, 9 pins [DB-9],
and high-speed serial data connector [HSSDC]) or
higher-cost multimode fiber-optic cable. Fibre-channel
fabrics have transceivers, called “gigabit interface converters”
(GBICs), which convert optical to electrical signals at cable
connectors. Fibre-channel technology supports distances up
to 10 km (see Table 2).
Optical and copper media are used in various SAN
fibre-channel infrastructure applications (FCIA Roadmap
2006), including Inter Switch Links, SAN Edge Links, Back
End Links, and Inter Device Links (see Table 3).

Fibre-channel Topologies
Fibre-channel methods have the means to implement three
topologies: (1) point-to-point links, (2) shared bandwidth
loop circuits known as fibre-channel arbitrated

Table 2: Maximum Distance for Fibre-channel Media Types

Data rate, MBps                      12.5     25       50      100     200     400
Nominal signaling rate, Gbaud        0.133    0.266    0.531   1.063   2.126   4.252
T11 specification completed, year    1994     1994     1994    1996    2000    2003
Market availability, year            1994     1994     1995    1997    2001    2005
9-μm single-mode fiber               –        –        –       10 km   10 km   10 km
50-μm multimode fiber                1.5 km   1.5 km   500 m   500 m   300 m   150 m
62.5-μm multimode fiber              1.5 km   1.5 km   350 m   300 m   150 m   70 m
Video coaxial cable                  100 m    100 m    71 m    50 m    –       –
Miniature coaxial cable              42 m     28 m     19 m    14 m    –       –
Shielded twisted pair                80 m     57 m     46 m    28 m    –       –
Twinax                               93 m     66 m     46 m    33 m    –       –

Sources: FC-PH (2001) and Stallings (2000).
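Table 2 lends itself to a simple media-selection check: given a required link distance, which cable types still qualify? The sketch below encodes only the 100-MBps column of Table 2; the dictionary and helper names are illustrative, not part of any standard or tool.

```python
# Illustrative sketch (not from the chapter): the 100-MBps column of Table 2,
# used to pick a cable type for a required link distance. Distances in meters.
MAX_DISTANCE_100MBPS_M = {
    "9-um single-mode fiber": 10_000,
    "50-um multimode fiber": 500,
    "62.5-um multimode fiber": 300,
    "video coaxial cable": 50,
    "miniature coaxial cable": 14,
    "shielded twisted pair": 28,
    "twinax": 33,
}

def media_options(required_m: float) -> list[str]:
    """Return media types whose Table 2 maximum covers the required distance."""
    return sorted(m for m, limit in MAX_DISTANCE_100MBPS_M.items()
                  if limit >= required_m)

print(media_options(400))   # only the two fiber types reach 400 m at 100 MBps
```

The same lookup could be extended with the other columns of Table 2 to cover the 200-MBps and 400-MBps rates, where the copper options drop out entirely.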

Table 3: Fibre-channel Infrastructure Applications

Market                         Length        Interswitch  SAN Edge  Back-End  Interdevice
                                             Link         Link      Link      Link
Metropolitan Area Network      5 km and up   +
[Fiber]
Multi Building (campus)        300 m–5 km    +
[Fiber]
Single Building (Local)        30 m–5 km     +
[Fiber]
Datacenter or Rack [Fiber]     0 m–100 m     +            +         +
Datacenter or Rack [Copper]    0 m–15 m                   +         +         +
Backplane [Copper]             0.6 m                                          +

Source: FCIA Roadmap (2006).




loops (FC-ALs), and (3) bandwidth-switched fibre-channel
fabrics (FC-SW) that provide SANs with the ability to do
bandwidth multiplexing by supporting simultaneous data
transmission between various pairs of devices. Any storage
device on the loop can be accessed through a fibre-channel
switch (FC-SW) or hub. The fibre-channel switch can support
entry-level (8 to 16 ports) to enterprise-level (64 to 128 ports)
systems. Under ANSI X3T11 standards, up to 126 storage
devices (nodes) can be linked in the fibre-channel arbitrated
loop (FC-AL) configuration, with the storage interface
bandwidth about 100 Mbps for transferring large files. More
than seventy companies, including industry-leading vendors
of disk arrays and computer and networking systems, support
the FC-AL voluntary standards. FC-AL topology is used
primarily to connect disk arrays and FC devices. Originally
developed as the high-speed serial technology of choice for
server-storage connectivity, the FC-AL method is extended to
the FC-SL standard that supports isochronous and
time-deterministic services, including methods of managing
loop operational parameters and quality-of-service (QoS)
definitions, as well as control. The FC-VI regulation
establishes a fibre channel-virtual interface architecture
(FC-VIA) mapping standard. See Chapter 148 for more
information about various implementations of the technology
in various network configurations, including SANs.
Because of the high cost of the FC interconnect components
and separation of storage and servers at the wide area
network scale (resulting in slow capabilities of WAN-SANs
with fibre channel), alternatives to FC technologies have been
developed. The ipStorage technology (Barker and Massiglia
2001, 187) uses TCP/IP as a storage interconnect. The
Internet Engineering Task Force (IETF) has proposed the
iSCSI (Internet SCSI) standards (Hufferd 2002) that address
the issues of long distances (WAN-scale), reducing the
interconnect cost, high security, and complex storage network
topologies. The iSCSI is layered on top of the TCP/IP protocol
hierarchy and can instantly access all modern transmission
media and topologies. TCP/IP and related protocols have
been implemented in the server-based systems that allow the
most general storage networks to be constructed with the
iSCSI method (iSCSI and IP-SAN Solutions 2006). The main
challenge is a reduction of the iSCSI processor overhead of
operating iSCSI packets below the fibre-channel overhead
level. See more details in Chapter 149.

InfiniBand Solutions
InfiniBand is a new emerging interconnect technology,
developed by the standards of the InfiniBand Trade
Association (founded by Compaq, Dell, Hewlett-Packard,
IBM, Intel, Microsoft, and Sun Microsystems), that offers the
most general low-cost server system topologies (InfiniBand
Trade Association 2006). It is expected that InfiniBand
interfaces will be embedded into all Intel-based servers
(Barker and Massiglia 2001, 188–92; Intel InfiniBand
Architecture 2006) and will allow Windows and Linux servers
to be available for resolving complex problems of data centers
by adopting clusters and multiple-host enterprise RAID
subsystems. InfiniBand technology implements a
switched-fabric architecture with the packet-switching
communication protocol (PSCP) that relates to the virtual
interface (VI) architecture method. SANs, parallel processing
systems, and systems area networks can effectively use
InfiniBand as a high-performance/low-latency interconnect.
See Chapter XXX.

Crossroads Systems with a Storage Router
InfiniBand technology has been successfully implemented by
Crossroads Systems, Inc., which promotes storage solutions
based on protocol-independent connectivity at speeds of
gigabit per second and unparalleled manageability for various
storage devices. Crossroads’ storage routers (e.g.,
Crossroads™ 10000) support peer operations between storage
devices and multiprotocol servers on fibre-channel storage
networks.

Brocade’s Configurations
Brocade Communication Systems, Inc., has developed an
intelligent fabric services architecture that creates a scalable
and secure environment for enterprise mission-critical
storage applications such as data backup and business
continuity. The Brocade SANs (SilkWorm™ family of fabric
switches and software) provide enterprises with
any-server-to-any-storage-device connectivity and
consolidate storage resources and servers, as well as sharing
backup resources (Beauchamp, Judd, and Kuo 2002).

OTHER STORAGE NETWORKING TECHNOLOGIES
The following emerging technologies introduce new system
architectural approaches in storage networking. SAN
developers and users are trying to adapt them to a new
enterprise environment that is characterized by host-level
heterogeneous complexity, management flexibility, new
TCP/IP network communication services,
file-access-protocol developments, and the repartitioning of
the functionality of file management systems.

VI (Virtual Interface) Architecture
The virtual interface (VI) architecture is a midlayer protocol
specification that regulates virtual intercommunication
between applications running on different remote servers
(i.e., in a cluster). This method significantly reduces the
latency and the volume of the system input/output (I/O)
operations by using message and data buffer pools that are
insensitive to the heterogeneous operating environment or
other applications. The reduction of the I/O-related
interrupts increases the CPUs’ time for processing various
other system tasks. Developers of the VIA technology
(Compaq, Intel, Microsoft, etc.) use this architecture as an
efficient way of message communication between the SAN
nodes at the application level, creating only a small overhead
of intercommunication between the remote applications.
This method has been successfully implemented in database
managers and NAS devices (Barker and Massiglia 2001,
189–90). Several efforts (i.e., the direct access file system and
network file system




initiatives) have been made to improve file system
performance by using VIA-type advanced transport protocol
features. The Emulex Corporation promotes the GN9000/VI™
1 Gbps VI/IP PCI host bus adapter, which is based on the
virtual interface (version 1.0) architecture, supports standard
TCP/IP-reliable data delivery, IP routing, and the direct access
file system standard, and speeds data access over standard
Gigabit Ethernet networks.

Direct Access File System
The direct access file system (DAFS) is a new file
access/transfer protocol that is based on CIFS/NFS
characteristics and VIA-type transport protocol features.
DAFS/VIA technology supports direct file transfer between
the storage system and clients. In a SAN environment, data
can be directly transferred among a number of servers.

IP Storage Technologies
Another block-mode data mechanism has been used by the
IETF IP Storage Working Group in developing standards for a
new IP-based transport-through-network technology that
encapsulates fibre-channel and SCSI high-speed interfaces
and provides direct access to data on disks, tapes, and optical
storage devices. IP storage technology allows embedding
low-cost SANs into IP-based enterprise infrastructures over
existing Gigabit Ethernet networks. See Chapter XXX.

SANs over IP
To avoid the distance limitation of the fibre-channel
interconnects, enterprises build remote SANs that can be
interconnected by means of the SAN-over-IP technology
originally developed by the Computer Network Technology
Corporation. The distant SANs appear as local storage
entities. This technology improves enterprise management
and data access, disaster recovery, business continuity, disk
mirroring, electronic tape vaulting, and wide area clustering.
The Storage Networking Industry Association (SNIA) offers
three technologies for integrating fibre-channel SANs into the
IP backbone. These methods include fibre channel over IP
(FCIP), the Internet fibre-channel protocol (iFCP), and
Internet SCSI (iSCSI). The FCIP, iFCP, and iSCSI transport
protocol descriptions are presented in Clark (2002).

Fibre channel over IP (FCIP)
FCIP is the simplest point-to-point IP tunneling solution for
intercommunicating remote SANs with fibre-channel fabrics.
The FCIP gateways establish TCP/IP connections over a WAN
path to transport the fibre-channel encapsulated frames. A
typical discrepancy in data communication rates between an
FCIP-attached WAN link and fibre-channel fabric generates
various flow control issues that can be resolved by TCP
sliding-window algorithms. Several FC-FCIP management
issues cannot be properly determined for the FCIP pipes
because the TCP/IP transport component ends at the external
nodes of the fibre-channel network. These problems have
been addressed and successfully resolved in the iFCP and
iSCSI approaches.

Internet Fibre-channel Protocol (iFCP)
The gateway-to-gateway iFCP supports a means of integrating
fibre-channel end devices into a single IP SAN. By using
iFCP, the fibre-channel fabric services can be provided to the
remote FC devices over a TCP/IP network. The iFCP IP
storage switches can directly connect fibre-channel storage
arrays, HBAs, hubs, switches, and routers. The iFCP is a
protocol stack that can be implemented in an IP storage
controller interface or integrated into a Gigabit Ethernet IP
storage network interface card (NIC) (known as ANSI X3T10
and X3T11 Standards) (Clark 2002, 126–39). It supports
any-to-any IP routing of storage data. A mismatch in data
communication rates between an iFCP-attached WAN link
and fibre-channel fabric generates various flow-control issues
that can be resolved by TCP sliding-window algorithms. The
Internet protocol security (IPSec), public or private keys, and
zoning methods can provide security across the Internet. One
of the important applications of the iFCP technology is the
support of multiple TCP/IP connections for concurrent
storage transactions.

Internet SCSI (iSCSI)
In contrast to the FCIP concept, the iSCSI method, which
follows the SCSI client/server model, is based on the
implementation of a light-switch technology in IP storage
networking (Clark 2002, 139–49) and excludes fibre-channel
elements. The iSCSI servers (targets) are present in disk
arrays, and client nodes (initiators) occupy host platforms.
The iSCSI protocol over the TCP/IP layer is used for block
data transport between these entities over the IP network.
Data can be directly written into application memory through
a steering and data synchronization layer located below the
iSCSI sublayer. IPSec, Kerberos authentication, public key,
and other methods can provide security across the Internet.
SANs use the iSCSI adapters with a TCP/IP offload engine
(TOE) to minimize processing overhead and realize
high-performance features of the iSCSI technology. The
enterprise solutions with IP SANs can also support Gigabit
and faster Ethernet on iSCSI-switch infrastructures (IP SAN
Appliances 2006).

Storage over IP (SoIP)
Based on the SoIP remote storage technology, the Nishan
Systems Corporation developed IP Storage switches of the
IPS 4000 Series™ and a suite of storage management
software that allows configuration and monitoring of
large-scale storage networks.

Fabric Shortest Path First (FSPF)
FSPF is the open shortest path first (OSPF)-based standard
routing protocol for fibre channel that determines the next
shortest route for data traffic, updates the routing table, and
detects the failed routes (Vacca 2002, 152). Optical, link, or
switch failures can be effectively handled by FSPF with
minimal impact on the interconnected devices in the FC/SAN
environment.
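The route selection that FSPF performs can be illustrated with a Dijkstra-style shortest-path computation over the graph of switches, recomputed when a link fails. This is a sketch of the general idea only, not the FSPF wire protocol: the switch names and link costs are hypothetical, and real FSPF floods link-state records and derives costs from link speed.

```python
import heapq

# Illustrative sketch of FSPF-style routing: Dijkstra's shortest-path search
# over a fabric of switches. Names and costs are hypothetical.
def shortest_path(links, src, dst):
    """links: {switch: {neighbor: cost}}; returns (cost, path) or (inf, [])."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in links.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

fabric = {"sw1": {"sw2": 1, "sw3": 4},
          "sw2": {"sw1": 1, "sw3": 1},
          "sw3": {"sw1": 4, "sw2": 1}}
print(shortest_path(fabric, "sw1", "sw3"))   # (2, ['sw1', 'sw2', 'sw3'])

# A link failure is modeled by removing the edge and recomputing the route,
# which is the "minimal impact" behavior the paragraph above describes.
del fabric["sw2"]["sw3"]; del fabric["sw3"]["sw2"]
print(shortest_path(fabric, "sw1", "sw3"))   # (4, ['sw1', 'sw3'])
```

In a real fabric the recomputation happens on every switch after the failed link's state update is flooded, so all routing tables converge on the new path.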




Adaptive Network Storage Architecture (ANSA)
The Procom Technology Corporation has developed the
ANSA approach, which delivers both block-level and
file-level access to data. Procom’s NetFORCE 3000™ Series
provides filer functionality (together with advanced features
of security, high stability, backup, and recovery) to enterprise
storage that can result in high-performance
information-management systems. The ANSA technology has
been successfully applied to database management, data
warehousing, e-mail delivery, and 24/7 rich-media
applications.

Storage Resource Management (SRM)
The SRM technology provides applications for managing
logical and physical storage-system resources (virtual
devices, disk volumes, file resources, storage devices and
elements, and appliances). SRM tools allow storage-system
administrators to configure and monitor SANs and other
storage resources. During the administrative monitoring, the
transport or storage data remain unchanged. Vendors of the
SRM tools, products, and services include Sun Microsystems
(Sun StorEdge™), HighGround Systems, Inc. (Storage
Resource Manager, Media Mirror), and Storage Computer
Corp. (Storage Administrator) (Toigo 2001).

STANDARDS
American National Standards Institute (ANSI)
ANSI coordinates SAN voluntary standards (ANSI 2006). The
ANSI X3T10 and X3T11 working committees are associated
with storage networking issues, including SCSI I/O interface
standards (X3T10) and fibre-channel interface standards
(X3T11). The first set of fibre-channel standards (ANSI
X.3230–1994) (Vacca 2002, 75–8) describes standards for a
switch fabric (FC-SW2), the interconnect that supports high
volumes of throughput and bandwidth for disk output and
input, as well as a management information base (MIB)
management standard that permits fibre-channel devices
(switches) to be managed by any vendor’s software, which
includes an implementation of the simple network
management protocol (SNMP). SAN users can find a brief
description of other X3T11 fibre-channel standards in Barker
and Massiglia (2001, 384–6). Several FC SAN equipment
vendors, including Brocade Communications Systems, have
refined SAN standards in the areas of management,
discovery, data transport, and WAN connectivity. This allows
the fibre-channel SAN to become an integral part of the
enterprise framework (Vacca 2002, 78–9).

Distributed Management Task Force (DMTF)
The DMTF has introduced management standards for
computer systems and the enterprise environment (DMTF
2006). The SAN-related management standards (Barker and
Massiglia 2001, 386–7) cover a set of the Web-based
enterprise management (WBEM) XML-based technologies.
They support an object-oriented approach in developing a
business’s management environment, using the Common
Information Model; architectures and frameworks for
desktop, laptop, and server management (desktop
management interface); standard data models for a network,
its elements, policies, and rules (directory-enabled networks);
and functional calls for transaction monitoring (known as the
application response measurement [ARM] standard).

Storage Systems Standards Working Group (SSSWG)
As a division of the Institute of Electrical and Electronics
Engineers (IEEE), the SSSWG develops models and
architectures of storage systems, including SANs (SSSWG
2006). The SSSWG project authorization requests (Barker
and Massiglia 2001, 387–8) include the “Guide to Storage
System Design”; media management system (MMS)
architecture; session security, authentication, initialization
protocol (SSAIP) of the MMS; media management protocol
(MMP) for both client and administrative applications; drive
management protocol (DMP) of the MMS; library
management protocol (LMP) of the MMS; the media manager
interchange protocol (MMIP) for information exchange
between autonomous media managers; the media manager
control interface protocol (MMCIP); the C language
procedural interface for implementation of the MMS’s
components; MMS user mount commands for establishing
“command line interfaces” (CLI); MMS standard
administrative and operational commands for administering
and operating an MMS; and “MOVER” specifications of a
storage system data mover architecture and its interfaces.

Internet Engineering Task Force (IETF)
The IETF defines a variety of the transmission control
protocol/Internet protocol (TCP/IP) standards that are widely
used in the enterprise environment with SANs (IETF 2006).
The IETF standards related to storage networking (Barker
and Massiglia 2001, 388) include the simple network
management protocol (SNMP) for managing and monitoring
devices and systems in a network; the Internet protocol over
fibre channel (IPoFC); and a policy for quality of services
(QoS).

STORAGE NETWORKING ASSOCIATIONS,
INITIATIVES, FORUMS, AND COALITIONS
The organizations discussed below promote storage
networking technologies and products, develop standards,
undertake marketing activities in the information technology
industry, educate, train, and create the knowledge base for
implementing SAN technology.

SNIA (Storage Networking Industry Association)
The SNIA, an international association of developers of
storage and networking products, is focusing on the creation
of a forum of IT companies, system integrators,




and application vendors for delivering architectures,
education, and services in storage networking, as well as
defining the specifications and infrastructures, and proposing
standards for storage networking systems, including SANs,
SAN-attached storage (SAS), and network-attached storage
(NAS) (SNIA 2006).

Fibre Channel Industry Association (FCIA)
The FCIA is an international organization of manufacturers,
systems integrators, developers, systems vendors, industry
professionals, and end users. In June 2002, this organization
included more than 190 members and affiliates in the United
States, Europe, and Japan. The FCIA is committed to
delivering a broad base of fibre-channel infrastructure to
support a wide array of industry applications within the mass
storage and IT-based arenas. FCIA working groups focus on
specific aspects of the technology that target markets, which
include data storage networking and SAN management. The
overview of fibre channel’s SAN and networking applications
and examples of fibre-channel solutions for
high-performance networks of heterogeneous storage, server,
and workstation resources can be found in the “technology”
section of the FCIA Web site (FCIA 2006).

Fibre Alliance (FA)
The FA is the networking industry consortium originally
founded by a group of storage networking companies,
including EMC Corporation, to develop and implement
standards for managing heterogeneous fibre-channel-based
SANs. In collaboration with the IETF, this group develops the
definition of simple network management protocol
management information bases (SNMP MIB) for storage
network and device management (Fibre Alliance 2006).

Jiro
Jiro is a Sun Microsystems technology that delivers
intelligent management services for networked devices. Using
the principles of Java and Jini platform-independent
application development interfaces (Jini 2006), Jiro
technology provides the architecture for connecting and
managing complex distributed environments such as storage
area networks. The Jiro technology brings higher levels of
interoperability, adaptability, and manageability to enterprise
networks with storage resources (Jiro 2002).

National Storage Industry Consortium (NSIC)
Since April 1991, the NSIC has consolidated the efforts of
over fifty corporations, universities, and national labs in the
field of digital information storage. The corporate members
are major information storage manufacturers and companies
from the storage industry infrastructure, including SANs. As
a nonprofit organization, NSIC supports precompetitive joint
research projects, involving collaboration among users and
integrators of storage systems, storage system and device
manufacturers, storage component and media
manufacturers, suppliers, universities, and national
laboratories (National Storage Industry Consortium 2006).

THE SAN MARKET, VENDORS,
AND SERVICE PROVIDERS
Data storage management has become the primary issue for
businesses in the past ten years. According to the
International Data Corporation (IDC), the redundant array of
independent disks (RAID) capacity shipped in 2003 exceeded
250 × 10^15 bytes (Storage Area Networks 2003). Data-rich
businesses continue to seek effective data storage and
management solutions to handle this data growth. SAN
technologies allow existing enterprises to effectively manage
more transactions, customers, suppliers, and services.
Company operations are significantly improved by providing
continuous high availability through uninterruptible access
to data, increasing scalability through multiple-channel data
transmission, and reducing the network and server’s CPU
overhead. Additional opportunities for the IT enterprises are
also associated with the Internet, which allows them to
increase the volume of data and rates of their transmission.

Evolution of the SAN Market
According to a Gartner research study (Couture et al. 2006),
the worldwide storage service market will grow from more
than $23 billion in 2004 to more than $31 billion in 2009.
The study conducted by the market research firm iSuppli
Corporation (Gardner 2004) shows that the SAN enterprise
market has been growing at an annual rate of 30% and
promises to maintain that pace over the next five years.
Among others, two growth factors will dominate: rapidly
dropping prices on SAN solutions and better record-keeping
practices mandated by federal legislation. By 2004,
practically all large enterprises had installed SANs.
Nowadays, SANs are being deployed by a growing number of
midsized and smaller companies, including about 10% of
enterprises with revenues of between $100 million and $300
million (Gardner 2004). Microsoft and other software
providers have long viewed this market as a major growth
opportunity, particularly for regulatory-compliant software
packages that help protect the confidentiality of patient
records under the Health Insurance Portability and
Accountability Act (HIPAA), which was introduced in April
2005 and requires the establishment of an IT infrastructure.
Another factor in the future growth of SAN installations is the
Sarbanes-Oxley Act, which regulates how data are managed,
archived, retrieved, and authenticated.
According to the Jupiter Media Metrix Corporation (Market
Forecast Report 2002), the growth in Internet commerce is
attributed to the increase in the U.S. online population by
nearly 50%, from 141.5 million in 2001 to 210.8 million by
2006, as well as to the growth of U.S. online retail sales from
$47.8 billion in 2002 to an estimated $130.3 billion in 2006.
SANs have revolutionized the IT enterprise’s infrastructure
and improved its e-business applications, including
e-commerce, e-mail, online transaction processing, data
replication, and enterprise database management. Global
continuous delivery of multimedia secured information has
become the main service of modern e-business enterprises.
By adding networking and intelligence features to data
storage, fibre-channel SAN switches enable the

solution of several challenging e-business storage problems, such as linking high-performance workstation clusters, connecting high-performance tape storage on disk farms, giving server farms a high-speed data-transmission pipe, clustering disk farms, and linking Ethernet, fiber distributed data interface (FDDI), asynchronous transfer mode (ATM), and token ring LANs to the backbone. Intelligent SAN systems significantly improve enterprise performance, decreasing latency, supporting direct access to storage shared by multiple servers, reducing network traffic on the front-end network, and removing storage management tasks from servers.

A new SAN market is open for small to medium businesses (SMBs) that need flexible and scalable storage systems to ensure that critical data can be accessed quickly, securely, and cost effectively (Storage Management 2006).

According to New York–based market research firm Access Markets International (AMI) Partners Inc., SMBs in the United States spent $443 million on SANs in 2003, and this figure is expected to grow to $1.5 billion by 2008. Microsoft Corporation, together with business partners, has developed the Simple SAN Program that supports the iSCSI protocol, which allows SANs to be accessed over IP-based networks. New storage management capabilities have been built into Windows Server 2003 (Storage Management 2006).

SAN Vendors and Service Providers
Table 4 shows a list of SAN vendors, storage-networking service providers, and their products. The complete list of SAN deployment companies can be found in (Vacca 2002,

Table 4: SAN Vendors and Services

Company | Product Type | Source
ADVA Optical Storage Networking Technologies | Fiber Service Platform (FSP); DWDM system | (ADVA 2006)
ATTO Technology, Inc. | Fibre channel hub; Fibre channel, SCSI, and IP storage solutions | (ATTO 2006)
Broadcom Corporation | SAN system core logic input/output chips | (Broadcom 2006)
Brocade Communication Systems, Inc. | Fabric switches; SAN solutions | (Brocade 2006)
Crossroads Systems, Inc. | Modular storage router | (Crossroads 2006)
Cutting Edge | Clustered failover; IP SAN solutions | (Cutting Edge 2006)
Dell Computer | Enterprise storage solutions; RAIDs | (Dell 2006)
EMC Corporation | Networked storage solutions | (EMC 2006)
Emulex Corporation | VI/IP PCI host bus adapters; SAN storage switches | (Emulex 2006)
FalconStor Software, Inc. | Storage software; SAN infrastructure | (FalconStor 2006)
Fujitsu Ltd. | RAIDs | (Fujitsu 2006)
Hewlett-Packard | SAN management tools; storage systems | (Hewlett-Packard 2006)
Hitachi Data Systems/GST | SAN solutions | (Hitachi 2006)
IBM | Enterprise storage server; RAIDs; tape libraries; storage software | (IBM 2006)
Intel Corporation | iSCSI HBAs; GBICs; RAID controllers | (Intel Corporation 2006)
LSI Logic Storage Systems, Inc. | SAN solutions; RAID adapters | (LSI 2006)
McDATA | Enterprise FC management tools | (McDATA 2006)
Media Integration | Fibre channel SAN; iSCSI backup | (Media Integration 2006)
Microsoft Corporation | SAN management and backup software | (Microsoft 2006)
NEC Corporation | SAN support products | (NEC Corporation 2006)
Nishan Systems | IP storage switches | (Nishan Systems 2006)
QLogic Corp. | SAN management software; HBAs | (QLogic 2006)
SANRAD | iSCSI-SAN and IP-SAN solutions | (SANRAD 2006)
Sun Microsystems | Storage systems; RAIDs; disk backup | (Sun Microsystems 2006)
Unisys | SAN solutions | (Unisys 2006)
Xyratex | RAID systems | (Xyratex 2006)
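The AMI Partners projection quoted above ($443 million in 2003 growing to $1.5 billion by 2008) implies a steep yearly growth rate for the SMB segment. As a quick back-of-the-envelope check (an illustrative sketch added here, not a calculation from the source):

```python
# Implied compound annual growth rate (CAGR) of U.S. SMB SAN spending,
# using the AMI Partners figures cited in the text: $443M (2003) -> $1.5B (2008).

def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Return the annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

spend_2003 = 443e6  # $443 million spent in 2003
spend_2008 = 1.5e9  # $1.5 billion forecast for 2008

cagr = implied_cagr(spend_2003, spend_2008, years=5)
print(f"Implied CAGR: {cagr:.1%}")  # roughly 28% per year
```

In other words, the forecast amounts to SMB spending on SANs growing by more than a quarter every year over the five-year span.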

[Figure 2: Shares of leading companies in the SAN market (Gardner 2004). Pie chart: EMC 26%; Hewlett-Packard 25%; IBM 18%; Hitachi 8%; LSI Storage 7%; Sun Microsystems 7%; Dell 6%; Others 3%.]
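The vendor shares from the iSuppli survey (Gardner 2004) can be tabulated directly; a small illustrative check (added here, not from the source) confirms that the "Others" slice in Figure 2 is simply the remainder of the market:

```python
# Market shares (percent) of the seven leading SAN providers
# reported by the iSuppli survey (Gardner 2004).
shares = {
    "EMC": 26,
    "Hewlett-Packard": 25,
    "IBM": 18,
    "Hitachi": 8,
    "LSI Storage": 7,
    "Sun Microsystems": 7,
    "Dell": 6,
}

top_seven = sum(shares.values())  # combined share of the seven leaders
others = 100 - top_seven          # remainder labeled "Others" in Figure 2
print(f"Top seven: {top_seven}%, Others: {others}%")  # Top seven: 97%, Others: 3%
```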

495–7) and on the Network Buyers Guide Web site (Network Buyers Guide 2006).
The iSuppli survey (Gardner 2004) identifies seven leading providers of SAN solutions: EMC (26% of all shares), Hewlett-Packard (25%), IBM (18%), Hitachi (8%), LSI Storage (7%), Sun Microsystems (7%), and Dell (6%) (see Figure 2). Additional information, based on revenue analyses, about the ten largest storage companies for the period from 1999 to 2005 can be found in (Kerekes 2005).

CONCLUSION
SANs, networked high-speed infrastructures, enable e-business enterprises to significantly improve their 24/7 continuous scalable services. They have become a critical part of the enterprise network infrastructure. The technologies and effective SAN solutions considered above allow companies to shift their focus from numerous IT infrastructure problems to the successful performance of their businesses and services.

GLOSSARY
CIFS: Common Internet file system, also known as the Microsoft server message block protocol; a network file system access protocol that is primarily used by Windows clients to communicate file access requests to Windows servers.
CIM: Common information model; an object-oriented description of entities and relationships in the enterprise management environment.
DWDM: Dense wavelength division multiplexing; a method that allows more wavelengths to use the same fiber.
FC-AL: Fibre channel-arbitrated loop transport protocol.
FCSW: Fibre-channel switch.
FC-VIA: Fibre channel–virtual interface architecture.
FSPF: Fabric shortest path first; a routing protocol used by fibre-channel switches.
IPoFC: Internet protocol over fibre channel.
iSCSI: Internet small computer systems interface.
JBOD: Just a bunch of disks; a term for a collection of disks configured as an arbitrated loop segment in a single chassis.
LC Connector: A small form-factor fiber-optic connector, which resembles a small SC connector. Lucent Technologies developed the LC connector for use in telecommunication environments. The LC connector has been standardized as FOCIS 10 (Fiber Optic Connector Intermateability Standards) in EIA/TIA-604-10.
NAS: Network-attached storage.
RAID: Redundant arrays of inexpensive disks; a technology for managing multiple disks.
SAN: Storage area network.
SAS: SAN-attached storage.
SC Connector: A fiber-optic connector with a push-pull latching mechanism that provides for quick insertion and removal while also ensuring a positive connection. The SC connector has been standardized as FOCIS 3 (Fiber Optic Connector Intermateability Standards) in EIA/TIA-604-03.
SoIP: Storage over IP; a storage technology developed by the Nishan Systems Corporation.
SSA: Serial storage architecture.
VI: Virtual interface architecture; a midlayer protocol specification.

REFERENCES
ADVA optical storage networking technologies. 2006. www.san.com (accessed April 4, 2006).
American National Standards Institute (ANSI). 2006. www.ansi.org (accessed April 4, 2006).
Anderson, N. 2006. Build your company's first SAN. White paper. Hewlett-Packard, Inc. www.hp.com (accessed April 4, 2006).


ATTO. 2006. SAN solutions. www.attotech.com/sans.html (accessed April 4, 2006).
Barker, R., and Massiglia, P. 2001. Storage networking essentials: A complete guide to understanding and implementing SANs. New York: Wiley.
Beauchamp, C., Judd, J., and Kuo, B. 2002. Building SANs with Brocade fibre channel fabric switches. Rockland, MA: Syngress.
Broadcom Corporation. 2006. www.broadcom.com (accessed April 4, 2006).
Brocade Communication Systems, Inc. 2006. (see the SAN Info Center) www.brocade.com (accessed April 4, 2006).
Business Brief. 2006. SAN next generation—moving to 4 Gbps. White paper. Hewlett-Packard, Inc. www.hp.com (accessed April 4, 2006).
Clark, T. 2003. Designing storage area networks: A practical reference for implementing fibre channel and IP SANs. Boston: Addison–Wesley.
Clark, T. 2002. IP SANs: An introduction to iSCSI, iFCP, and FCIP protocols for storage area networks. Boston: Addison–Wesley.
Couture, A., A. MacNeela, Y. Adachi, M. Caminos, and R. DeSouza. 2006 March 16. Forecast: IT Storage Services, Worldwide, 2003–2009. Stamford, CT: Gartner.
Crossroads Systems, Inc. 2006. www.crossroads.com/about (accessed April 4, 2006).
Cutting Edge, Inc. 2006. www.cuttedge.com (accessed April 4, 2006).
Dell, Inc. 2006. www.dell.com (accessed April 4, 2006).
Distributed Management Task Force (DMTF). 2006. www.dmtf.org (accessed April 4, 2006).
EMC Corporation. 2006. www.emc.com (accessed April 4, 2006).
Emulex Corporation. 2006. www.emulex.com (accessed April 4, 2006).
FalconStor Software, Inc. 2006. www.falconstor.com (accessed April 4, 2006).
Farley, M. 2001. Building storage networks, 2nd ed. New York: Osborne/McGraw–Hill.
FC-PH. 2001. Fibre channel physical interface, revision 13, December 9, 2001. www.t11.org/index.htm (accessed April 4, 2006).
FCIA (Fibre channel Industry Association). 2006. www.fibrechannel.org (accessed April 4, 2006).
FCIA Roadmap. 2006. Fibre channel infrastructure application matrix, FCIA. www.fibrechannel.org/OVERVIEW/Roadmap.html (accessed April 4, 2006).
Fibre Alliance. 2006. www.fibrealliance.org (accessed April 4, 2006).
Fujitsu Ltd. 2006. RAIDs. www.fujitsu.com (accessed April 4, 2006).
Gardner, W. D. 2004 January 6. Falling prices, regulations drive SAN market. www.networkingpipeline.com (accessed April 4, 2006).
Hammond-Doel, T. 2001. 2 Gb/s fibre channel SANs. Vixel Corporation. www.vixel.com/9000_docs/9000_wp.pdf (accessed March 24, 2003).
Hewlett-Packard, Inc. 2006. www.hp.com (accessed April 4, 2006).
Hitachi Data Systems. 2006. SAN Solutions. www.hds.com (accessed April 4, 2006).
Hufferd, J. L. 2002. iSCSI: The universal storage connection. Boston: Addison–Wesley.
IBM Enterprise SAN Solutions. 2006. www.storage.ibm.com/ibmsan (accessed April 4, 2006).
ILogic Corporation. 2006. www.ilogic.com (accessed April 4, 2006).
InfiniBand Trade Association. 2006. www.infinibandta.org/home (accessed April 4, 2006).
InfraStor Technology Corp. 2001. SAN vs NAS. www.infrastor.com/tech/sanvsnas.htm (accessed March 24, 2003).
InfraStor Technology Corp. 2002. Introduction to SAN. www.infrastor.com/tech/SANTechIntro.htm (accessed April 4, 2006).
Intel Corporation. 2006. SAN systems. www.intel.com (accessed April 4, 2006).
Intel InfiniBand Architecture. 2006. www.intel.com/technology/infiniband (accessed April 4, 2006).
Internet Engineering Task Force (IETF). 2006. www.ietf.org (accessed April 4, 2006).
IP SAN Appliances. 2006. Celeros, Inc. www.celeros.com (accessed April 4, 2006).
IP Storage Area Networks. 2006. RADirect Inc. www.raddirect.com/camp_ipsan_vswitch3000.html (accessed April 4, 2006).
iSCSI and IP-SAN Solutions. 2006. SANRAD Inc. www.sanrad.com (accessed April 4, 2006).
Jini. 2006. www.jini.org (accessed April 4, 2006).
Jiro. 2002. www.sun.com/jiro (accessed September 12, 2002).
Kerekes, Z. 2005 May 26. The top 10 biggest storage companies in 2008. www.storagesearch.com/squeak-5.html (accessed April 4, 2006).
LSI Logic Storage Systems, Inc. 2006. www.lsilogicstorage.com (accessed April 4, 2006).
Lyon, C., and Sturgeon, S. 2006. FCIA showcases "fast and flexible" 4GFC products at SNW. www.fibrechannel.org/NEWS/fcia051018.html (accessed April 4, 2006).
Market Forecast Report. 2002. E-commerce market size and trends, 6. www.goecart.com/ecommerce_solutions_facts.asp (accessed April 23, 2006).
McDATA, Inc. 2006. www.mcdata.com (accessed April 4, 2006).
Media Integration, Inc. 2006. www.mediainc.com (accessed April 4, 2006).
National Storage Industry Consortium. 2006. www.nsic.org (accessed April 4, 2006).
NEC Corporation. 2006. SAN support products. www.nec.com (accessed April 4, 2006).
Network Buyers Guide. 2006. http://networkbuyersguide.com (accessed April 4, 2006).
Nishan Systems. 2006. www.nishansystems.com (accessed April 4, 2006).
Ottem, E. 2001. Third generation SANs: Open 2Gb fabric. White Paper WP-2G0801. Gadzoox Networks, Inc. www.infrastor.com/downloads/2Gb_whitepaper.pdf (accessed April 4, 2006).
Preston, W. C. 2002. Using SANs and NAS. Sebastopol, CA: O'Reilly.
Riabov, V. V. 2004. Storage area networks. In The Internet encyclopedia, Vol. 3, 329–39, edited by H. Bidgoli. New York: Wiley.


Sachdev, P., and Arunkundram, R. S. 2002. Using storage area networks. Special edition. Indianapolis: Que.
SANRAD. 2006. iSCSI-SAN and IP-SAN solutions. www.sanrad.com (accessed April 4, 2006).
SAN School. 2006. Search Storage Corp. www.searchstorage.com/SANSchool (accessed April 4, 2006).
Server Clusters: Storage Area Networks, Windows 2000 and Windows Server 2003. 2003. White paper. Microsoft Corporation. www.microsoft.com (accessed April 4, 2006).
Sheldon, T. 2001. McGraw–Hill encyclopedia of networking and telecommunications. New York: McGraw–Hill.
Simitci, H. 2003. Storage network performance analysis. New York: Wiley.
SNIA (Storage Networking Industry Association). 2006. www.snia.org (accessed April 4, 2006).
SSSWG (Storage Systems Standards Work Group). 2006. www.ssswg.org (accessed April 4, 2006).
Stallings, W. 2000. Local and metropolitan area networks. 6th ed. Upper Saddle River, NJ: Prentice Hall.
Storage Area Networks and Return on Investment. 2006. White paper. Datalink Corporation. www.datalink.com/whitepaper.cfm (accessed April 4, 2006).
Storage Area Network Security. 2003. Data access and fabric management. White paper. Datalink Corporation. www.datalink.com/whitepaper.cfm (accessed April 4, 2006).
Storage Management. 2006. Making SANs simple. In Windows storage solutions. SearchStorage. www.SearchStorage.com/MicrosoftStorage (accessed April 4, 2006).
Sun Microsystems. 2006. Data storage solutions. www.sun.com/storage (accessed April 4, 2006).
Technical Committee T11. 2006. Fibre channel interfaces. www.t11.org/index.htm (accessed April 4, 2006).
Thornburgh, R. H., and Schoenborn, B. J. 2001. Storage area networks: Designing and implementing a mass storage system. Upper Saddle River, NJ: Prentice Hall.
Toigo, J. W. 2001. The Holy Grail of data storage management. Upper Saddle River, NJ: Prentice Hall.
Troppens, U., Erkens, R., and Müller, W. 2004. Storage networks explained: Basics and application of fibre channel SAN, NAS, iSCSI and InfiniBand. New York: Wiley.
Unisys. 2006. SAN solutions. www.unisys.com/index.htm (accessed April 4, 2006).
Vacca, J. 2002. The essential guide to storage area networks. Upper Saddle River, NJ: Prentice Hall.
Xyratex. 2006. 4Gb RAID systems. www.xyratex.com (accessed April 4, 2006).
