
RAID 0: This type features data striping, which spreads parts of a file across multiple
drives. This increases performance, but if one drive fails, the data in the array is lost.

RAID 1: This type is used for data mirroring, in which data is written to two drives
simultaneously. This ensures that all data is duplicated on both drives; if one drive
fails, the other still holds a complete copy. Mirroring can also improve read performance.

RAID 4: This type is similar to RAID 0, except that a dedicated parity drive is added. If
a disk fails, the data from that drive can be rebuilt from parity onto a replacement disk.
However, the rebuild process can cause problems, such as performance slowdowns.

RAID 5: This is perhaps the most popular type of RAID array. This type features the
striping of RAID 0 as well as error correction through parity, resulting in a combination
of excellent performance and fault tolerance.

The use of RAID in personal computers is slowly on the rise. Previously, the higher cost of
RAID-compatible hard drives made them unattractive to the general public. RAID is used
extensively in high-end computers and in business computing environments; it is
slowly finding ground in the home as prices continue to decrease.

The combination of high performance and data protection makes RAID difficult to pass
up, especially because more and more people are depending on their computer's
hard drives to keep important data secure.

----------------------------------------------------------------------------------------------
Disk arrays are storage systems that link multiple physical hard drives into one large
drive for advanced data control and security. Disk arrays have several advantages over
traditional single-disk systems.

A hard disk, while being the vital center of any computer system, is also its weakest link.
It is the only critical device of a computer system that is not purely electronic; it relies
on intricate moving mechanical parts that often fail. When this happens, data is
irretrievable, and unless a backup system has been employed, the user is out of luck. This
is where disk arrays make a difference.

Disk arrays incorporate controls and a structure that pre-empts disaster. The most
common disk array technology is RAID (Redundant Array of Independent Disks). RAID
utilizes disk arrays in a number of optional configurations that benefit the user.

One advantage of RAID disk arrays is redundancy of data writes, so that if a file is
damaged or stored in a bad cluster or disk, it can be instantly and transparently replaced
from another disk in the array. RAID also allows hot-swapping of bad disks and
increased flexibility in scalable storage. Performance is also enhanced through a process
called "striping."
There are many varieties of RAID, and though designed primarily for servers, disk arrays
have become increasingly popular among individuals because of their many benefits.
RAID is particularly suited for gamers and multimedia applications.

RAID controllers, often built into motherboards, must set parameters for interacting with
disk arrays. The controller sets the performance parameter to match the slowest disk; if it
used the fastest disk as the benchmark, data would be lost when written to disks that
cannot support that speed. For this reason, all disks in the array should be the same brand,
speed, size and model for optimal performance. A mix of capacities, speeds and types of
disks will negatively impact performance. The best drives for disk arrays are SATA
(Serial ATA) RAID drives. These drives are optimized for RAID use and, being SATA,
are hot-swappable.

Using disk arrays can provide peace of mind while improving data security and
performance. Motherboards with built-in RAID controllers support certain types of
RAID. For example, an older or inexpensive motherboard might only support RAID 0
and RAID 1, while a newer or more expensive board might support RAID 1 through
RAID 5. Be sure to get a motherboard or third party RAID controller that supports the
RAID configuration you require for your disk array.

RAID levels

RAID stands for Redundant Array of Inexpensive Disks. A RAID system consists
of two or more disks working in parallel. They appear as one drive to the user,
and offer enhanced performance or security (or both).

The software to perform the RAID-functionality and control the hard disks can
either be located on a separate controller card (a hardware RAID controller) or it
can simply be a driver. Both Windows NT 4 and 2000 include a software RAID
solution. Hardware RAID controllers cost more than pure software but they also
offer better performance.

Most RAID systems are based on SCSI, although implementations using IDE
disks or FC (Fibre Channel) disks also exist. There are even systems that use IDE
disks internally but present a SCSI interface to the host system.

There are different RAID levels, each suiting specific situations. RAID levels are
not standardized by an industry group. This explains why companies are
sometimes creative and come up with their own unique implementations.

Sometimes disks in a RAID system are defined as JBOD, which stands for 'just a
bunch of disks'. This means that those disks do not use a specific RAID level and
are used as if they were stand-alone disks. This is often done for disks that
contain swap files or spooling data.

RAID 0: striping

In a RAID 0 system, data are split up into blocks that get written across all the
drives in the array. By using multiple disks (at least 2) at the same time, RAID 0
offers superior I/O performance. This performance can be enhanced further by
using multiple controllers, ideally one controller per disk.
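As an illustration, the block-interleaving described above can be sketched in a few lines of Python. The block size and disk count below are arbitrary example values, not figures mandated by RAID 0, and the "disks" are just in-memory byte buffers:

```python
# A minimal sketch of RAID 0 block striping over in-memory "disks".
# BLOCK_SIZE and NUM_DISKS are illustrative choices only.
BLOCK_SIZE = 4
NUM_DISKS = 3

def stripe(data: bytes, num_disks: int = NUM_DISKS, block_size: int = BLOCK_SIZE):
    """Split data into blocks and deal them round-robin across the disks."""
    disks = [bytearray() for _ in range(num_disks)]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for n, block in enumerate(blocks):
        disks[n % num_disks] += block   # block n lands on disk n mod num_disks
    return disks

def unstripe(disks, block_size: int = BLOCK_SIZE) -> bytes:
    """Reassemble the original byte stream by reading the disks in rotation."""
    out = bytearray()
    offsets = [0] * len(disks)
    disk = 0
    while any(offsets[d] < len(disks[d]) for d in range(len(disks))):
        out += disks[disk][offsets[disk]:offsets[disk] + block_size]
        offsets[disk] += block_size
        disk = (disk + 1) % len(disks)
    return bytes(out)
```

Because consecutive blocks sit on different disks, reads and writes of a large file can proceed on all drives at once, which is where the performance gain comes from; it is also visible here that losing any one buffer destroys the whole stream.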

Advantages

RAID 0 offers great performance in both read and write operations. There is no
overhead caused by parity calculations.

All storage capacity can be used; there is no disk overhead.

The technology is easy to implement.

Disadvantages

RAID 0 is not fault-tolerant. If one disk fails, all data in the RAID 0 array are lost.
It should not be used on mission-critical systems.

Ideal use
RAID 0 is ideal for non-critical storage of data that have to be read/written at
high speed, e.g. on a Photoshop image-retouching station.

RAID 1: mirroring

Data are stored twice by writing them both to the data disk (or set of data disks)
and to a mirror disk (or set of disks). If a disk fails, the controller uses either the
data drive or the mirror drive for data recovery and continues operation. You
need at least 2 disks for a RAID 1 array.
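The mirroring behaviour can be sketched as follows; the two in-memory "disks" (plain dicts mapping block numbers to data) and the failure flags are of course only stand-ins for real hardware:

```python
# A toy sketch of RAID 1 mirroring with read failover.
class Mirror:
    def __init__(self):
        self.disks = [dict(), dict()]   # primary disk and mirror disk
        self.failed = [False, False]

    def write(self, block: int, data: bytes):
        # Every write goes to both disks simultaneously.
        for d, disk in enumerate(self.disks):
            if not self.failed[d]:
                disk[block] = data

    def read(self, block: int) -> bytes:
        # Read from the first healthy disk; the mirror covers a failure.
        for d, disk in enumerate(self.disks):
            if not self.failed[d]:
                return disk[block]
        raise IOError("both disks failed")
```

The read path shows why no rebuild is needed after a failure: the surviving disk already holds a complete, up-to-date copy.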

RAID 1 systems are often combined with RAID 0 to improve performance. Such
a system is sometimes referred to by the combined number: a RAID 10 system.

Advantages

RAID 1 offers excellent read speed and a write-speed that is comparable to that
of a single disk.
In case a disk fails, data do not have to be rebuilt; they just have to be copied to
the replacement disk.

RAID 1 is a very simple technology.

Disadvantages

The main disadvantage is that the effective storage capacity is only half of the
total disk capacity because all data get written twice.

Software RAID 1 solutions do not always allow a hot swap of a failed disk
(meaning it cannot be replaced while the server keeps running). Ideally a
hardware controller is used.

Ideal use

RAID-1 is ideal for mission critical storage, for instance for accounting systems. It
is also suitable for small servers in which only two disks will be used.

RAID 3

On RAID 3 systems, data blocks are subdivided (striped) and written in parallel on
two or more drives. An additional drive stores parity information. You need at
least 3 disks for a RAID 3 array.

Since parity is used, a RAID 3 stripe set can withstand a single disk failure
without losing data or access to data.

Advantages
RAID-3 provides high throughput (both read and write) for large data transfers.

Disk failures do not significantly slow down throughput.

Disadvantages

This technology is fairly complex and too resource-intensive to be done in
software.

Performance is slower for random, small I/O operations.

Ideal use

RAID 3 is not that common in prepress.

RAID Level 4

RAID Level 4 stripes data at a block level across several drives, with parity
stored on one drive. The parity information allows recovery from the failure
of any single drive. The performance of a level 4 array is very good for
reads (the same as level 0). Writes, however, require that parity data be
updated each time. This slows small random writes, in particular, though
large writes or sequential writes are fairly fast. Because only one drive in
the array stores redundant data, the cost per megabyte of a level 4 array
can be fairly low.

RAID 5

RAID 5 is the most common secure RAID level. It is similar to RAID-3 except that
data are transferred to disks by independent read and write operations (not in
parallel). The data chunks that are written are also larger. Instead of a dedicated
parity disk, parity information is spread across all the drives. You need at least 3
disks for a RAID 5 array.
A RAID 5 array can withstand a single disk failure without losing data or access
to data. Although RAID 5 can be achieved in software, a hardware controller is
recommended. Often extra cache memory is used on these controllers to
improve the write performance.
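The parity that protects a RAID 5 (and RAID 3/4) stripe is a simple XOR across the data blocks of the stripe, which is why any single lost block can be rebuilt from the survivors. A minimal sketch, with arbitrary example block contents:

```python
# XOR parity as used conceptually in RAID 3/4/5: the parity block is the
# byte-wise XOR of the data blocks in a stripe, and XOR-ing the parity
# with the surviving blocks regenerates a single lost block.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Two data blocks of one stripe and their parity block.
d0 = b"\x10\x20\x30\x40"
d1 = b"\x0f\x0e\x0d\x0c"
parity = xor_blocks(d0, d1)

# The disk holding d1 fails: rebuild it from the other disk plus parity.
recovered = xor_blocks(d0, parity)
assert recovered == d1
```

This also explains the write penalty mentioned above: every write must update the parity block as well as the data block.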

Advantages

Read data transactions are very fast, while write data transactions are somewhat
slower (due to the parity that has to be calculated).

Disadvantages

Disk failures have an effect on throughput, although this is still acceptable.

Like RAID 3, this is complex technology.

Ideal use

RAID 5 is a good all-round system that combines efficient storage with excellent
security and decent performance. It is ideal for file and application servers.

RAID 10: a mix of RAID 0 & RAID 1

RAID 10 combines the advantages (and disadvantages) of RAID 0 and RAID 1 in
a single system. It provides security by mirroring all data on a secondary set of
disks (disks 3 and 4 in the drawing below) while using striping across each set of
disks to speed up data transfers.

RAID 0/5 or 50

RAID 0/5 is a dual-level array that stripes data across multiple RAID 5 sets. In a
RAID 0/5 array, a single hard drive failure can occur in each of the RAID 5 sets
without any loss of data on the entire array. Keep in mind that as the number of
hard drives in an array increases, so does the possibility of a single drive failure.
Although RAID 0/5 offers increased write performance, once a hard drive fails
and reconstruction takes place there is a noticeable decrease in performance:
data and program access will be slower, and transfer speeds on the array will be
affected.

RAID Level Uses

Level 0 (striping)
Any application which requires very high speed storage, but does not
need redundancy. Photoshop temporary files are a good example.

Level 1 (mirroring)
Applications which require redundancy with fast random writes; entry-level
systems where only two drives are available. Small file servers are an
example.

Level 0/1 or 10 (mirroring and striping)
Dual-level RAID; combines multiple mirrored drives (RAID 1) with data
striping (RAID 0) into a single array. Provides the highest performance with
data protection.

Level 5 (distributed parity)
Similar to level 4, but may provide higher performance if most I/O is
random and in small chunks. Database servers are an example.

Level 0/5 or 50
Dual-level RAID; combines multiple RAID 5 sets with data striping (RAID 0).
Offers increased reliability and performance over standard RAID 5 and can
withstand multiple drive failures: one hard drive per RAID 5 set.
----------------------------------------------------------------------------------------------
1 Introduction

In recent years several technical developments have converged to create a
greater need than ever for extremely fast data links. High performance computers have
become the focus of much attention in the data communications industry.
Performance improvements have spawned increasingly data-intensive and high-
speed networking applications, such as multimedia and scientific visualization.
However, the existing network interconnects between computers and I/O devices
are unable to run at the speeds needed.

The intention of Fibre Channel (FC) is to provide a practical, inexpensive, yet
expandable means of quickly transferring data between workstations,
mainframes, supercomputers, desktop computers, storage devices, displays and
other peripherals. Fibre Channel is the general name of an integrated set of
standards [1] being developed by the American National Standards Institute
(ANSI).

There are two basic types of data communication between processors and
between processors and peripherals: channels and networks. A channel
provides a direct or switched point-to-point connection between the
communicating devices. A channel is typically hardware-intensive and transports
data at high speed with low overhead. In contrast, a network is an
aggregation of distributed nodes (such as workstations, file servers or peripherals)
with its own protocol that supports interaction among these nodes. A network has
relatively high overhead since it is software-intensive, and is consequently slower
than a channel. Networks can handle a more extensive range of tasks than
channels, as they operate in an environment of unanticipated connections, while
channels operate among only a few devices with predefined addresses. Fibre
Channel attempts to combine the best of these two methods of communication
into a new I/O interface that meets the needs of both channel users and network
users.

Although it is called Fibre Channel, its architecture represents neither a
channel nor a real network topology. It allows for an active intelligent
interconnection scheme, called a Fabric, to connect devices. All a Fibre Channel
port has to do is manage a simple point-to-point connection between itself and
the Fabric.

Fibre Channel is a high performance serial link supporting its own protocols as
well as higher level protocols such as FDDI, SCSI, HIPPI and IPI (see chapter 7).
The Fibre Channel standard addresses the need for very fast transfers of large
amounts of information. The fast (up to 1 Gbit/s) technology can also serve as a
Local Area Network technology by adding a switch, specified in the Fibre Channel
standard, that handles multipoint addressing. Fibre Channel thus has prospects
both as an I/O technology and as a Local Area Network technology. Another
advantage of Fibre Channel is that it gives users one port that supports both
channel and network interfaces, relieving computers of a large number of I/O
ports. FC provides control and complete error checking over the link [2] [3].

2 Fibre Channel topology

In Fibre Channel terms the switch connecting the devices is called the Fabric. A
link consists of two unidirectional fibres transmitting in opposite directions, with
their associated transmitter and receiver. Each fibre is attached to the transmitter
of a port at one end and the receiver of another port at the other end. When a
Fabric is present in the configuration, the fibre may attach to a node port (N_Port)
and to a port of the Fabric (F_Port).

Since a Fibre Channel system relies on ports logging in with each other and with
the Fabric, it is irrelevant whether the Fabric is a circuit switch, an active hub or a
loop. The topology can be selected depending on system performance
requirements or packaging options. Possible FC topologies include point-to-point,
crosspoint switched and arbitrated loop (Figure 1).

Figure 1 Fibre Channel topologies

FC operates at a wide variety of speeds (133 Mbit/s, 266 Mbit/s, 530 Mbit/s, and
1 Gbit/s) and on three types of both electrical and optical media. Transmission
distances vary depending on the combination of speed and media. The single
mode fibre optic media using longwave laser light source gives the highest
performance (10 km maximum distance at 1 Gbit/s) [2].
3 FC-0 layer

FC is structured as a set of hierarchical functions (Figure 2).


The lowest level (FC-0) defines the physical link in the system, including the
fibre, connectors, optical and electrical parameters for a variety of data rates.
Figure 3 shows the schematic of the Fibre Channel optical link [2].

Figure 2 Fibre Channel structure

The system bit error rate (BER) at the supported media and speeds is less than
10^-12 [1]. The physical level is designed to use a large number of
technologies to meet the widest range of system requirements. An end-to-end
communication route may consist of different link technologies to achieve the
best performance and price efficiency.

3.1 Open Fibre Control

The FC-0 level specifies a safety system - the Open Fibre Control system (OFC) - for
shortwave laser data links, since the optical power levels exceed the limits defined by
the laser safety standards. If an open fibre condition occurs in the link, the
receiver of the port to which the fibre is connected detects it, and that port pulses
its laser at a low duty cycle that meets the safety requirements. The receiver of
the other port (at the other end of the fibre) detects this pulsing signal and also
pulses its transmitter at a low duty cycle. When the open fibre path is restored,
both ports receive the pulsing signals, and after a double handshaking procedure
the connection is automatically restored within a few seconds [1].
Figure 3 FC optical link

4 FC-1 layer

FC-1 defines the transmission protocol, including serial encoding and decoding
rules, special characters and error control. The information transmitted over a
fibre is encoded 8 bits at a time into a 10-bit Transmission Character. The primary
rationale for the use of a transmission code is to improve the transmission
characteristics of information across a fibre. The transmission code must be DC-
balanced to support the electrical requirements of the receiving units. The
Transmission Characters ensure that short run lengths and enough transitions
are present in the serial bit stream to make clock recovery possible [1] [2].

4.1 FC-1 character conversion

An unencoded information byte is composed of eight information bits
A,B,C,D,E,F,G,H and the control variable Z. This information is encoded by FC-1
into the bits a,b,c,d,e,i,f,g,h,j of a 10-bit Transmission Character. The control
variable has either the value D (D-type) for data characters or the value K (K-
type) for special characters. Each valid Transmission Character has been given a
name using the following convention: Zxx.y, where Z is the control variable of the
unencoded FC-1 information byte, xx is the decimal value of the binary number
composed of bits E, D, C, B and A, and y is the decimal value of the binary
number composed of bits H, G and F of the unencoded FC-1 information byte, in
that order. For example, the name of the FC-1 Transmission Character derived from
the hexadecimal "BC" special (K-type) code is K28.5.
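The naming convention can be checked with a few lines of Python: xx is simply the low five bits (E,D,C,B,A) of the unencoded byte and y the high three bits (H,G,F). This sketch only derives the name; it does not perform the actual 8b/10b encoding:

```python
# Derive the Zxx.y name of an FC-1 Transmission Character from the
# unencoded byte value. Bit order HGFEDCBA: xx = bits EDCBA, y = bits HGF.

def transmission_char_name(byte: int, k_type: bool = False) -> str:
    xx = byte & 0x1F          # low five bits E,D,C,B,A
    y = (byte >> 5) & 0x07    # high three bits H,G,F
    return f"{'K' if k_type else 'D'}{xx}.{y}"

# The special code 0xBC from the text: 0xBC = 1011 1100, so EDCBA = 11100
# = 28 and HGF = 101 = 5, giving the well-known comma character K28.5.
print(transmission_char_name(0xBC, k_type=True))  # K28.5
```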

The information received is recovered 10 bits at a time, and those Transmission
Characters used for data (D-type) are decoded into one of the 256 8-bit
combinations. Some of the remaining Transmission Characters (K-type), referred
to as special characters, are used for protocol management functions. Codes
detected at the receiver that are neither D- nor K-type are signaled as code
violation errors [1].

4.2 Coding rules

Each data byte or special character has two (not necessarily different)
transmission codes. The data bytes and special characters are encoded into
these codes respectively, depending on the initial Running Disparity (RD). The
RD is a binary parameter, which is calculated upon the balance of ones and
zeros in the sub-blocks (the first six bits and the last four bits) of a transmission
character. A new RD is calculated from the transmitted character at both the
transmitter and the receiver. If the detected character has the opposite RD to the
one the transmitter should have sent (depending on the RD of the previous bit
stream), the receiver indicates a disparity violation condition. A Transmission
Word is composed of four contiguous transmission characters [1].

5 FC-2 Layer

The Signaling Protocol (FC-2) level serves as the transport mechanism of Fibre
Channel. The framing rules of the data to be transferred between ports, the
different mechanisms for controlling the three service classes (see chapter 5.7)
and the means of managing the sequence of a data transfer are defined by FC-2.
To aid in the transport of data across the link, the following building blocks are
defined by the standard [1] :

 Ordered Set
 Frame
 Sequence
 Exchange
 Protocol

5.1 Ordered Set

The Ordered Sets are four-byte transmission words containing data and special
characters which have a special meaning. Ordered Sets provide the ability
to obtain bit and word synchronization, which also establishes word boundary
alignment. An Ordered Set always begins with the special character K28.5. Three
major types of Ordered Sets are defined by the signaling protocol.

The Frame delimiters (the Start-of-Frame (SOF) and End-of-Frame (EOF)
Ordered Sets) are Ordered Sets which immediately precede or follow the
contents of a Frame. There are multiple SOF and EOF delimiters defined for
Fabric and N_Port Sequence control.

The two Primitive Signals: Idle and Receiver Ready (R_RDY) are Ordered Sets
designated by the standard to have a special meaning. An Idle is a Primitive
Signal transmitted on the link to indicate an operational Port facility ready for
Frame transmission and reception. The R_RDY Primitive Signal indicates that
the interface buffer is available for receiving further Frames.
A Primitive Sequence is an Ordered Set that is transmitted and repeated
continuously to indicate specific conditions within a Port or conditions
encountered by the receiver logic of a Port. When a Primitive Sequence is
received and recognized, a corresponding Primitive Sequence or Idle is
transmitted in response. Recognition of a Primitive Sequence requires
consecutive detection of 3 instances of the same Ordered Set. The Primitive
Sequences supported by the standard are Offline (OLS), Not Operational (NOS),
Link Reset (LR) and Link Reset Response (LRR) [1] [2].

5.2 Frame

The basic building blocks of an FC connection are the Frames. The Frames
contain the information to be transmitted (Payload), the address of the source
and destination ports and link control information. Frames are broadly
categorized as Data frames and Link_Control frames. Data frames may be used
as Link_Data frames and Device_Data frames; link control frames are classified
as Acknowledge (ACK) and Link_Response (Busy and Reject) frames. The
primary function of the Fabric is to receive Frames from the source port and
route them to the destination port. It is the FC-2 layer's responsibility to break the
data to be transmitted into Frame-sized pieces and to reassemble the Frames.

Each Frame begins and ends with a Frame Delimiter (Figure 4). The Frame
Header immediately follows the SOF delimiter. The Frame Header is used to
control link applications, control device protocol transfers, and detect missing or
out-of-order Frames. An optional header may contain further link control
information. A field of at most 2112 bytes (the payload) contains the information
to be transferred from a source N_Port to a destination N_Port. The 4-byte
Cyclic Redundancy Check (CRC) precedes the EOF delimiter. The CRC is used
to detect transmission errors [1] [2].

Figure 4 Frame Structure [2]
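As a rough sketch of this framing, the fragment below brackets a header, a payload of at most 2112 bytes, and a CRC between two delimiters. The delimiter byte strings and the header contents here are placeholders, not the real FC encodings; `zlib.crc32` uses the same 32-bit generator polynomial as the standard's CRC, though the exact FC computation may differ in detail:

```python
# Illustrative FC-2-style frame assembly: SOF, header, payload (<= 2112
# bytes), 4-byte CRC, EOF. Delimiter values and header layout are
# placeholders for the real Ordered Set encodings.
import zlib

MAX_PAYLOAD = 2112
SOF, EOF = b"<SOF>", b"<EOF>"   # placeholder frame delimiters

def build_frame(header: bytes, payload: bytes) -> bytes:
    if len(payload) > MAX_PAYLOAD:
        raise ValueError("payload exceeds the 2112-byte maximum")
    # The 4-byte CRC covers the header and payload and precedes the EOF.
    crc = zlib.crc32(header + payload).to_bytes(4, "big")
    return SOF + header + payload + crc + EOF
```

Breaking a larger transfer into such frames, and reassembling them, is exactly the FC-2 responsibility described above.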


5.3 Sequence

A Sequence is formed by a set of one or more related Frames transmitted
unidirectionally from one N_Port to another. Each Frame within a Sequence is
uniquely numbered with a Sequence Count. Error recovery, controlled by an
upper protocol layer, is usually performed at Sequence boundaries [2].

5.4 Exchange

An Exchange is composed of one or more nonconcurrent Sequences for a single
operation. Exchanges may be unidirectional or bidirectional between two
N_Ports. Within a single Exchange only one Sequence may be active at any one
time, but Sequences of different Exchanges may be concurrently active.

5.5 Protocol

The Protocols are related to the services offered by Fibre Channel. Protocols
may be specific to higher-layer services, although Fibre Channel provides its own
set of protocols to manage its operating environment for data transfer. The
following Protocols are specified by the standard [1]:

 Primitive Sequence Protocols are based on Primitive Sequences (see
chapter 5.1) and are specified for link failure conditions.
 Fabric Login protocol: The interchanging of Service Parameters of an
N_Port with the fabric.
 N_Port Login protocol: Before performing data transfer, the N_Port
interchanges its Service Parameters with another N_Port.
 Data transfer protocol describes the methods of transferring Upper Layer
Protocol (ULP) data using the Flow control management of Fibre Channel
(see chapter 5.6).
 N_Port Logout Protocol is performed when an N_Port requests removal of
its Service Parameters from the other N_Port. This may be used to free up
resources at the connected N_Port.

5.6 Flow control

Flow control is the FC-2 control process to pace the flow of Frames between
N_Ports and between an N_Port and the Fabric to prevent overrun at the
receiver. Flow control is dependent upon the service classes (see chapter 5.7).
Class 1 Frames use end-to-end flow control, class 3 uses only buffer-to-buffer,
class 2 Frames use both types of flow control.

Flow control is managed by the Sequence Initiator (source) and Sequence
Recipient (destination) Ports using Credit and Credit_CNT. Credit is the number
of buffers allocated to a transmitting Port; Credit_CNT represents the number of
data frames which have not yet been acknowledged by the Sequence Recipient.

The end-to-end flow control process paces the flow of Frames between N_Ports.
In this case the Sequence Recipient is responsible for acknowledging received
valid data Frames with ACK Frames. When the number of receive buffers is
insufficient for an incoming Frame, a "Busy" Frame is sent to the Initiator Port;
when a Frame with an error is received, a "Reject" Frame is sent (see chapter
5.2). The Sequence Initiator is responsible for managing EE_Credit_CNT. The
N_Port Login (see chapter 5.5) is used to establish EE_Credit.

Buffer-to-buffer flow control is managed between an N_Port and an F_Port, or
between N_Ports in a point-to-point topology. Each port is responsible for
managing BB_Credit_CNT; BB_Credit is established during the Fabric Login (see
chapter 5.5). The Sequence Recipient (destination) Port signals the transmitting
Port, by sending a Receiver_Ready (R_RDY) primitive signal, whenever it has a
free receive buffer for an incoming Frame.
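The credit mechanism can be sketched as a simple counter: the transmitter may send only while its count of unacknowledged frames is below the credit granted at login, and each R_RDY from the receiver frees one credit. The names follow the text; the logic is a simplification of the standard's behaviour:

```python
# Sketch of buffer-to-buffer credit pacing. BB_Credit is the number of
# receive buffers granted at Fabric Login; BB_Credit_CNT counts frames
# sent but not yet answered with an R_RDY primitive signal.
class BBFlowControl:
    def __init__(self, bb_credit: int):
        self.bb_credit = bb_credit
        self.bb_credit_cnt = 0

    def can_send(self) -> bool:
        return self.bb_credit_cnt < self.bb_credit

    def send_frame(self):
        if not self.can_send():
            raise RuntimeError("no credit: sending would overrun the receiver")
        self.bb_credit_cnt += 1

    def receive_r_rdy(self):
        # The receiver has freed a buffer; one credit becomes available again.
        self.bb_credit_cnt = max(0, self.bb_credit_cnt - 1)
```

Because the transmitter stalls the moment its credit is exhausted, the receiver's buffers can never be overrun, which is precisely the purpose of flow control stated above.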

Figures 5-7 show the flow control management of the different service classes
(see chapter 5.7) [1].

5.7 Service Classes

To ensure efficient transmission of different types of traffic, FC defines three
classes of service. Users select service classes based on the characteristics of
their applications, such as packet length and transmission duration, and allocate
the services through the Fabric Login protocol.

Class 1 is a service which provides dedicated connections, in effect providing the
equivalent of a dedicated physical connection. Once established, a Class 1
connection is retained and guaranteed by the Fabric. This service guarantees the
maximum bandwidth between two N_Ports, so it is best for sustained, high-
throughput transactions. In Class 1, Frames are delivered to the destination Port
in the same order as they are transmitted. Figure 5 shows the flow control
management of a Class 1 connection.
Figure 5 Class 1 Flow Control

Class 2 is a Frame-switched, connectionless service that allows bandwidth to be
shared by multiplexing Frames from multiple sources onto the same channel or
channels. The Fabric may not guarantee the order of delivery, and Frames may
be delivered out of order. This service class can be used when the connection
setup time is greater than the latency of a short message. Both Class 1 and
Class 2 send acknowledgment Frames confirming Frame delivery. If delivery
cannot be made due to congestion, a Busy Frame (see chapter 5.2) is returned
and the sender tries again (Figure 6).
Figure 6 Class 2 Flow Control

Class 3 service is identical to Class 2, except that Frame delivery is not
confirmed. (Flow control is managed only at the buffer level; see Figure 7.) This
type of transfer, known as a datagram, provides the quickest transmission by not
sending confirmations. This service is useful for real-time broadcasts, where
timeliness is key and information not received in time is worthless.

The FC standard also defines an optional service mode called Intermix. Intermix
is an option of Class 1 service in which Class 1 Frames are guaranteed a
specified amount of bandwidth, while Class 2 and Class 3 Frames are multiplexed
onto the channel only when sufficient bandwidth is available to share the link [2]
[3].
Figure 7 Class 3 Flow Control

6 FC-3 Layer

The FC-3 level of the FC standard is intended to provide the common services
required for advanced features such as:

 Striping - To multiply bandwidth by using multiple N_Ports in parallel to
transmit a single information unit across multiple links.
 Hunt groups - The ability for more than one Port to respond to the same
alias address. This improves efficiency by decreasing the chance of
reaching a busy N_Port.
 Multicast - Multicast delivers a single transmission to multiple destination
ports. This includes sending to all N_Ports on a Fabric (broadcast) or to
only a subset of the N_Ports on a Fabric. [1]

7 FC-4 Layer

FC-4, the highest level in the FC structure, defines the application interfaces that
can execute over Fibre Channel. It specifies the mapping rules for upper layer
protocols using the FC levels below. Fibre Channel is equally adept at
transporting both network and channel information and allows both protocol types
to be concurrently transported over the same physical interface.

The following network and channel protocols are currently specified or proposed
as FC-4s [2]:
 Small Computer System Interface (SCSI)
 Intelligent Peripheral Interface (IPI)
 High Performance Parallel Interface (HIPPI) Framing Protocol
 Internet Protocol (IP)
 ATM Adaptation Layer for computer data (AAL5)
 Link Encapsulation (FC-LE)
 Single Byte Command Code Set Mapping (SBCCS)
 IEEE 802.2

Tape Details

DLT, LTO, SDLT

DLT - Digital Linear Tape

DLT 3500 Capacity 10/20 GB
DLT 7000 Capacity 20/40 GB
DLT 8000 Capacity 40/80 GB

SDLT - Super DLT

LTO - Linear Tape Open

LTO 1 Capacity 100/200 GB Data backup speed 15/30 MB per second
LTO 2 Capacity 200/400 GB Data backup speed 30/60 MB per second
LTO 3 Capacity 400/800 GB Data backup speed 60/120 MB per second

What is Fibre Channel?

Fibre Channel is a high performance interface designed to bring speed and flexibility to
multiple disc drive storage systems.

Merriam-Webster's online dictionary defines interface as "the place at which independent and
often unrelated systems meet and act on or communicate with each other." In the world of
computers, the word interface simply refers to the set of design specifications that govern how
one piece of hardware communicates with another. The Fibre Channel interface was
specifically designed to speed communications in multiple-drive systems.
What are the key features of Fibre Channel?

Fibre Channel key features include...

 Hot-pluggability – Fibre Channel drives can be installed or removed while the
host system is operational, which is crucial in high-end and heavy-use server
systems where there is little or no downtime.
 ANSI standard compliance for serial port interface – Fibre Channel does not
require special adapters, which can be expensive.
 Speed – In its intended environment, Fibre Channel is the fastest option available.
 Cost effectiveness – Relative to other high-end solutions, Fibre Channel is
inexpensive because it does not require special adapters.
 Loop resiliency – Fibre channel provides high data integrity in multiple-drive
systems, including Fibre Channel RAID.
 Longer cable lengths – Relative to LVD, Fibre Channel can maintain data
integrity through significantly longer cables. This makes configuring multiple
devices easier.

What is the intended environment for Fibre Channel drives?

Fibre Channel drives were designed for use in multiple-drive system environments like
servers. A Fibre Channel configuration consists of a backplane, which is an external enclosure
that houses a printed circuit board (PCB) and multiple drive receptacles, and a Fibre Channel
host bus adapter (HBA). The backplane allows direct connection to the drives (no cable),
supplies power to the drives, and controls the input and output of data on all drives within the
system. Because so much of Fibre Channel's benefits are derived from its method of data
handling among multiple drives, single drive environments will realize no significant
performance enhancement by using Fibre Channel over LVD.

How does LVD compare to Fibre Channel?

The key comparison factor is the number of drives. If the system has fewer than 5 drives, it is
important to note that Fibre Channel will offer no performance benefit over LVD. For systems
with more than 5 drives, however, Fibre Channel's advantage grows as more drives are added:
a single loop with more than 20 Fibre Channel drives can transfer data at up to
100 Mbytes/second, while LVD can only transfer data at up to about 80 Mbytes/sec.
It is recommended that you plan what storage capacity and performance requirements are
needed before choosing a disc drive interface. The table below represents the advantages of
Fibre Channel over LVD:

                      SCSI Wide (LVD)                 Fibre Channel

Higher bandwidth      40-80 Mbytes/sec                100 Mbytes/sec
More connectivity     15 devices                      126 devices
Simplified            Ribbon cable, jumpers,          SCA backplane: no jumpers,
attachment            power switches                  or power connections
Increased distance    1.5 meters total length         30 meters device to device
                      (single-ended), 12 meters       (copper), 10 kilometers
                      total length (LVD)              device to device (optical)
Improved redundancy   Parity and running disparity    CRC protected frames
Simplified interface  Wide, Narrow, SCA, Fast,        1 version
                      Ultra, Single-Ended, LVD, HVD
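The bandwidth figures above translate directly into backup-window arithmetic. A minimal sketch, using the peak interface rates from the comparison (the 500 GB data set is a hypothetical example, and real sustained throughput will be lower than these peaks):

```python
# Peak transfer rates from the comparison above, in Mbytes/sec.
RATES_MB_PER_SEC = {
    "SCSI Wide (LVD)": 80,   # upper end of the 40-80 Mbytes/sec range
    "Fibre Channel": 100,    # single-loop peak
}

def backup_hours(data_gb, rate_mb_per_sec):
    """Hours needed to move data_gb gigabytes at a sustained rate."""
    seconds = data_gb * 1024 / rate_mb_per_sec
    return seconds / 3600

data_gb = 500  # hypothetical data set
for name, rate in RATES_MB_PER_SEC.items():
    print(f"{name}: {backup_hours(data_gb, rate):.1f} hours")
```

At these peaks, 500 GB needs roughly 1.8 hours over LVD versus about 1.4 hours over a Fibre Channel loop; the gap widens as the drive count (and Fibre Channel's realized throughput) grows.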

What are dual loops?

Dual loops allow Fibre Channel drives to be connected to two separate server environments at
the same time. While only one loop can access the drive at any given moment, dual loops
allow both servers to share the drive without manually switching.

How do dual loops improve performance?

Most commonly, dual loops improve performance by sharing data. This means data used by
more than one system can be stored in a central location and accessed by both loop systems.
This eliminates the need to duplicate or synchronize data.

If duplicate or synchronous data is not the issue, Fibre Channel drives can connect to two
independent loops at the same time. Each loop can transfer data at up to 100 Mbytes/second.
While theoretically that means a dual loop system can transfer data at up to 200 Mbytes/sec,
in a traditional Fibre Channel system, the controller can access only one loop at a time, which
means the maximum data transfer rate is about 100 Mbytes/second. In a Fibre Channel RAID
system, however, the maximum transfer rate can increase significantly depending on the
number of drives: the more drives present, the faster the data transfer rate.

How many drives can be connected to a Fibre Channel single loop?

Fibre Channel drives use an SCA connection that combines the data signals and power supply
lines into a single connection. This makes it possible for a maximum of 126 Fibre Channel
drives to connect to a single loop at the same time.

How many drives can be connected to a Fibre Channel dual loop system?

In a dual loop system it is possible to have 126 drives connected to each loop, for a maximum
of 252 drives. It is important to note that any shared drives will detract from the total number.
There can only be 252 total if there are 0 shared drives.
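The counting rule above is simple arithmetic: a shared drive occupies an address on both loops, so each one detracts from the 252-drive maximum. A quick sketch:

```python
def dual_loop_capacity(shared_drives, per_loop_max=126):
    """Total distinct drives addressable in a dual loop system.

    Each loop holds up to per_loop_max drives; a drive shared between
    the loops consumes an address on both, reducing the total.
    """
    if not 0 <= shared_drives <= per_loop_max:
        raise ValueError("shared drives must fit on a single loop")
    return 2 * per_loop_max - shared_drives

print(dual_loop_capacity(0))   # maximum: 252 drives
print(dual_loop_capacity(10))  # 10 shared drives leave 242 distinct drives
```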

What is the maximum cable length recommended between Fibre Channel devices?

The maximum cable length recommended between Fibre Channel devices is 30 meters using
copper and 10 kilometers (6 miles) using fiber optics. This is significantly longer than LVD's
maximum cable length of 12 meters. It is important to note that exceeding recommended
maximum cable length can significantly impact data integrity.
Quantum

Specifications

Native* Performance and Capacity by Drive Type


Drive Type    Maximum Performance    Maximum Capacity
SDLT 600      259GB/hr               6.3TB
SDLT 320      115GB/hr               3.4TB
LTO-2         216GB/hr               5.0TB
LTO-1         108GB/hr               2.5TB
* All performance and capacity characteristics are native (no compression defined)
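These figures lend themselves to a quick cross-check: dividing each drive type's maximum native capacity by its maximum native performance gives the minimum time to stream the full library. A sketch (numbers taken from the table; 1 TB treated as 1024 GB):

```python
# Native figures from the table above: (GB/hr, capacity in TB).
DRIVES = {
    "SDLT 600": (259, 6.3),
    "SDLT 320": (115, 3.4),
    "LTO-2":    (216, 5.0),
    "LTO-1":    (108, 2.5),
}

def full_library_hours(rate_gb_per_hr, capacity_tb):
    """Minimum hours to read or write the full native capacity."""
    return capacity_tb * 1024 / rate_gb_per_hr

for name, (rate, cap) in DRIVES.items():
    print(f"{name}: {full_library_hours(rate, cap):.1f} hours")
```

Even at full native speed, streaming 6.3 TB through SDLT 600 drives takes roughly a day, which is why drive count and library scalability matter for large backup windows.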

Magazines, Slots and Cartridges


Drive Type    Max. # of Drives    Max. # of Magazines    Slots per Magazine    Max. # of Cartridges
DLTtape       2                   2                      10                    21
LTO           2                   2                      12                    25

Robotics Reliability
MTBF 250,000 hours (on-hours)
MSBF One million load/unload swaps
MTTR Less than 20 minutes

Host Interfaces
SDLT 320 Ultra2 SCSI LVD — Optional Native Fibre Library
LTO Ultrium 1/2 Ultra2 SCSI LVD — Optional Native Fibre Library
SDLT 600 SCSI Ultra-3 — Optional Native Fibre Library

Management Requirements
MC300 Optional Prism Management — ALERT notification, remote management, SNMP traps
Port 9-pin RS-232C, EIA/TIA female connector

Power Requirements
Input Voltage 88-264 VAC
Frequency 47-63 Hz
Consumption 200W average
Heat Dissipation 682 BTU/hr
Power Cord (US included) US NEMA 5-15P Male

Environmental Limits, Operating


Humidity 20% to 80%, non-condensing
Temperature 50°F to 104°F (10°C to 40°C)
Altitude Sea level to 8,000 ft. (0 to 2,438 meters)

Environmental Limits, Non-Operating (storage and shipping)


Humidity 10% to 95%, non-condensing
Temperature -40°F to 140°F (-40°C to 60°C)
Altitude Sea level to 36,000 ft. (0 to 11,000 meters)

Cabinet Characteristics
Rack Space Required 4U
Height 6.9" (175 mm)
Width 19" (487 mm)
Depth 28.6" (726 mm)
Weight 150lbs (fully populated)

Agency Approvals
Safety UL1950 Listed, CSA950, EN 60950
EMI/RFI FCC CFR 47-15J (Level A), EN55022 (CISPR 22) Level A, EN55024 (CISPR 24), VCCI
Agency Markings CE, VCCI, UL, FCC, CSA

Scalability
Quantum's StackLink™ mechanism is available in four sizes: 2U, 20U, 28U, and 40U. Please refer to the
intermix table below:

Model            Configuration
2U StackLink     2 M1500s
20U StackLink    5 M1500s; 1 M2500 and 1 M1500
28U StackLink    7 M1500s; 2 M2500s; 1 M2500 and 3 M1500s
40U StackLink    10 M1500s; 3 M2500s; 1 M2500 and 6 M1500s; 2 M2500s and 3 M1500s

M1500 specifications

Overview

The Quantum M1500 provides modular scalability for companies that need a backup solution that
can grow and change with their data requirements. Developed with flexibility in mind, the M1500
can be configured to fit the performance and capacity requirements of the most aggressive users.
The scalability of the M-Series (including the M1500 and M2500) is unrivaled because a library does
not have to be taken off-line, shipped to the manufacturer, or rebuilt when new modules are added.
The M-Series libraries also boast the greatest density per rack space in this class of tape library and
leave valuable rack space for other IT infrastructure.

Features

 Native capacity for SDLT 600 of 6.3TB at 36MB/s native transfer rate
 Base-library configurable from zero to two drives and up to 25 cartridge slots
 Scalable via StackLink up to ten modules in a rack, providing up to 20 drives and 250
cartridge slots, seen by the backup software package as a single library
 Supports SDLT 320, SDLT 600, LTO-1™, and LTO-2™ tape drives
 Barcode reader standard
 Data cartridges in removable magazines providing 100% bulk loading and unloading
 Intuitive GUI control panel
 SCSI standard with a native fibre library option
 Optional remote monitoring, management and SNMP agent (MC300 option)
 Optional redundant power supplies
 Intermixable scalability with the M2500
 Add, remove and replace modules in a StackLinked configuration while modules stay on-
line (pass-thru independent of individual modules)
 Driveless configuration available for "capacity-only" scalability
 Available in stand-alone (non-rackmountable, desktop) configuration

Quantum ValueLoader Tape Autoloader

Overview

In comparison to stand-alone tape drives, the Quantum ValueLoader™ offers increased capacity
and random access to several cartridges without human intervention (human error being the
number one cause of data loss). The ValueLoader is an unparalleled value for DLTtape™ or
Ultrium tape automation and provides all of these benefits at a price comparable to some
stand-alone tape drives. It is now available with a choice of DLT VS80, DLT VS160, LTO-1,
LTO-2, SDLT 320 or SDLT 600 tape technology.

The ValueLoader is easy-to-use and displays the following features:

 Simple User Interface — The ValueLoader uses an LCD display with a natural-language,
menu-driven interface, so operating the loader is as straightforward as operating any
accessorized drive.
 Simple Cartridge Management — The ValueLoader features automated cartridge
management for import and export of data cartridges. Users are presented with
cartridges for export and prompted for import.
 Optimal Backup Configuration — The ValueLoader has only one drive and eight cartridge
slots. The eight cartridges orbit around the center-mounted tape drive and insert into the
drive individually. There is a slot for each day of the week and an additional slot that could
be used for either a Cleaning Tape or additional media cartridge.

Features and Benefits

 Up to 4.8TB compressed capacity


 Up to 70MB per second compressed performance
 Supported drive types: DLT VS80, DLT VS160, LTO-1, LTO-2, SDLT 320, SDLT 600
 Reliable — Tape drive and automation are truly integrated to minimize the number of
parts and increase reliability
 Ease of use — Automated cartridge management simplifies user intervention for the
import and export of data cartridges
 Eight cartridges rotate in a carousel around the center-mounted tape drive and insert into
the drive individually
 Rack optimized for slim spaces — Provides best-of-breed data density when
implemented in rack-optimized installations
 Tabletop ready for desktop configurations
 Total Value Delivery — Reduces IT overhead at a price that is comparable to some stand-
alone tape drives

Quantum SuperLoader Tape Autoloader

Overview
This scalable tape autoloader provides unbeatable data density, capacity and performance in a
2U rackmount form factor. The Quantum SuperLoader™ provides the widest range of drive
choices available with both LTO™ and DLTtape™ technology. Choose LTO-1 or LTO-2
technology with the ability to upgrade or swap drives at the customer site. Or choose an SDLT
320 drive for performance, or a DLT1 drive for economy - either way you have an autoloader that
protects your investment in legacy DLTtape IV cartridges by being backward read compatible.
The unique 8+1 or 16+1 cartridge configuration scales to your needs, now and in the future, and
allows an extra cartridge to be added to a fully populated system. With a modular architecture
that ensures future viability, the SuperLoader brings you enterprise-class automation at a small
office price.

Benefits of the SuperLoader family

Enterprise class reliability at 1,000,000 MSBF


Mail slot allows the insertion or removal of any cartridge, without system interruption
Web-based management tools to control, configure, and diagnose from any location
Modular architecture allows future upgrades and economical servicing
Optional bar code reader for non-sequential tape management

Benefits of SuperLoader DLT


Native capacity for SuperLoader SDLT 320 of 2.56TB at 16MB/s transfer rate
Native capacity for SuperLoader DLT1 of 640GB at 3MB/s transfer rate
Choice of drives (SDLT 320 or DLT1) for performance or value with upgradeability
Cartridge capacity of 8+1 or 16+1 tapes; scalable capacity points from 640GB to 5.1TB
(compressed)

Benefits of SuperLoader LTO


Native capacity for SuperLoader LTO-2 of 3.2TB at 30MB/s transfer rate
Native capacity for SuperLoader LTO-1 of 1.6TB at 15MB/s transfer rate
Choice of drives (LTO-2 and LTO-1) for performance or value with upgradeability
Cartridge capacity of 8+1 or 16+1 tapes; scalable capacity points from 800GB to 6.4TB
(compressed)
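The capacity points quoted above are simply tape count times per-cartridge capacity. A sketch reproducing them (per-cartridge native and 2:1-compressed figures as commonly quoted for these drive generations):

```python
# Per-cartridge capacity in GB: (native, compressed at 2:1).
CARTRIDGE_GB = {
    "DLT1":     (40, 80),
    "SDLT 320": (160, 320),
    "LTO-1":    (100, 200),
    "LTO-2":    (200, 400),
}

def capacity_tb(drive, tapes, compressed=False):
    """Total autoloader capacity in TB for a given tape count (1 TB = 1000 GB here)."""
    native, comp = CARTRIDGE_GB[drive]
    return tapes * (comp if compressed else native) / 1000

# 16-tape configurations match the figures quoted above:
print(capacity_tb("SDLT 320", 16))                # 2.56 TB native
print(capacity_tb("LTO-2", 16))                   # 3.2 TB native
print(capacity_tb("LTO-2", 16, compressed=True))  # 6.4 TB compressed
```

The same arithmetic gives the low end of each range: an 8-tape DLT1 configuration yields 8 × 80 GB = 640 GB compressed.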

Tape Drives and media


Tape Drives
VS80 VS160 SDLT320 SDLT600
40/80GB 80/160GB 160/320GB 300/600GB

Tape Drives. Fortuna have been supplying and installing tape drives and tape libraries for
over 10 years. We work with the leading software and hardware vendors to provide the best
possible backup solution.

If you are interested in discussing your backup requirement in more detail please e-mail us
sales@peripheralstorage.com or telephone 01256 782030.

Manufacturers backup tape drives from - Certance - HP - Iomega - Quantum - Sony - Tandberg

Please browse our web site for the backup tape drive that best suits your requirements.
Choosing a suitable tape drive is not an easy process, as a multitude of options are
available.

Data Volume to back up - Depends on the amount of data that needs to be backed up now, and
should include at least 12 months of data growth.
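Sizing for 12 months of growth is a compound-growth estimate. A minimal sketch (the 3% monthly growth rate and 100 GB starting point are hypothetical inputs; substitute your own):

```python
def projected_gb(current_gb, monthly_growth_rate, months=12):
    """Project backup volume after compounding monthly growth."""
    return current_gb * (1 + monthly_growth_rate) ** months

# Hypothetical: 100 GB today, growing 3% per month.
print(round(projected_gb(100, 0.03)))  # ~143 GB after a year
```

A drive whose native capacity only just covers today's volume will fall short well within the year, so size against the projected figure.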

Tape drive performance - Depends on the time available to back up the data.

Tape drive capacity - A wide variety of tape capacities are available from 20GB through to
300GB+.

Tape Media - Tape media life varies between tape technologies. If all you need to do is
back up data on a daily basis and restore files once a month, then any tape technology can do
this. On the other hand, if you need to restore historical data from "x" years ago, then some
technologies are better than others at retaining important information.

Tape Rotation - When buying a tape drive you should also consider how many backup tapes you
will need. I would recommend a minimum of 10 tapes to carry out the following - full backups
every Friday for 2 weeks, daily differential or incremental backups Mon-Thu. Using this
scheme, any data loss that occurs in the last 2 weeks can be recovered.
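One reading of the 10-tape scheme above (a full backup on each of the two Fridays, incrementals Monday to Thursday) can be sketched as a schedule generator; the labels are purely illustrative:

```python
def rotation_schedule(weeks=2):
    """Generate a simple tape rotation: a full backup each Friday,
    incremental (or differential) backups Monday-Thursday.
    Over 2 weeks this uses 2 full + 8 daily tapes = 10 tapes."""
    schedule = []
    for week in range(1, weeks + 1):
        for day in ("Mon", "Tue", "Wed", "Thu"):
            schedule.append((f"Week {week} {day}", "incremental"))
        schedule.append((f"Week {week} Fri", "full"))
    return schedule

tapes = rotation_schedule()
print(len(tapes))  # 10 tapes for the two-week cycle
```

Restoring any day's state then means replaying the most recent full tape plus the incrementals taken since it.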

Tape drive interface - The most common type of tape drive interface is SCSI, which requires a
separately installed SCSI card in the machine that has the tape drive attached. Other interfaces
for the lower-end tape drives are USB, FireWire and ATAPI.

Backup Software - This depends on the operating systems being backed up, type of
information, and the network infrastructure involved.

Disaster Recovery - When buying a tape drive, an issue often overlooked is how quickly data can
be recovered to bring a server back up to a working state. People are happy to ensure the
information is being backed up, but never ask how long it will take to recover "x" amount of data
within a given time frame. We specialise in providing this sort of advice. If you need to recover
large amounts of data then a tape drive might not be the best solution. For help or assistance
please call us on 01256 782030 or e-mail sales@peripheralstorage.com.

Why backup using tape?

Tape drives are a valuable tool for backing up critical data and storing it offsite. Because the
media tapes are removable, any data that is backed up can be stored for safe keeping and
recovered in the event of data loss.

Companies experience data loss in a variety of ways:

Power outage        When power is restored, disk drives refuse to spin up or
                    disk cache information written is corrupted
Earthquake          Shock damage; disk drives fail to spin up
Fire                Smoke damage or meltdown
Flood               Water damages sensitive electronics
RAID failure        RAID fails and there is no hot online spare
Power surge         Blows circuits
Wear                Tape drives wear out and bearings fail
Software            Software fails to start or finish backup
Virus               Viruses re-format hard disks
Malicious damage    Ex-employee deletes data or someone hacks the computer
Theft               Computer systems are stolen
Accidental erasure  Employee accidentally deletes the wrong file or directory
Terrorist threat    Bombs blowing up buildings

Data loss can be very costly not only in dollars and downtime but also in
productivity.

 93% of companies that lost their data center for 10 days or more due to a disaster filed
for bankruptcy within one year of the disaster. 50% of businesses that found themselves
without data management for this same time period filed for bankruptcy immediately.
(Source: National Archives & Records Administration in Washington.)
 File corruption and data loss are becoming much more common, although loss of
productivity continues to be the major cost associated with a virus disaster. (Source: 7th
Annual ICSA Lab's Virus Prevalence Survey, March 2002.)
 The average company spends between $100,000 and $1,000,000 in total ramifications
per year for desktop-oriented disasters (both hard and soft costs.) (Source: 7th Annual
ICSA Lab's Virus Prevalence Survey, March 2002.)
 In addition to being more prevalent, computer viruses were more costly, more destructive,
and caused more real damage to data and systems than in the past. (Source: 7th Annual
ICSA Lab's Virus Prevalence Survey, March 2002.)
 Of those companies participating in the 2001 Cost of Downtime Survey: 46% said each
hour of downtime would cost their companies up to $50k, 28% said each hour would cost
between $51K and $250K, 18% said each hour would cost between $251K and $1
million, 8% said it would cost their companies more than $1million per hour. (Source:
2001 Cost of Downtime Survey Results, 2001.)
 At what point is the survival of your company at risk? 40% said 72 hours, 21% said 48
hours, 15% said 24 hours, 8% said 8 hours, 9% said 4 hours, 3% said 1 hour, 4% said
within the hour. (Source: 2001 Cost of Downtime Survey Results, 2001.)
