By paulo2
Published: 2007-08-29 04:36
1. What is iSCSI?
It is a network storage protocol on top of TCP/IP that encapsulates SCSI commands and data in TCP packets. iSCSI allows us to connect a host to a storage
array (or a tape drive) via a simple Ethernet connection. This solution is cheaper than a Fibre Channel SAN (Fibre Channel HBAs and switches are expensive). From
the host's point of view, the user sees the storage array's LUNs as local disks. iSCSI devices should not be confused with NAS devices (for example NFS). The
most important difference is that NFS volumes can be accessed by multiple hosts, whereas one iSCSI volume can be accessed by only one host. It is similar to the SCSI
protocol: usually only one host has access to one SCSI disk (the exception is a cluster environment). The iSCSI protocol is defined in the RFC 3720
document by the IETF (Internet Engineering Task Force).
Some critics say that iSCSI has worse performance than Fibre Channel and causes high CPU load on the host machines. I think that if we use Gigabit
Ethernet, the speed can be sufficient. To overcome the high CPU load, some vendors have developed iSCSI TOEs (TCP Offload Engine). This means that the
card has a built-in network chip which creates and computes the TCP frames. The Linux kernel doesn't support this directly, so the card vendors write their
own drivers for the OS.
Initiator:
The initiator is the name of the iSCSI client. The iSCSI client has block-level access to the iSCSI devices, which can be a disk, a tape drive, or a DVD/CD
writer. One client can use multiple iSCSI devices.
Target:
The target is the name of the iSCSI server. The iSCSI server offers its devices (disks, tapes, DVD/CD drives, etc.) to the clients. One device can be accessed by only one
client.
Discovery:
Discovery is the process that reveals the available targets to the initiator.
Discovery method:
Describes the way in which the iSCSI targets can be found. The following methods are currently available:
- Internet Storage Name Service (iSNS) - Potential targets are discovered by interacting with one or more iSNS servers.
- SendTargets - Potential targets are discovered by using a discovery address.
- SLP - Targets are discovered via the Service Location Protocol (RFC 4018).
- Static - A static target address is specified.
iSCSI naming:
The RFC document also covers iSCSI names. An iSCSI name consists of two parts: a type string and a unique name string.
Most implementations use the iqn format. Let's see our initiator name: iqn.1993-08.org.debian:01.35ef13adb6d
Our target name is similar (iqn.1992-08.com.netapp:sn.84211978); the difference is that it contains the serial number of the Netapp filer. Both names
(initiator and target) are user editable. We also need two IP addresses, one for the target and one for the initiator.
The following figure shows our demo environment. It consists of one Debian host, which is the iSCSI initiator and accesses the
iSCSI disk via the /dev/sdb device. The Netapp filer is our iSCSI target, which offers the /vol/iscsivol/tesztlun0 disk or LUN to the Debian Linux
host. An iSCSI session consists of a login phase, then a data exchange phase.
Solaris:
Solaris 10 (from the 1/06 release) supports iSCSI. The initiator driver can do the following:
- Multiple sessions to one target: this feature enables one client to create several iSCSI sessions to one target as needed, which increases
performance.
- Multipathing: with the help of the Solaris MPxIO or IPMP features, we can create redundant paths to the targets.
- 2 TB disks and CHAP authentication are also supported. The Solaris driver can use three of the discovery methods (all but SLP); see the configuration
sketch below. iSCSI disks can be accessed with the format program.
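For example, enabling SendTargets discovery with the Solaris iscsiadm tool looks roughly like this (a sketch, assuming our demo filer's portal address):

iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 192.168.2.222:3260
devfsadm -i iscsi

The last command creates the device nodes for the discovered disks, after which format can see them.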
HPUX:
HP has supported iSCSI since the HP-UX 11i v1 OS. This driver can discover targets via SLP (Service Location Protocol), which is also defined by the IETF
(RFC 4018). This means that the iSCSI initiators and targets register themselves at the SLP Directory Agent; after the registration, the iSCSI initiator queries
only the Directory Agent. The HPUX driver implements all of the discovery methods. CHAP authentication is also implemented, and the OS multipath tool
(PVLinks) is supported as well. The HPUX driver provides transport statistics, too.
AIX:
AIX supports iSCSI from version 5.2. The driver implements static target discovery only, as sketched below. We can use iSCSI disks with the AIX multipathing
feature called MPIO. CHAP authentication is also supported.
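On AIX the static configuration is roughly a line in the /etc/iscsi/targets file (host, port, target name) followed by a device scan; a sketch, assuming our demo target and the first iSCSI adapter, iscsi0:

192.168.2.222 3260 iqn.1992-08.com.netapp:sn.84211978
cfgmgr -l iscsi0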
None of these drivers allows us to boot from iSCSI; this can be a next step in driver development.
The Intel iSCSI implementation contains both target and initiator drivers and a handy tool for generating workloads.
The Open-iSCSI project is the newest implementation. It can be used with 2.6.11 and later kernels. We will test this driver with the Debian host. It contains
kernel modules and an iscsid daemon, which we start with:
/etc/init.d/open-iscsi start
The iSCSI operations can be controlled with the iscsiadm command. The command can discover targets, log in to and out of a target, and display
session information.
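For example, a few typical invocations (a sketch, using the portal address and target name that appear later in this article):

iscsiadm -m discovery -t sendtargets -p 192.168.2.222
iscsiadm -m node -T iqn.1992-08.com.netapp:sn.84211978 -p 192.168.2.222:3260 --login
iscsiadm -m session

The first command asks the given portal for its targets, the second logs in to one of the discovered targets, and the third lists the active sessions.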
Target implementations:
The iSCSI Enterprise Target is the open source target implementation for Linux. It is based on the Ardis iSCSI Linux implementation and requires the 2.6.14
kernel.
Openfiler is a quite popular Linux-based NAS implementation and offers NAS software with a web-based GUI.
Many other companies offer software-based commercial iSCSI target drivers (Amgeon, Mayastor, Chelsio).
The storage array manufacturers also offer native support for iSCSI (EMC, Netapp, etc.).
We have chosen a Netapp FAS filer for the testing, but you can test with free software; there is a link at the bottom of the article which shows how to
do it with Openfiler.
1. We install the open-iscsi initiator package on the Debian host and start the daemon as shown above. It is successful.
2. We discover the Netapp filer's iSCSI LUNs with the iscsiadm command. We have chosen the st (SendTargets) discovery method, which is currently the
only one implemented by this driver.
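The command itself looks roughly like this (a sketch, assuming the filer's standard portal), and it returns the portal/target line shown below:

iscsiadm -m discovery -t st -p 192.168.2.222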
192.168.2.222:3260,1000 iqn.1992-08.com.netapp:sn.84211978
3. We have to prepare the Netapp side. In this example we will create one 4GB LUN (as part of a RAID group) and assign it to the Debian host. First we should
check the free space:
nasa> df -k
The following command creates one 4GB LUN on the iscsiLunVol volume.
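A sketch, assuming Data ONTAP 7-mode syntax with the linux LUN type:

nasa> lun create -s 4g -t linux /vol/iscsiLunVol/tesztlun0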
Check it:
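A sketch of the verification, again assuming 7-mode syntax: lun show lists the LUNs, and iscsi initiator show produces the initiator listing shown below:

nasa> lun show
nasa> iscsi initiator show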
Initiators connected:
TSIH TPGroup Initiator
19 1000 debian (iqn.1993-08.org.debian:01.35ef13adb6d / 00:02:3d:00:00:00)
OK, we see the Debian host. Let's create the initiator group, called Debian2, and map the LUN to it.
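A sketch of the two commands, assuming 7-mode syntax, the initiator name from the listing above, and LUN ID 0:

nasa> igroup create -i -t linux Debian2 iqn.1993-08.org.debian:01.35ef13adb6d
nasa> lun map /vol/iscsiLunVol/tesztlun0 Debian2 0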
4. Let's go back to our initiator host. Now everything is prepared to access the 4GB LUN. The following command makes the disk accessible from the Linux
host:
iscsiadm -m node -T iqn.1992-08.com.netapp:sn.84211978 -p 192.168.2.222:3260 --login
5. We can use it as a normal disk: you can create one partition on it and mount it easily, as sketched below.
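A minimal sketch, assuming the LUN appeared as /dev/sdb, an ext3 filesystem, and /mnt as the mount point:

fdisk /dev/sdb
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /mnt

(In fdisk we create one primary partition spanning the whole disk, which becomes /dev/sdb1.)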
If you want to use sdb after the next reboot, you should change the startup entry
in the /etc/iscsi/nodes/<iscsi target name>/<ip address> file; after you change it, the iSCSI daemon will log in to this target automatically. Adding an
automatic mount entry (/dev/sdb1 /mnt) to the /etc/fstab file doesn't work, because the open-iscsi daemon starts later than the mounting of
filesystems. One simple script, which does the automatic mounting after the iSCSI daemon starts, can solve this problem:
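The relevant entry is the node startup mode; a sketch, assuming a recent open-iscsi release (older releases use the key node.conn[0].startup instead):

node.startup = automatic

And a minimal sketch of the mount script, assuming the partition and mount point from above; the name /etc/init.d/mount-iscsi is hypothetical, and the script simply waits for the iSCSI disk to appear after iscsid starts:

#!/bin/sh
# /etc/init.d/mount-iscsi (hypothetical name), to be run after open-iscsi starts.
# Wait up to 30 seconds for the iSCSI disk to appear, then mount it.
for i in $(seq 1 30); do
    [ -b /dev/sdb1 ] && break
    sleep 1
done
mount /dev/sdb1 /mnt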
The open-iscsi initiator implementation tolerates network errors well. If you disconnect the Ethernet cable and connect it again, the reconnection occurs
automatically; you only have to start the I/O processes again.
Another good solution for network failures is to create multiple paths to the one LUN (for example /dev/sdb and /dev/sdc): the initiator logs in to
two locations (two RAID controllers), and you join the two disks into a single logical disk using the Linux device-mapper multipath software (dmsetup), as sketched below.
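A sketch of such a multipath table, assuming our 4GB LUN (8388608 sectors of 512 bytes; blockdev --getsz /dev/sdb reports the real size) and that /dev/sdb and /dev/sdc (device numbers 8:16 and 8:32) are the two paths:

# One path group, round-robin between the two paths:
echo "0 8388608 multipath 0 0 1 1 round-robin 0 2 1 8:16 1000 8:32 1000" | dmsetup create iscsi-mpath

The new logical disk then appears as /dev/mapper/iscsi-mpath.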
As an alternative iSCSI target implementation I recommend Openfiler (if you can't test on a Netapp box). It is a free Linux-based NAS software
which can be managed with a web-based GUI.
The iSCSI setup process is quite similar in the case of other Unix implementations.
During the test I ran the Debian host in the VMware Player program, and my network connection was 100 Mbit/s. I could not reach more than 15 MB/s
read/write performance, but that isn't relevant here. With Gigabit Ethernet you can reach much better performance; the only drawback is that it increases the CPU
load (the CPU must build and compute the TCP frames).