FalconStor Software, Inc. 2 Huntington Quadrangle, Suite 2S01 Melville, NY 11747 Phone: 631-777-5188 Fax: 631-501-7633 Web site: www.falconstor.com
Copyright 2001-2010 FalconStor Software. All Rights Reserved. FalconStor Software, IPStor, TimeView, and TimeMark are either registered trademarks or trademarks of FalconStor Software, Inc. in the United States and other countries. Linux is a registered trademark of Linus Torvalds. Windows is a registered trademark of Microsoft Corporation. All other brand and product names are trademarks or registered trademarks of their respective owners. FalconStor Software reserves the right to make changes in the information contained in this publication without prior notice. The reader should in all cases consult FalconStor Software to determine whether any such changes have been made. This product is protected by United States Patents Nos. 7,093,127 B2; 6,715,098; 7,058,788 B2; 7,330,960 B2; 7,165,145 B2; 7,155,585 B2; 7,231,502 B2; 7,469,337; 7,467,259; 7,418,416 B2; 7,406,575 B2, and additional patents pending.
Contents
Introduction
Components
Benefits
Hardware/software requirements
NSSVA Specification and requirement summary
  Virtual machine configuration
  Supported Disk Configuration
NSSVA Configuration
  ESX server deployment planning
About this document
Knowledge requirements
High Availability
FalconStor NSS Virtual Appliance High Availability (HA) solution
Configuring the NSS Virtual Appliance Cross-Mirror failover
Power Control for VMware ESX server
  Launching the power control utility
Check Failover status
After failover
  Manual recovery
  Auto recovery
  Fix a failed server
Recover from a cross-mirror failure
  Re-synchronize Cross mirror on a virtual appliance
  Check resources and swap if possible
  Verify and repair a cross mirror configuration
Modify failover configuration
  Make changes to the servers in your failover configuration
Start/stop failover or recovery
  Force a takeover by a secondary server
  Manually initiate a recovery to your primary server
  Suspend/resume failover
Remove a failover configuration
Replication
Overview
Replication configuration
  Requirements
  Setup
  Create a Continuous Replication Resource
Check replication status
  Replication tab
  Event Log
  Replication object
Replication performance
  Set global replication options
  Tune replication parameters
Assign clients to the replica disk
Switch clients to the replica disk when the primary disk fails
Recreate your original replication configuration
Use TimeMark/TimeView to recover files from your replica
Change your replication configuration options
Suspend/resume replication schedule
Stop a replication in progress
Manually start the replication process
Reverse a replication configuration
Reverse a replica when the primary is not available
  Forceful role reversal
Relocate a replica
NSS Virtual Appliance User Guide ii
Remove a replication configuration
Expand the size of the primary disk
Replication with other NSS features
  Replication and TimeMark
  Replication and Failover
  Replication and Mirroring
  Replication and Thin Provisioning
Troubleshooting
NSS Virtual Appliance settings
  Checking the resource reservation
  Checking the virtual Network Adapter setting
  Optimizing SCSI software initiator performance
  Optimizing performance when using a virtual disk on a NSSVA for iSCSI devices
  Resolving slow performance on the Dell PERC6i
Cross-mirror failover
Appendix A - Checklist
A. VMware ESX Server system configuration
B. NSS Virtual Appliance system information
C. Network Configuration
D. Storage Configuration
Index
Introduction
FalconStor Network Storage Server Virtual Appliance (NSSVA) for VMware Infrastructure 3 and 4 is a pre-configured, production-ready virtual machine that delivers high-speed iSCSI and storage virtualization services through VMware's virtual appliance architecture. It provides enterprise-class data protection features, including application-aware, space-efficient snapshot technology that can maintain up to 64 point-in-time copies of each volume.

The FalconStor NSS Virtual Appliance can also be used as a cost-effective virtual iSCSI SAN solution by creating a virtual SAN on a VMware ESX server and turning internal disk resources into a shareable pool of storage. If the FalconStor NSS Virtual Appliance is deployed on a single VMware ESX server, that server can share storage resources with other servers in the environment. This is accomplished without the need for external storage arrays, SAN switches, or costly host bus adapters (HBAs). Internal data drives are detected by the software and incorporated into the management console through a simple GUI. At that point, storage can be provisioned and securely allocated via the iSCSI protocol, which operates over standard Ethernet cabling.

To enable high availability (HA), the FalconStor NSS Virtual Appliance can be deployed on two VMware ESX servers that can share storage with each other as well as with additional VMware ESX servers. In this model, each NSS Virtual Appliance maintains mirrored data from the other server. If one of the servers is lost, all virtual machines that were running on the failed server can restart using the storage resources of the remaining server. Downtime is kept to a minimum as applications are quickly brought back online.

Thin Provisioning technology and space-efficient snapshots further decrease costs by minimizing consumption of physical storage resources. The Thin Replication feature minimizes bandwidth utilization by sending only unique data blocks over the wire.
Built-in compression and encryption reduce bandwidth consumption and enhance security without requiring specialized network devices to connect remote locations with the data center or DR site. Tape backup for multiple remote offices can be consolidated to a central site, eliminating the need for distributed tape autoloaders and the associated management overhead.

NSSVA is supported under the VMware Ready program for virtual appliances. It is a TOTALLY Open solution for VMware Infrastructure that enables a virtual SAN (vSAN) service directly on VMware ESX servers. The local direct-attached storage becomes a shared SAN for all ESX servers on the iSCSI network. The ability to convert direct-attached storage within an ESX server opens the door for small to medium enterprises to deploy VMware Infrastructure initially without the added expense of a dedicated SAN appliance and to enjoy the broader benefits of VMware's business continuity and resource management features.
Additionally, most businesses, small and large, seek out VMware's advanced enterprise features: VMware VMotion (live migration of a running virtual machine from one ESX server to another), HA (High Availability, the automatic restart of virtual machines), and DRS (Distributed Resource Scheduling, which moves virtual machine workloads based on preset metrics or schedules).
Components
NSSVA consists of the following components:

NSS Virtual Appliance: A virtual machine that runs FalconStor NSS software. This virtual appliance delivers high-speed iSCSI and storage virtualization services through VMware's virtual appliance architecture: a plug-and-play VMware virtual machine running on a VMware ESX server. NSSVA is a TOTALLY Open virtual storage array and a VMware Certified Virtual Appliance.

FalconStor Management Console: The Windows management console that can be installed anywhere there is IP connectivity to the NSS Virtual Appliance.

Snapshot Agents: Collaborate with Windows NTFS volumes and applications in order to guarantee that snapshots are taken with full application-level integrity for the fastest possible recovery. A full suite of Snapshot Agents is available so that each snapshot can later be used without lengthy chkdsk and database/email consistency repairs. Snapshot Agents are available for Oracle, Microsoft Exchange, Lotus Notes/Domino, Microsoft SQL Server, IBM DB2 Universal Database, Sybase, and many other applications.

SAN Client: Host-side software that helps you register host machines with the NSS virtual appliance.
Benefits
High Availability
Using FalconStor's NSSVA virtual SAN appliances in an Active/Passive configuration enables VMware users to deploy a highly available shared storage environment that takes advantage of VMware Infrastructure enterprise features for better manageability and resiliency. FalconStor's NSSVA highly available virtual storage configuration supports iSCSI target failover between NSSVA virtual appliances installed on the initial two ESX servers, which is required to gain the VMware HA and DRS features. VMware VMotion support requires only a single NSSVA on one ESX server in an ESX server cluster.
MicroScan Replication
In the branch or remote office, VMware Infrastructure and FalconStor NSSVA can help to reduce operational costs through server and storage consolidation to a central data center. FalconStor's MicroScan Replication option with built-in WAN acceleration completes remote office server and storage consolidation IT strategies by providing highly efficient replication of branch or remote office data to your central data center. MicroScan Replication also reduces the amount of information replicated by ensuring that data already sent to the central data center is not sent more than once, thereby reducing traffic on the WAN.
Cross-Mirror failover
FalconStor NSSVA supports Cross-Mirror failover, a non-shared storage failover option that provides high availability without the need for shared storage. It is used with virtual appliances containing internal storage; mirroring is facilitated over a dedicated, direct IP connection. This option removes the requirement for shared storage between two partner storage server nodes and allows data functions to be swapped from a failed virtual disk on the primary server to the mirrored virtual disk on the secondary server. The disks are swapped back once the problem is resolved.
Three Versions
NSSVA is available in the following three versions:

NSSVA Standard Edition
- Includes two TB of storage (upgradable to four TB).
- Supports up to 10 clients.
- Includes the following client application support: VMware Application Snapshot Director, Storage Replication Adapters for VMware SRM*, SAN Client, and Application Snapshot Agent.
  *Supported in pilot environments only.

NSSVA Standard Edition trial
- Includes all of the features of the standard edition for a 30-day period.
- Can be upgraded to the standard edition.

NSSVA Lite (free iSCSI SAN) Edition
- Does not include high availability, mirroring, or replication.
- Five-client limit.
- Two TB storage capacity.
- Can be upgraded to the standard edition.
- Does not include the following client application support: VMware Application Snapshot Director, Storage Replication Adapters for VMware SRM, SAN Client, and Application Snapshot Agent.
For advanced configuration of high availability, refer to the documentation link that is included in your registration E-mail.
Hardware/software requirements
NSS Virtual Appliance: NSSVA supports the following VMware ESX Server platforms:
- VMware ESX Server 3.5 Update 5
- VMware ESXi 3.5 Update 5
- VMware ESX Server 4.0 Update 1
- VMware ESXi 4.0 Update 1
All necessary critical patches for VMware ESX server platforms are available on the VMware patch download web site: http://support.vmware.com/selfsupport/download/.

Management console: A virtual or physical machine running any version of Microsoft Windows that supports the Java 2 Runtime Environment (JRE).

Server hardware: FalconStor Virtual Appliances for VMware are supported only on VMware certified server hardware. To ensure system compatibility and stability, refer to the online compatibility guide: http://www.vmware.com/resources/compatibility/search.php?action=base&deviceCategory=server. To download the Systems Compatibility Guides for ESX Server 3.5 and ESX Server 3i, go to https://www.vmware.com/resources/techresources/1032.

64-bit processor: For maximum virtualization and iSCSI SAN service, NSSVA uses 64-bit system architecture. To verify 64-bit virtual machine support, download the VMware utility below and execute it on the ESX server to see if the CPU supports 64-bit: http://downloads.vmware.com/d/details/processor_check_5_5_dt/dCpiQGhkYmRAZQ==
Cross-mirror failover requirements:
- Each server must have identical internal storage.
- Each server must have at least two network ports (one for the required crossover cable). The network ports must be on the same subnet.
- Only one dedicated cross-mirror IP address is allowed for the mirror. The IP address must be 192.168.n.n.
- Only virtual devices can be mirrored. Service-enabled devices and system disks cannot be mirrored.
- The number of physical disks on each machine must match, and the disks must have matching ACSLs (adapter, channel, SCSI ID, LUN).
- When failover occurs, both servers may have partial storage. To prevent a possible dual-mount situation, we strongly recommend that you use a hardware power controller, such as IPMI. Refer to Power Control for VMware ESX server on page 42 for more information.
- Prior to configuration, virtual resources can exist on the primary server as long as the identical ACSL is unassigned or unowned by the secondary server. After configuration, pre-existing virtual resources will not have a mirror. You will need to use the Verify & Repair option to create the mirror.
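As a quick sanity check before configuring the dedicated link, the address-range rule above can be verified with a small script. This is a sketch only: the is_crossmirror_ip helper is our own illustrative name, not a FalconStor tool; the 192.168.n.n requirement comes from the list above.

```shell
#!/bin/sh
# Check that a proposed cross-mirror address falls in the required
# 192.168.n.n range before it is assigned to the dedicated link.
# (is_crossmirror_ip is an illustrative helper, not a FalconStor tool.)
is_crossmirror_ip() {
  case "$1" in
    192.168.*.*) return 0 ;;   # matches the required 192.168.n.n form
    *)           return 1 ;;
  esac
}

for addr in 192.168.10.1 10.0.0.5; do
  if is_crossmirror_ip "$addr"; then
    echo "$addr: usable for the cross-mirror link"
  else
    echo "$addr: outside 192.168.n.n, not allowed"
  fi
done
```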
BIOS VT Support
The VMware ESX server must be able to support hardware virtualization for the 64-bit virtual machine. To verify BIOS VT support, see the VMware Knowledge Base article that explains how to run the esx command: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011712

CPU: NSSVA reserves CPU resources of 2000 MHz for storage virtualization, iSCSI service, snapshot, and replication processes, ensuring sufficient resources remain for the VMware ESX server and multiple virtual machines. The specifications are:
- Two dual-core 1.5 GHz 64-bit processors, or
- One quad-core 2.0 GHz 64-bit processor

Memory: NSSVA reserves 2 GB of memory for storage virtualization, iSCSI service, snapshot, and replication processes, ensuring sufficient resources remain for the VMware ESX server and multiple virtual machines. The specifications are:
- 500 MB for the VMware ESX server system
- 2 GB for the FalconStor NSS Virtual Appliance
- More memory for the other virtual machines running on the same ESX server
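On ESX classic hosts with a service console, the same two CPU capabilities can be spot-checked from the shell. This is a rough sketch, not a substitute for the VMware utilities above: check_cpu_flags is our own helper name, and the vmx/svm flag only shows that the CPU is VT-capable; VT must still be enabled in the BIOS, which this check cannot confirm.

```shell
#!/bin/sh
# Spot-check the CPU flags line for 64-bit (lm) and hardware
# virtualization (vmx on Intel, svm on AMD) support.
check_cpu_flags() {
  flags="$1"
  case "$flags" in
    *lm*) ;;   # 64-bit long mode present
    *) echo "missing 64-bit (lm) support"; return 1 ;;
  esac
  case "$flags" in
    *vmx*|*svm*) echo "64-bit and VT-capable"; return 0 ;;
    *) echo "missing vmx/svm (VT) support"; return 1 ;;
  esac
}

# Run against the first CPU's flags line (non-fatal if flags are absent).
check_cpu_flags "$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null)" || true
```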
Storage: NSSVA supports up to 2 TB of storage for iSCSI storage provisioning and snapshot data. Additional storage can be added in 1 TB increments. Storage is allocated from the standard VMware virtual disk on the local storage or the raw device disk on SAN storage. NSSVA also supports Storage Pools, into which you can add different-sized virtual disks. The system allocates resources for storage provisioning or snapshots on demand.

Network Adapter: NSSVA is pre-configured with two virtual network adapters that manage your multiple-path iSCSI connection or dedicated cross-mirror link. For the best network performance, the ESX server needs two physical network adapters for one-to-one mapping to the independent virtual switches and the virtual network adapters of NSSVA. In addition, the ESX server may need extra physical network adapters for Virtual Infrastructure management, VMware VMotion, or physical network redundancy:
- Two physical network adapters for one-to-one virtual network mapping to FalconStor NSSVA.
- Optional physical network adapters linked to one virtual switch for physical network adapter redundancy.
- Optional physical network adapters for virtual center management through an independent network.
- Optional physical network adapters for the VMotion process through an independent network.
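On an ESX classic host, the one-to-one mapping described above is typically built with the esxcfg-vswitch service-console commands. The fragment below is illustrative only; the vSwitch, uplink (vmnic), and port group names are placeholders of our own choosing, not values required by NSSVA.

```shell
# Create a dedicated vSwitch for each NSSVA virtual adapter and bind
# exactly one physical uplink to it (names below are examples).
esxcfg-vswitch -a vSwitch1                 # new virtual switch
esxcfg-vswitch -L vmnic1 vSwitch1          # attach one physical NIC as uplink
esxcfg-vswitch -A "NSSVA-iSCSI-1" vSwitch1 # port group for the first adapter

esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A "NSSVA-iSCSI-2" vSwitch2

esxcfg-vswitch -l                          # list switches to verify the mapping
```

These commands only exist on an ESX host's service console, so treat this as a host configuration fragment rather than a portable script.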
Minimum ESX server hardware requirements:
- CPU: Two dual-core 1.5 GHz 64-bit processors OR one quad-core 2.0 GHz 64-bit processor
- Memory: 2 GB*
- Storage: Up to 4 TB free storage space
- Network: Two physical network adapters

Using ESX requires specific hardware and system resources. If you are using ESX 4, refer to the VMware Online Library for specific ESX hardware requirements: http://pubs.vmware.com/vsp40/wwhelp/wwhimpl/js/html/wwhelp.htm#href=install/c_esx_hw.html
Note: *Memory requirements may vary depending upon your usage. Recovering a volume using more than 300 GB of TimeMark data may require additional RAM.
SAN Disks*

Note: *Assigning an iSCSI array's LUN directly to an NSSVA's iSCSI Initiator is not supported. The physical iSCSI array's LUN must be provisioned to the ESX server's iSCSI Initiator, and the disks must then be configured per the instructions described in this guide.
NSSVA Configuration
ESX server deployment planning
The FalconStor NSS Virtual Appliance is a pre-configured, ready-to-run solution installed on a dedicated ESX server in order to function as a storage server. NSSVA can also be installed on an ESX server that runs other virtual machines. To deliver high availability storage service, NSSVA can be installed on a second VMware ESX server that functions as a standby storage server with redundant cross-mirror storage.

Dedicated NSSVA: When NSSVA is installed on a dedicated ESX server, no other virtual machine runs on the system.

Dedicated High Availability NSSVA: When NSSVA is installed on two dedicated ESX servers, they can be configured for Active/Passive high availability.

Shared NSSVA: When NSSVA is installed on an ESX server on which other virtual machines are installed or will be installed, NSSVA shares CPU and memory resources with the other virtual machines and still offers storage services for the other virtual machines on the same or other ESX servers.

Shared HA NSSVA: When NSSVA is installed on two ESX servers on which other virtual machines are installed or will be installed, NSSVA shares CPU and memory resources with the other virtual machines. The two NSSVAs can be configured for Active/Passive high availability.
Knowledge requirements
Individuals deploying NSSVA should have administrator-level experience with VMware ESX and will need to know how to perform the following tasks:
- Create a new virtual machine from an existing disk
- Add new disks to an existing virtual machine as Virtual Disks or Mapped Raw Disks
- Troubleshoot virtual machine networks and adapters

Although not required, it is also helpful to have knowledge of the technologies listed below:
- Linux
- iSCSI
- TCP/IP
Installation script for VMware ESX server 3.5 and 4: The generic VMware ESX server provides a local console and SSH remote console connection for management. You can launch the NSSVA installation script on a local or remote console to install NSSVA.
Virtual Appliance Import for VMware ESX server 3.5, ESXi 3.5, ESX server 4, and ESXi 4: The latest VMware ESX server 4 and the ESXi hypervisor support virtual appliance import from a VMware Infrastructure Client. If the VMware ESXi server does not support local and remote consoles, you will only be able to use the virtual appliance import method to install NSSVA on the system.
Before installation, you must ensure that the CPU supports 64-bit operating systems and is compatible with the VMware ESX system, and that the system BIOS supports Virtualization Technology (VT).

To verify 64-bit virtual machine support, go to http://downloads.vmware.com/d/details/processor_check_5_5_dt/dCpiQGhkYmRAZQ==

To verify BIOS VT support, go to http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011712
- System memory on the ESX server must be at least 2 GB.
- The ESX server must support 64-bit virtual machines.
- The ESX server must have the BIOS VT function enabled.

4. Enter the number of the VMFS volume where you will be installing the NSS Virtual Appliance system.

The installation script copies the system image source and extracts it to the specified volume. The NSS Virtual Appliance is then registered onto the ESX system.

Note: For NSSVA Lite: While extracting the NSS virtual appliance system, you will be asked to enter your login credentials for the target (i.e., "Please enter login information for target vi://127.0.0.1").
Installing NSSVA via Virtual Appliance Import from a downloaded zip file
1. On the client machine, unzip the NSSVA.zip file and extract the package to any folder. For example, create a folder called FalconStor-NSSVA.
2. If not already active, launch the VMware Infrastructure/vSphere Client and connect to the ESX server with root privileges.
3. Select File --> Virtual Appliance --> Import (VI Client) / Deploy OVF template (vSphere Client).
4. For the Import Location of the Import Virtual Appliance wizard, click the Browse button on the "Import from file" option. Then select the folder to which you extracted the package (i.e., the FalconStor-NSSVA folder), expand the folder, and select the file FalconStor-NSSVA.ovf in the FalconStor-VA folder. The Virtual Appliance Details page displays the virtual appliance information for FalconStor NSSVA.
5. Click Next to continue the import. The Name and Location page displays the default appliance name: FalconStorNSSVA. You can change the name of the virtual machine. This change will not be applied to the actual appliance name.
6. On the Datastore list, click the datastore containing at least 26 GB of space for the NSSVA system import.
7. For Network Mapping, select the virtual machine network of the ESX server that the NSSVA virtual Ethernet adapter will link to.
8. On the Ready to Complete screen, review all settings and click Finish to start the virtual appliance import task. The virtual appliance import status window displays the completion percentage. It usually takes five to ten minutes to complete this task.
9. Click Close when the completion percentage reaches 100% and the import window displays Completed Successfully.

Note: When using OVF import to install the NSSVA Lite version, you will need to manually add a 100 GB data disk in order to launch the Basic environment configuration.
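For scripted deployments, the same import can usually be driven from the command line with VMware's ovftool, if it is installed on the client machine. This is a sketch only: the datastore, network, and host names are placeholders, and build_import_cmd is our own illustrative helper, not part of ovftool.

```shell
#!/bin/sh
# Compose an ovftool command line equivalent to the wizard steps above.
# build_import_cmd is an illustrative helper; adjust names for your site.
build_import_cmd() {
  ovf="$1"; ds="$2"; net="$3"; host="$4"
  printf 'ovftool --datastore=%s --network="%s" %s vi://root@%s/' \
    "$ds" "$net" "$ovf" "$host"
}

# Example invocation with placeholder values:
build_import_cmd FalconStor-NSSVA.ovf datastore1 "VM Network" esx01.example.com
echo
```

Running the printed command prompts for the root password and performs the import without the GUI wizard.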
The client will be installed to the following location: /usr/local/ipstorclient

It is important that you install the client to this location. Installing the client to a different location will prevent the client driver from loading.

3. Install the Snapshot Director software.

   # rpm -ivh asd_vmware-x.xx-xxxx.i386.rpm

Note that during installation, several firewall ports will be opened to allow for snapshot notification and command line communications.

Note: The ASD is not available in the NSSVA Lite or Trial version.
During the installation, the Microsoft Digital Signature Warning window may appear to indicate that the software has not been certified by Microsoft. Click Yes to continue the installation process.

4. Accept the license agreement.
5. When done, click Finish.
Notes:
- If you are running Windows Server 2003 SP2 on the virtual machine and the firewall is enabled, you need to open TCP ports 11576, 11582, and 11762 for the SAN Client.
- The SAN Client is not available in the NSSVA Lite or Trial version.
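On a Windows Server 2003 guest, the three SAN Client ports noted above can be opened from a command prompt with netsh. The sketch below simply emits the three commands (the rule names are our own choice, not a FalconStor convention); run the printed lines on the Windows guest itself.

```shell
#!/bin/sh
# Print the netsh commands that open the SAN Client TCP ports listed
# in the note above (11576, 11582, 11762) on Windows Server 2003.
for port in 11576 11582 11762; do
  printf 'netsh firewall add portopening TCP %s FalconStorSANClient-%s\n' \
    "$port" "$port"
done
```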
To install a FalconStor Snapshot Agent on a Windows system:

1. Navigate to the NSS Agents zip file that you copied earlier to a Windows machine.
2. Extract all of the files to a temporary installation directory.
3. Launch the selected Snapshot Agent setup program.
4. When prompted, review the License Agreement and agree to it to continue.
After accepting the license agreement, the installation program will install the Snapshot Agent into the same directory where the SAN Client is installed.

5. When done, click Finish.
Install NSS Virtual Appliance

The SAN client automatically starts the Snapshot Agent for you. In addition, it will be automatically started each time the client is restarted.
Note: The Snapshot Agent is not available in the NSSVA Lite or Trial version.
5. Highlight Host Name and press Enter to configure the host name of the virtual appliance.
6. Highlight Time Zone and press Enter to configure the time zone. Select whether you want to set the system clock to UTC (the default is No). Scroll up and down to search for the correct time zone for your location.
7. Highlight Root Password and press Enter to set a new root password for the virtual appliance. You will need to enter the new password again on the confirmation window.
8. Highlight Network Configuration and press Enter to modify your network configuration. Select eth0 or eth1 to change the IP address setting. Answer No to using DHCP and then set the IP address of the selected virtual network adapter. If you want to set the IP subnet mask, press the down arrow to move the cursor to the netmask setting. The default IP addresses are listed below:
   eth0: 169.254.254.1/255.255.255.0
   eth1: 169.254.254.2/255.255.255.0
9. Repeat the network configuration to set the IP address of the other virtual network adapter.
10. Highlight Default Gateway and press Enter to change the default gateway of the virtual appliance.
11. Highlight Name Server and press Enter to modify the name server setting. You can add up to four DNS server records to the virtual appliance setting.
12. Highlight NTP Server configuration and press Enter to add up to four NTP server records to the virtual appliance setting.
13. After making all configuration changes, tab over to Finish and press Enter. The utility will list the configuration changes you made.
14. Click Yes to accept and apply the settings on the virtual appliance.
The update VMware tools script is launched and you are prompted to update VMware tools.

16. Enter the ESX inventory host name of this NSSVA (indicated by the display name of the NSSVA on the ESX server).
17. Enter the ESX/vCenter server IP address.
18. Enter the ESX/vCenter server login user name.
19. Enter the ESX/vCenter server login password.

If the VMware tools package is old, it will be updated; otherwise, it will not be replaced. If an error is encountered during the update, such as an inability to reach the ESX/vCenter server, you will be prompted to Force (press F) the update or Cancel (press C). If you cancel the update, the NSSVA VMware tools will not be changed and you will need to update VMware tools via the vSphere client. Alternatively, you can enter "chk_vm.sh" in the NSSVA serial console to re-run the update script.

Once the installation is complete, you can begin configuration of the NSSVA via the FalconStor Management Console. Refer to the Configuration and Management chapter for details. Once configuration is complete, refer to the checklist at the end of this guide.
Account Management
There are three types of accounts for the virtual appliance, each with different permission levels. The three accounts have the same default password.
fsadmin - Can perform any VA operation other than managing accounts. fsadmin accounts are also authorized for VA client authentication.
fsuser - Can manage virtual devices assigned to them and allocate space from the storage pool(s) assigned to them. In addition, they can create new SAN/NAS resources, clients, and groups, assign resources to clients, and join resources to groups, as long as they are authorized. VA users are also authorized for VA client authentication. Any time a VA user creates a new SAN/NAS resource, client, or group, access rights to that object are automatically granted to the user.
root - Has full privileges for all system operations. Only root can manage user accounts and system configuration (maintenance).
The connected NSS Virtual Appliance is listed on the FalconStor Management Console as shown below. The default host name is "FalconStor-NSSVA".
Register keycodes
If your computer has Internet access, the console registers a keycode automatically after you enter it; otherwise, the registration fails. You have a 60-day grace period to use the product without a registered keycode (30 days for a trial). If the machine cannot connect to the Internet, you can perform offline registration.
To register a keycode:
1. Highlight an unregistered keycode and click the Register button.
2. Click Next to start the activation.
3. On the Select the method to register this license page, indicate whether you want to perform Online registration via the Internet or Offline registration.
4. For offline registration, enter a file name to export the license information to local disk, then e-mail the file from a computer with Internet access to: activate.keycode@falconstor.com. It is not necessary to write anything in the subject or body of the e-mail. If your e-mail is working correctly, you should receive a reply within a few minutes.
5. When you receive the reply, save the attached signature file to the same local disk.
6. Enter the path to the file saved in step 5 and click Send to import the registration signature file.
7. You will see a message stating that the license was registered successfully.
You can check the block size of your volume via the VMware Infrastructure Client:
1. Launch the VMware Infrastructure Client, connect to the ESX server, and log in with an account that has root privileges.
2. Click the ESX server in the inventory and then click the Configuration tab.
3. On the Configuration tab, click Storage under the Hardware list. Then right-click one of the datastores and click Properties. On the Volume Properties, you can see the Block Size and the Maximum File Size in the Format information. The screen below displays VMware Volume properties with the block size and maximum file size information.
6. Check the Support clustering features such as Fault Tolerance option to force creation of an eagerzeroedthick disk.
Notes: Do not set EagerZeroedThick on both the system/data vmdks and the guest VM's vmdks. Creating an EagerZeroedThick disk is a time-consuming process; you may experience a significant wait.
7. If the volume of the NSS Virtual Appliance system does not have enough space to store the new virtual disk, click Specify a datastore and then click the Browse button. Select a datastore with available free space.
8. Click Next to keep the default values on the Specify Advanced Options screen.
9. Review your choices and click Finish to complete the virtual disk creation. In the FalconStor-NSSVA Virtual Machine properties, you will see New Hard Disk (adding) in the hardware list.
10. Click OK to save the settings; the new virtual disk will be created on the datastore.
11. Repeat the steps above to add another virtual disk for virtualization storage.
5. On the Disk Preparation screen, click the Device Category drop-down list, select Reserved for Virtual Device, and then click OK. Enter YES to confirm the change. When the task has completed, a message stating "Physical device category has been changed successfully" displays.
6. Repeat steps 4 and 5 to change the device category of all newly detected devices to "Reserved for Virtual Device".
7. Highlight Physical Resources and click to expand Storage Pools. Right-click StoragePool-Default and click Properties.
8. On the Storage Pool Properties screen, click the Select All button and then click OK to add all newly detected devices into the storage pool.
9. Click and expand StoragePool-Default to see all the new devices that have been added into the pool. All devices must be added into the storage pool for central resource management.
1. Launch the VMware Infrastructure Client and connect to the ESX server with an account that has root privileges.
2. Once you are connected to the server inventory, highlight the ESX server and click the Configuration tab.
3. On the ESX server Configuration screen, click Storage Adapters and locate the device under iSCSI Software Adapter, for example: vmhba32.
4. Select the iSCSI software adapter device and then click Properties.
5. On the iSCSI initiator (device name) Properties dialog, check the iSCSI properties and record the iSCSI name, for example: iqn.1998-01.com.vmware:esx03.
6. Click the Dynamic Discovery tab and then click the Add button.
7. On Send Targets, enter the IP address of the NSS Virtual Appliance.
8. It will take several minutes to complete the configuration.
9. Once the IP address has been added to the iSCSI server list, click Close to complete the setting.
1. Launch the FalconStor Management Console and connect to the NSS Virtual Appliance with IPStor administrator privileges.
2. Click and expand the NSSVA, then right-click SAN Clients and click Add.
3. The Add Client Wizard launches.
4. Click Next to start the administration task.
5. When prompted to Select Client Protocols, click to enable the iSCSI protocol and click Next.
6. Select Target IP by enabling one or both networks providing the iSCSI service.
7. On the Set Client's initiator screen, the iSCSI initiator name of the ESX server displays if the iSCSI server was added successfully. Click to enable it and then click Next.
8. On Set iSCSI User Access, change the setting to Allow unauthenticated access or enter the CHAP secret (12 to 16 characters).
9. On Set iSCSI Options, keep the default QoS setting.
10. On Enter the Generic Client Name, enter the Client IP address using the ESX server's IP address.
11. On Select Persistent Reservation Option, keep the default setting and click Next.
12. On Add the client, review all configuration settings and then click Finish to add the SAN client to the system.
13. Expand SAN Clients. You will see the newly created SAN client for the ESX server and the iSCSI target. The screen below displays the SAN client and iSCSI target created for the ESX server connection.
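The CHAP secret entered in the Set iSCSI User Access step must be 12 to 16 characters. A minimal shell sketch of that length check follows; the helper name and sample secret are illustrative, not part of the product:

```shell
# Hypothetical helper: validate a CHAP secret against the 12-16 character
# requirement from the Set iSCSI User Access step.
chap_secret_ok() {
    len=${#1}
    [ "$len" -ge 12 ] && [ "$len" -le 16 ]
}

# Example with an illustrative 16-character secret:
if chap_secret_ok "esx03chapsecret1"; then
    echo "secret accepted"
else
    echo "secret rejected"
fi
```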
7. Select Express as the creation method and enter the allocated size of the new SAN resource you are creating.
8. When prompted to Enter the SAN Resource Name, you can keep the default name created by the system or change the name manually.
9. Confirm the allocated size on the Create the SAN resource screen and then click Finish to create the SAN resource.
10. Once the SAN resource has been created, the Create SAN Resource Wizard prompts you to assign a SAN client to it. If you have already created the SAN client for the ESX server, click Yes. The Assign a SAN Resource Wizard launches automatically.
11. Click Next to start the administration task.
12. Select the iSCSI target to be assigned to the SAN resource.
13. When prompted to Select LUN Numbers for the resources, click Next to keep the default setting.
14. Click Finish to assign the iSCSI target(s) to the SAN resource.
If you answered No during the Assign SAN Client process, you can perform this task later by right-clicking the specific SAN resource name under the SAN Resources tree and then clicking Assign. The screen below displays the SAN client and iSCSI target created for the ESX server connection.
Note: For advanced configuration of high availability, refer to the documentation link that was included in your registration e-mail.
2. Click Advanced Settings under Hardware and enable the VMDirectPath Configuration option.
3. Reboot. 4. Return to the Configuration tab, and navigate back to the Hardware Advanced Settings.
5. Click the Edit link to add the PCI device ports to the Passthrough List.
6. Reboot again. 7. Return to the Hardware Configuration Advanced Settings section to confirm the passthrough ports have been enabled: Once complete, you can follow the steps from the next few sections to add one or several ports to a given virtual machine. The following restrictions apply:
- If you are using a dual-port NIC/HBA, the ENTIRE NIC is set to passthrough mode. This means both ports will disappear from the VMkernel.
- If you are using a dual-port NIC/HBA, the ENTIRE NIC is given to one specific virtual machine. Therefore, whether you assign one port or two ports to the VM, both ports are "reserved" and neither can be given to another virtual machine. Passthrough operates at the PCI device level, so it is an all-or-nothing rule.
- Once a virtual machine (VM) has a passthrough port assigned to it (following the procedures below), the VM can no longer be VMotioned (nor DRS'ed, HA'ed, or FT'ed) to another ESX host. It becomes a permanent resident of the current ESX host.
- Once a VM has a passthrough port assigned to it, it can no longer take advantage of memory over-allocation (overcommitment); instead, the entire allocated virtual RAM must be RESERVED (done automatically). Thus, enough RAM must be available on the host for the VM to power on.
2. Select the appropriate ovf file. 3. Right-click and select Upgrade Virtual Hardware. 4. Select Edit Settings.
5. Click the Hardware tab and select the appropriate network adapter.
The Add Hardware screen displays. 7. Select the type of device you wish to add.
Modify a FalconStor Virtual Appliance (for ESX 3.5) to load VMware Drivers
The procedures below illustrate how to modify a FalconStor Virtual Appliance (made for ESX 3.5) to properly load the updated VMware drivers from VMware Tools for the updated Virtual Machine Hardware v7 (under vSphere v4).
1. Power on the NSS-VA virtual machine. During the boot-up process, you may see several FAILED error messages, which you can disregard for now.
2. Log in to the system from the console with the user name root and password IPStor101.
3. Perform a VMware Tools upgrade.
4. Click Abort at the installation screen, then press Ctrl+C on the following screen to exit back to the prompt.
5. Connect to the host device. The Install/Upgrade Tools screen displays.
6. Select Interactive Tools Upgrade, then click OK. The first time you try to install/upgrade the VMware Tools, you will get an error and be prompted to remove the existing soft links. Once the symbolic links are removed, re-run the installation script (vmware-install.pl) and press [ENTER] through the next few screens.
7. Reboot the machine ("sync;sync;reboot" from the command prompt) and then configure the virtual appliance per the standard installation guide. The "vaconfig" script will be executed automatically, and you can then configure your network settings, hostname, NTP, DNS, etc. The virtual appliance will reboot automatically.
Modify a FalconStor Virtual Appliance (for ESX 3.5) to load the NIC/HBA driver
This step is necessary for virtual appliances that pre-date RHEL 5.3. If you are using Red Hat Enterprise Linux 5.3 (RHEL 5.3), the Intel drivers for the 10GbE NIC (or QLogic 8Gbps FC) are already installed. If not, you will need to download, compile, and install the Intel drivers from Intel's web site.
1. Copy the file (e.g., ixgbe-2.0.38.2.tar.gz) to /root. Since your network is down at this point, the easiest way is to create an ISO file and mount the ISO to the CD-ROM drive of the virtual machine, using the same commands as earlier to mount the CD (mount /dev/cdrom /media).
2. Run:
# tar xvfz ixgbe-2.0.38.2.tar.gz # cd ixgbe-2.0.38.2 # vi README
3. Follow the build and installation instructions in the README, then load the driver (or simply modprobe ixgbe).
4. Configure your network cards using the "vaconfig" command (if they are eth0 and eth1). If you are creating new files (for eth2 and eth3, in case you did NOT remove the original eth0 and eth1 virtual NICs from the VMware VM's settings), use the following commands:
# cd /etc/sysconfig/network-scripts/ # vi ifcfg-eth2 DEVICE=eth2 BOOTPROTO=none IPADDR=192.168.88.112 NETMASK=255.255.255.0 ONBOOT=yes TYPE=Ethernet MTU=9000 DHCP_HOSTNAME=
Note: Make sure to set the MTU to 9000 if you want to use Jumbo Frames.
5. Repeat if you need to configure eth3. Make sure to modify all parameters from the file above to match the proper settings (IPADDR, DEVICE, etc.).
6. Update the "/etc/modprobe.conf" file to make sure the ixgbe driver is loaded during startup:
# vi /etc/modprobe.conf
alias eth0 ixgbe
alias eth1 ixgbe
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptspi
alias scsi_hostadapter2 ata_piix
install pciehp /sbin/modprobe -q --ignore-install acpiphp; /bin/true
install pcnet32 /sbin/modprobe -q --ignore-install vmxnet; /sbin/modprobe -q --ignore-install pcnet32 $CMDLINE_OPTS; /bin/true
alias char-major-14 sb
options sb io=0x220 irq=5 dma=1 dma16=5 mpu_io=0x330
Note: Replace the name of the .img file in the command above with the .img filename indicated in your menu.lst file, in the VERY LAST LINE, for example:
# cat /boot/grub/menu.lst
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file.
8. After the reboot, run "ifconfig" and confirm your changes.
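The ifcfg file created in steps 4-5 can also be generated with a short script. This is a sketch: the device name and IP values are illustrative, and OUTDIR defaults to the current directory here so it can be tried safely (on the appliance it would be /etc/sysconfig/network-scripts):

```shell
# Sketch: generate an ifcfg file for an extra virtual NIC (values illustrative).
OUTDIR=${OUTDIR:-.}                 # on the appliance: /etc/sysconfig/network-scripts
DEV=${DEV:-eth3}
IPADDR=${IPADDR:-192.168.88.113}

cat > "$OUTDIR/ifcfg-$DEV" <<EOF
DEVICE=$DEV
BOOTPROTO=none
IPADDR=$IPADDR
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Ethernet
MTU=9000
DHCP_HOSTNAME=
EOF
echo "wrote $OUTDIR/ifcfg-$DEV"
```

As in step 5 above, adjust IPADDR and DEVICE to match your environment before deploying the file.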
High Availability
FalconStor NSS Virtual Appliance High Availability (HA) solution
FalconStor NSS Virtual Appliance supports High Availability storage service via two NSS Virtual Appliances in a Cross-Mirror and iSCSI service failover design. The High Availability (HA) option is not available for the Single Node Edition, the Lite version or the trial version of NSSVA. For best results using the high availability architecture, make sure all of the configurations follow the best practice instructions and guidelines in this chapter.
3. Right-click the primary NSSVA, point to the failover appliance, and launch the Failover Setup Wizard. The Failover Setup Wizard checks to make sure the iSCSI option is enabled on the primary NSSVA. Make sure the iSCSI option is also enabled on the secondary NSSVA. iSCSI is the default service running on the NSSVA.
4. Click Next on the welcome page of the Failover Setup Wizard to start the cross-mirror configuration.
5. Click Yes to re-scan the physical devices to guarantee that the device number and size are equal on both servers. After the wizard completes, you will see information suggesting that you enable power control. You can ignore this message.
6. At the Configure Cross Mirror Option screen, click Next to start the disk preparation and mirror relationship creation.
To save the system configuration for failover purposes, a configuration repository is required on the failover primary server.
7. Click OK to close the information dialog.
8. When prompted to Select the Secondary Server and the Cross Mirror Remote Server IP address, enter the IP Address on the Primary Server using the eth1 IP address of the primary NSSVA. Then enter the IP Address on the Secondary Server using the eth1 IP address of the secondary NSSVA. Alternatively, you can enter the primary server IP address and then click the Find button to have the wizard retrieve the IP address in the same IP subnet from the secondary NSSVA. The wizard completes the task of checking the secondary server settings.
9. When prompted to Configure Remote Storage, make sure all devices are checked and enabled.
10. Click OK to close the dialog. The Enable Configuration Repository Wizard launches.
11. Click Next to start the configuration task.
12. When prompted to Select the Physical Resources for the Virtual Device(s), select a physical device with at least 10 GB of available space to hold the configuration repository. If all physical devices are 10 GB or larger, you can click Next to continue the configuration.
13. When prompted to Select the Physical Device, select a physical device that is at least 10 GB and click Next.
14. Click Finish to confirm the selected physical device on the Create the Configuration Repository screen and complete creation of the configuration repository.
15. The IPStor User List displays, prompting you for a user name and password. Make sure they match on both the primary and secondary NSSVA and click OK. The Select the Failover Subnets dialog displays as the wizard retrieves the IP addresses of both the primary and secondary NSSVA and the IP subnet (except the interface used by Cross-Mirror).
16. Confirm all information is correct and click Next.
17. Enter the IP address of the server <the primary NSSVA host name> using the client access IP address. The ESX server iSCSI Software Adapter uses this IP address to log into the iSCSI target and connect the SAN resource. This IP address will fail over to the secondary NSSVA if the primary NSSVA encounters a problem.
Note: It is recommended that you use the original eth0 IP address here so you will not need to re-configure the FalconStor Management Console connection.
18. Enter the health monitoring IP address for the server <the primary NSSVA host name> using a new eth0 IP address of the primary NSSVA.
Note: It is recommended that you create a new eth0 IP address here for health monitoring.
19. Confirm the failover configuration by reviewing the settings and clicking Finish to complete the failover configuration. The wizard will recommend that you make sure the clocks are in sync between the failover servers.
20. Click OK to close the wizard.
Notes:
- Once the configuration of cross-mirror failover is complete via the Failover Setup Wizard, the Power Control option in the FalconStor Management Console must not be changed.
- If you do not use the original eth0 IP address as the client access IP, you must delete the primary NSSVA record from the FalconStor Management Console and re-add the primary NSSVA using the new client access IP address.
You are now ready to set up the power control patch to complete the failover settings. Refer to Power Control for VMware ESX server.
The power control options for the VMware ESX server are used to avoid an unplanned takeover caused by a physical network problem on the ESX server. The NSSVA power control utility does the following:
- Uses the cross-mirror (via iSCSI connection) so that the two NSSVAs do not share the same storage.
- Sets the connection to the primary ESX server so that the secondary NSSVA can send a power-off command to the primary ESX server if necessary; for example, if the primary NSSVA hangs and cannot answer any failover commands. In the default configuration, if the secondary NSSVA cannot send the power-off command to the primary ESX server, it will not take over.
- Sets the IP address of the secondary ESX server so that it can ping the IP addresses of the primary NSSVA to check the network connection. If the force takeover option is enabled, the primary NSSVA checks the network connection periodically; once the network disconnects, it shuts down the primary NSSVA after 30 seconds.
The Takeover option is disabled by default. You will need to enable this option using the NSSVA power control (vapwc-config) utility to force the secondary NSSVA to take over. Enable this option if you want the secondary NSSVA to always take over when there is no communication with the primary ESX server.
This setting is disabled by default. If you choose Yes, this option enables the network monitor function on the primary NSSVA. The primary NSSVA will shut itself down if a physical connection failure is detected. Use this option with caution, as data inconsistency may occur between the primary and the secondary NSSVA in a force takeover situation.
10. Select Primary NSSVA network test to test the network connection of the ESX server. The primary NSSVA network test connects to the primary NSSVA and pings the reference IP addresses on the primary NSSVA, the secondary NSSVA IP address, the secondary NSSVA cross-mirror IP address, the primary NSSVA default gateway IP address, and the secondary ESX server IP address.
11. Select Power control test to test sending the power control command. Power control from the secondary NSSVA to the primary ESX server is verified. Once all communication tests to the primary ESX server are successful, you can click OK to continue the configuration.
Failover setup is now complete.
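The addresses exercised by the primary NSSVA network test can be listed in a dry-run script like the one below. The IP values are placeholders for your environment, and this sketch only prints the checks the utility performs; it does not ping anything:

```shell
# Dry-run sketch of the primary NSSVA network test targets (addresses are placeholders).
check() { echo "would ping $1 ($2)"; }

check 192.168.88.1    "reference IP on primary NSSVA"
check 192.168.88.112  "secondary NSSVA"
check 169.254.254.2   "secondary NSSVA cross-mirror IP"
check 192.168.88.254  "primary NSSVA default gateway"
check 192.168.88.20   "secondary ESX server"
```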
Failover settings, including which IP addresses are being monitored for failover.
In addition, you will see a colored dot next to a server to indicate the following conditions:
Red dot - The server is currently in failover mode and has been taken over by the secondary server.
Green dot - The server has taken over the primary server's resources.
Yellow dot - The user has suspended failover on this server. The current server will NOT take over the primary server's resources even if it detects an abnormal condition on the primary server.
Failover events are also written to the primary server's Event Log, so you can check there for status and operational information, as well as any errors. Be aware that when a failover occurs, the console shows the failover partner's Event Log for the server that failed.
After failover
When a failed server is restarted, it communicates with the acting primary server and must receive the okay from the acting primary server in order to recover its role as the primary server. If there is a communication problem, such as a network error, and no notification is received, the failed server remains in a 'ready' state but does not recover its role as the primary server. After the communication problem has been resolved, the storage server can recover normally. If failover is suspended on the secondary server, or if the failover module is stopped, the primary server will not automatically recover until the ipstorsm.sh recovery command is entered. If both failover servers go offline and then only one is brought back up, type the ipstorsm.sh recovery command to bring the storage server back online.
Manual recovery
Manual recovery is the process by which the secondary server releases the identity of the primary to allow the primary to restore its operation. Manual recovery can be triggered by selecting the Stop Takeover option from the FalconStor Management Console. If the primary server is not ready to recover, and you can still communicate with the server, a detailed failover screen displays. If the primary server is not ready to recover, and you cannot communicate with the server, a warning message displays.
Auto recovery
You can enable auto recovery by changing the Auto Recovery option. With auto recovery, control is returned to the primary server once it has recovered from a failure. Once control has returned to the primary server, the secondary server returns to its normal monitoring mode.
Logging into a target:
# iscsiadm -m node -p <ipaddress>:3261,0 -T <remote-target-name> -l
Example:
# iscsiadm -m node -p 192.168.200.201:3261,0 -T iqn.2000-03.com.falconstor:istor.PMCC2401 -l
7. Once you have verified that both servers can see the remote storage, restart NSS on both servers. Failure to do so will cause problems recovering the server.
8. After NSS has been restarted, verify that both servers are in a ready state using the "sms -v" command.
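As a sketch, the login command from the example above can be composed from its portal and target parts; substitute your own portal address and target IQN:

```shell
# Compose the open-iscsi login command shown above (values from the example in the text).
PORTAL="192.168.200.201:3261,0"
TARGET="iqn.2000-03.com.falconstor:istor.PMCC2401"
LOGIN_CMD="iscsiadm -m node -p $PORTAL -T $TARGET -l"
echo "$LOGIN_CMD"
```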
When replacing local or remote storage, if a mirror needs to be swapped first, a swap request is sent to the server to trigger the swap. Storage can only be replaced when the damaged segments are part of the mirror, either local or remote. New storage must be available for this option.
Note: If you have replaced disks, you should perform a rescan on both servers before using the Verify & Repair option.
To use the Verify & Repair option:
1. Log into both cross mirror servers.
2. Right-click the primary server and select Cross Mirror --> Verify & Repair.
3. Click the button for any issue that needs to be corrected. You will only be able to select a button if that is the scenario where the problem occurred. The other buttons will not be selectable.
Resources
If everything is working correctly, this option will be labeled Resources and will not be selectable. The option will be labeled Incomplete Resources in the following scenarios:
- The mirror resource was offline when auto expansion (i.e., of a Snapshot resource) occurred, but the device is now back online.
- You need to create a mirror for virtual resources that existed on the primary server prior to cross mirror configuration.
1. Right-click on the server and select Cross Mirror --> Verify & Repair.
3. Select the resource to be repaired.
4. When prompted, confirm that you want to repair this resource.
Remote Storage
If everything is working correctly, this option will be labeled Remote Storage and will not be selectable. The option will be labeled Damaged or Missing Remote Storage when a physical disk being used by cross mirroring on the secondary server has been replaced.
Note: You must suspend failover before replacing the storage.
1. Right-click the primary server and select Cross Mirror --> Verify & Repair.
Local Storage
If everything is working correctly, this option will be labeled Local Storage and will not be selectable. The option will be labeled Damaged or Missing Local Storage when a physical disk being used by cross mirroring is damaged on the primary server and has been replaced.
Note: You must suspend failover before replacing the storage.
1. Right-click the primary server and select Cross Mirror --> Verify & Repair.
4. Confirm that this is the device to replace.
Storage and Complete Resources
If everything is working correctly, this option will be labeled Storage and Complete Resources and will not be selectable. The option will be labeled Resources with Missing segments on both Local and Remote Storage when a virtual device spans multiple physical devices and one physical device is offline on both the primary and secondary server. This situation is very rare and this option is informational only.
1. Right-click the server and select Cross Mirror --> Verify & Repair.
2. Click the Resources with Missing segments on both Local and Remote Storage button.
You will see a list of failed devices. Because this option is informational only, no action can be taken here.
Suspend/resume failover
Select Failover --> Suspend Failover to stop monitoring the partner server. In the case of active-passive failover, you can suspend from the secondary server. However, the server that you suspend from will stop monitoring its partner and will not take over for that partner server in the event of failure. It can still fail over itself. Select Failover --> Resume Failover to restart the monitoring.
Notes:
- If the cross mirror link goes down, failover will be suspended. Use the Resume Failover option when the cross mirror link comes back up. The disks will automatically be re-synced at the scheduled interval, or you can synchronize manually using the cross mirror synchronize option.
- If you stop the NSS processes on the primary server after suspending failover, you must do the following once you restart your storage server:
1. At a Linux command prompt, type sms to see the failover status.
2. When the system is in a ready state, type the following: ipstorsm.sh recovery
Once the connection is repaired, the failover status is not cleared until failover is resumed on both servers.
Replication
Overview
Replication is the process by which a SAN Resource maintains a copy of itself either locally or at a remote site. The data is copied, distributed, and then synchronized to ensure consistency between the redundant resources. The SAN Resource being replicated is known as the primary disk. The changed data is transmitted from the primary to the replica disk so that they are synchronized. Under normal operation, clients do not have access to the replica disk. If a disaster occurs and the replica is needed, the administrator can promote the replica to become a SAN Resource so that clients can access it. Replica disks can be configured for NSS storage services, including backup, mirroring, or TimeMark/CDP, which can be useful for viewing the contents of the disk or recovering files. Replication can be set to occur continuously or at set intervals (based on a schedule or watermark). For performance purposes and added protection, data can be compressed or encrypted during replication.
Note: Replication is not available in the NSSVA Lite or Trial version.
Replication configuration
Requirements
The following are the requirements for setting up a replication configuration:
- (Remote replication) You must have two storage servers.
- (Remote replication) You must have write access to both servers.
- You must have enough space on the target server for the replica and for the Snapshot Resource.
- Both clocks should be synchronized so that the timestamps match.
- In order to replicate to a disk with thin provisioning, the size of the SAN resource must be equal to or greater than 10 GB (the minimum permissible size of a thin disk).
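The thin-provisioning minimum in the last requirement reduces to a simple size check. A sketch, with an illustrative helper name and sizes:

```shell
# Sketch: a SAN resource must be at least 10 GB to replicate to a thin-provisioned disk.
thin_replica_ok() {
    [ "$1" -ge 10 ]    # $1 = SAN resource size in GB
}

thin_replica_ok 25 && echo "size ok for a thin replica" || echo "resource too small"
```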
Setup
You can enable replication for a single SAN Resource or you can use the batch feature to enable replication for multiple SAN Resources. You need Snapshot Resources for the primary and replica disks; if you do not have them, you can create them through the wizard.
1. For a single SAN Resource, right-click the resource and select Replication --> Enable. For multiple SAN Resources, right-click the SAN Resources object and select Replication --> Enable. Each primary disk can have only one replica disk. If you do not have a Snapshot Resource, the wizard will take you through the process of creating one.
2. Select the server that will contain the replica.
For local replication, select the Local Server. For remote replication, select any server other than the Local Server. If the server you want does not appear on the list, click the Add button.
Continuous Mode - Select if you want to use FalconStor's continuous replication. After the replication wizard completes, you will be prompted to create a Continuous Replication Resource for the primary disk.
Delta Mode - Select if you want replication to occur at set intervals (based on a schedule or watermark).
Use existing TimeMark - Determine whether you want to use the most current TimeMark on the primary server when replication begins or whether the replication process should create a TimeMark specifically for the replication. Using an existing TimeMark reduces the usage of your Snapshot Resource; however, the data being replicated may not be the most current. For example, your replication is scheduled to start at 11:15 and your most recent TimeMark was created at 11:00. If you have selected Use Existing TimeMark, the replication will occur with the 11:00 data, even though additional changes may have occurred between 11:00 and 11:15. Therefore, if you select Use Existing TimeMark, you must coordinate your TimeMark schedule with your replication schedule.
Even if you select Use Existing TimeMark, a new TimeMark will be created under the following conditions:
- The first time replication occurs.
- Each existing TimeMark will only be used once. If replication occurs multiple times between the creation of TimeMarks, the TimeMark will be used once; a new TimeMark will be created for subsequent replications until the next TimeMark is created.
- The most recent TimeMark has been deleted, but older TimeMarks exist.
- After a manual rescan.
Preserve Replication TimeMark - If you did not select the Use Existing TimeMark option, a temporary TimeMark is created when replication begins. This TimeMark is then deleted after the replication has completed. Select Preserve Replication TimeMark to create a permanent TimeMark that will not be deleted when replication has completed (if the TimeMark option is enabled). This is a convenient way to keep all of the replication TimeMarks without setting up a separate TimeMark schedule.
5. Configure how often, and under what circumstances, replication should occur.
An initial replication for individual resources begins immediately upon setting the replication policy; replication then occurs according to the specified policy. You must select at least one policy, but you can have multiple policies. You must specify a policy even if you are using continuous replication; this way, if the system switches to delta replication, it can automatically switch back to continuous replication after the next regularly scheduled replication takes place.
Any number of continuous replication jobs can run concurrently. However, by default, 20 delta replication jobs can run per server at any given time. If an additional job is ready to run, it will wait until one of the current replication jobs finishes.
Note: Contact Technical Support for information about changing this value, but note that additional replication jobs will increase the load and bandwidth usage of your servers and network and may be limited by individual hardware specifications.
Start replication when the amount of new data reaches - If you enter a watermark value, a snapshot is taken when that value is reached and replication of that data begins. If additional data (more than the watermark value) is written to the disk after the snapshot, that data will not be replicated until the next replication. If a replication that was triggered by a watermark fails, the replication will be restarted based on the retry value you enter, assuming the system detects write activity to the primary disk at that time. Future watermark-triggered replications will not start until after a successful replication occurs. If you are using continuous replication and have set a watermark value, make sure that it is a value that can actually be reached; otherwise, snapshots will rarely be taken. Continuous replication does not take snapshots, but you will need a recent, valid snapshot if you ever need to roll back the replica to an earlier TimeMark during promotion. If you are using SafeCache, replication is triggered when the watermark value of data is moved from the cache resource to the disk.
Start an initial replication on mm/dd/yyyy at hh:mm and then every n hours/minutes thereafter - Indicate when replication should begin and how often it should be repeated. If a replication is already occurring when the next time interval is reached, the new replication request will be ignored.
Note: If you are using the FalconStor Snapshot Agent for Microsoft Exchange 5.5, the time between each replication should be longer than the time it takes to stop and then restart the database.
6. Specify if you want to use the Throughput Control option.
Click Enable Throughput Control to control the synchronization process and maintain optimal resource throughput. This option can be used on unreliable networks. The replication is monitored every four minutes; if it takes longer than four minutes, the system slows replication down to 10KB to avoid replication failure.
This screen allows you to specify the interval at which I/O activity is checked, as well as the resume-synchronization schedule. The default is to check throughput activity every minute and to resume only when the I/O activity is less than or equal to 20 MB per second. The maximum number of checking attempts before resuming synchronization defaults to three (3). You can change the number of attempts or enter zero (0) to make the number of attempts unlimited.
7. Click Next once you have set the throughput policy.
8. Select whether you want to use TCP or RUDP as the protocol for this replication.
Note: All new installations of NSS default to TCP.
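One plausible reading of the resume-synchronization policy can be sketched as below. This is an assumption-laden illustration, not product code; in particular, the behavior when the attempt limit is reached (resuming anyway) is an interpretation of "maximum checking attempts before resuming synchronization", not something the text states outright:

```python
def resume_attempt(io_mb_per_sec_samples, threshold=20, max_attempts=3):
    """Check one I/O-activity sample per interval; return the 1-based attempt
    at which synchronization resumes, or None if it never does.
    max_attempts=0 models the documented 'unlimited attempts' setting."""
    for attempt, sample in enumerate(io_mb_per_sec_samples, start=1):
        if sample <= threshold:
            return attempt           # activity is quiet enough: resume now
        if max_attempts and attempt >= max_attempts:
            return attempt           # attempt limit reached: resume anyway
    return None                      # still waiting for a quiet interval
```

With the defaults, busy samples of 50 and 30 MB/s followed by 15 MB/s would resume on the third check.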
The Compression option provides enhanced throughput during replication by compressing the data stream, reducing the size of the transmission and thereby maximizing network bandwidth. It takes advantage of multi-processor machines by using more than one thread for compression/decompression during replication. By default, two (2) threads are used; the number can be increased to eight (8).
Note: Compression requires 64K of contiguous memory. If the memory in the storage server is very fragmented, the 64K allocation will fail and replication will fail.
The Encryption option provides an additional layer of security during replication by securing data transmission over the network. Initial key distribution is accomplished using the authenticated Diffie-Hellman exchange protocol. Subsequent session keys are derived from the master shared secret, making the exchange very secure.
Enable Microscan - Microscan analyzes each replication block on the fly during replication and transmits only the changed sections of the block. This is beneficial if the network transport speed is slow and the client makes small random updates to the disk. If the global Microscan option is turned on, it overrides the Microscan setting for an individual virtual device. Also, if the virtual devices are in a group configured for replication, the group policy always overrides the individual device's policy.
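The Microscan idea, transmitting only the changed sections of each replication block, can be sketched as follows. This is an illustration of the concept, not the product's implementation; the 512-byte section size is an assumption chosen for the example:

```python
def microscan(old_block, new_block, section_size=512):
    """Split a replication block into fixed-size sections and return only
    the (offset, data) pairs whose contents changed (conceptual sketch)."""
    changed = []
    for off in range(0, len(new_block), section_size):
        old = old_block[off:off + section_size]
        new = new_block[off:off + section_size]
        if new != old:
            changed.append((off, new))   # only this section crosses the wire
    return changed
```

A small random update inside a large block thus costs one section of transmission rather than the whole block, which is exactly why the option helps on slow links.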
Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express automatically creates the replica for you from available hard disk segments. You only have to select the storage pool or physical device that should be used to create the replica resource.
Select Existing lets you select an existing resource. There are several restrictions on what you can select:
- The target must be the same type as the primary.
- The target must be the same size as the primary.
- The target can have clients assigned to it, but they cannot be connected during the replication configuration.
Note: All data on the target will be overwritten.
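The Select Existing restrictions above amount to a simple validation. The following is a hypothetical sketch with illustrative field names, not an actual product API:

```python
def can_select_as_replica(primary, candidate):
    """Apply the documented restrictions on choosing an existing resource
    as a replica; returns (ok, reason). Field names are illustrative."""
    if candidate["type"] != primary["type"]:
        return False, "target must be the same type as the primary"
    if candidate["size"] != primary["size"]:
        return False, "target must be the same size as the primary"
    if candidate.get("clients_connected"):
        return False, "assigned clients must not be connected during configuration"
    return True, "ok (existing data on the target will be overwritten)"
```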
Select the storage pool or device to use to create the replica resource.
Only one disk can be selected at a time from this dialog. To create a replica disk from multiple physical disks, you will need to add the disks one at a time. After selecting the first disk, you will have the option to add more disks. You will need to do this if the first disk does not have enough space for the replica.
Click Add More if you need to add another physical disk to this replica disk. You will go back to the physical device selection screen where you can select another disk.
The name is not case sensitive.
12. Confirm that all information is correct and then click Finish to create the replication configuration.
Notes:
- Once you create your replication configuration, you should not change the hostname of the source (primary) server. If you do, you will need to recreate your replication configuration.
- After the configuration is complete, the primary server will be added as a client on the replica server. We do not recommend assigning any resources to this client, since its purpose is to be used for replication only.
When will replication begin?
If you have configured replication for an individual resource, the system will begin synchronizing the disks immediately after the configuration is complete, provided the disk is attached to a client and is receiving I/O activity. If you have configured replication for a group, synchronization will not start until one of the replication policies (time or watermark) is triggered.
If you configured continuous replication
If you are using continuous replication, you will be prompted to create a Continuous Replication Resource for the primary disk and a Snapshot Resource for the replica disk. If you are not using continuous replication, the wizard will only ask you to create a Snapshot Resource on the replica. Because old data blocks are moved to the Snapshot Resource as new data is written to the replica, the Snapshot Resource should be large enough to handle the amount of changed data that will be replicated. Since it is not always possible to know how much changed data will be replicated, it is a good idea to enable expansion on the target server's Snapshot Resource.
You then need to decide what to do if your Snapshot Resource runs out of space (reaches the maximum allowable size or does not have expansion enabled). The default is to stop writing data, meaning the system will prevent any new writes from reaching the disk once the Snapshot Resource runs out of space and cannot allocate any more.
Protect your replica resource
For added protection, you can mirror or TimeMark an incoming replica resource by highlighting the replica resource and right-clicking on it.
Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically creates the resource using an available device.
Note: The Continuous Replication Resource cannot be expanded, so you should allocate enough space for it up front. By default, the size will be 256 MB or 5% of the size of your primary disk (or 5% of the total size of all members of the group), whichever is larger. If the primary disk regularly experiences a large number of writes, or if the connection to the target server is slow, you may want to increase the size, because if the Continuous Replication Resource becomes full, the system switches to delta replication mode until the next regularly scheduled replication takes place. If you outgrow your resource, you will need to disable continuous replication and then re-enable it.
3. Verify the physical devices you have selected, confirm that all information is correct, and then click Finish.
On the Replication tab, you will notice that the Replication Mode is set to Delta. Replication must be initiated once before it switches to continuous mode. You can either wait for the first scheduled replication to occur or right-click on your SAN Resource and select Replication --> Synchronize to force replication to occur.
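The default sizing rule in the note above is simple arithmetic; as a minimal sketch of the documented formula (not part of the product):

```python
def crr_default_size_mb(primary_size_mb):
    """Default Continuous Replication Resource size: 256 MB or 5% of the
    primary disk (or group total), whichever is larger."""
    return max(256, primary_size_mb * 5 // 100)
```

So a 10 GB (10240 MB) primary gets a 512 MB resource, while anything smaller than about 5 GB falls back to the 256 MB floor.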
Replication tab
The following are examples of what you will see by checking the Replication tab for a primary disk:
With Continuous Replication enabled
All times shown on the Replication tab are based on the primary server's clock.
Accumulated Delta Data is the amount of changed data. Note that this value will not be accurate after a replication has failed; it will only be accurate after a successful replication.
Replication Status / Last Successful Sync / Average Throughput - You will only see these fields if you are connected to the target server.
Transmitted Data Size is based on the actual size transmitted after compression or Microscan. Delta Sent represents the amount of data sent (or processed) based on the uncompressed size. If compression and Microscan are not enabled, Transmitted Data Size will be the same as Delta Sent, and Current/Average Transmitted Data Throughput will be the same as Instantaneous/Average Throughput. If compression or Microscan is enabled, and the data can be compressed or unchanged blocks are skipped, Transmitted Data Size will differ from Delta Sent, and Current/Average Transmitted Data Throughput will be based on the actual size of the data (compressed or Microscanned) sent over the network.
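The relationship between these fields is straightforward arithmetic. A hypothetical sketch (the function and field names are illustrative, not Replication-tab internals):

```python
def replication_stats(delta_sent_mb, transmitted_mb, seconds):
    """Relate the tab's fields: Delta Sent is the uncompressed amount
    processed; Transmitted Data Size is what crossed the wire after
    compression/Microscan (illustrative math only)."""
    return {
        "average_throughput_mb_s": delta_sent_mb / seconds,
        "average_transmitted_throughput_mb_s": transmitted_mb / seconds,
        "reduction_pct": 100.0 * (1 - transmitted_mb / delta_sent_mb),
    }
```

For example, 1000 MB of delta reduced to 250 MB on the wire over 100 seconds gives a 10 MB/s processing throughput but only 2.5 MB/s of actual network transmission, a 75% reduction.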
Event Log
Replication events are also written to the primary server's Event Log, so you can check there for status and operational information, as well as any errors.
Replication object
The Incoming and Outgoing objects under the Replication object display information about each server that replicates to this server or receives replicated data from this server. If the server's icon is white, the partner server is "connected" or "logged in". If the icon is yellow, the partner server is "not connected" or "not logged in".
Replication performance
Set global replication options
You can set global replication options that affect system performance during replication. While the default settings should be optimal for most configurations, you can adjust the settings for special situations. To set global replication properties for a server:
1. Right-click on the server and select Properties.
2. Select the Performance tab.
Default Protocol - Select the default protocol to use for replication jobs.
Timeout replication after [n] seconds - Timeout after inactivity. This must be the same on both the primary and target replication servers. Note: This parameter can be affected by the TCP timeout setting.
Throttle - The maximum amount of bandwidth that will be used for replication. Changing the throttle allows you to limit the amount of bandwidth replication will use. This is useful when the WAN is shared among many applications and you do not want replication traffic to dominate the link. This parameter affects all resources using either remote or local replication. Throttle does not affect manual replication scans, only actual replication. It also does not affect continuous replication, which uses all available bandwidth. Leaving the Throttle field set to 0 (zero) means that the maximum available bandwidth will be used. Besides 0, valid input is 10-1,000,000 KB/s (1G).
Enable Microscan - Microscan analyzes each replication block on the fly during replication and transmits only the changed sections of the block. This is beneficial if the network transport speed is slow and the client makes small random updates to the disk. This global Microscan option overrides the Microscan setting for each individual virtual device.
Tune replication parameters
You can run a test to discover the maximum bandwidth and latency for remote replication within your network.
1. Right-click on a server under Replication --> Outgoing and select Replication Parameters.
2. Click the Test button to see information about the bandwidth and latency of your network.
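The Throttle input rule described above (0 for unlimited, otherwise 10 to 1,000,000 KB/s) can be expressed as a small validation. This is an illustrative sketch, not a product API:

```python
def validate_throttle(kb_per_sec):
    """Check a Throttle value against the documented input rule:
    0 means use maximum available bandwidth; otherwise 10-1,000,000 KB/s."""
    if kb_per_sec == 0:
        return "unlimited"
    if 10 <= kb_per_sec <= 1_000_000:
        return f"{kb_per_sec} KB/s"
    raise ValueError("throttle must be 0 or between 10 and 1,000,000 KB/s")
```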
Switch clients to the replica disk when the primary disk fails
Because the replica disk is used for disaster recovery purposes, clients do not have access to it. If a disaster occurs and the replica is needed, the administrator can promote the replica to become the primary disk so that clients can access it. The Promote option promotes the replica disk to a usable resource; doing so breaks the replication configuration. Once a replica disk is promoted, it cannot revert back to a replica disk.
You must have a valid replica disk in order to promote it. For example, if a problem occurred (such as a transmission problem or the replica disk failing) during the first and only replication, the replicated data would be compromised and therefore could not be promoted to a primary disk. If a problem occurred during a subsequent replication, the data from the Snapshot Resource will be used to recreate the replica from its last good state.
Notes:
- You cannot promote a replica disk while a replication is in progress.
- If you are using continuous replication, you should not promote a replica disk while write activity is occurring on the replica.
- If you just need to recover a few files from the replica, you can use the TimeMark/TimeView option instead of promoting the replica. Refer to Use TimeMark/TimeView to recover files from your replica for more information.
To promote a replica:
1. In the Console, right-click on an incoming replica resource under the Replication object and select Replication --> Promote. If the primary server is not available, you will be prompted to roll back the replica to the last good TimeMark, assuming you have TimeMark enabled on the replica. When this occurs, the wizard will not continue with the promotion; you will have to check the Event Log to make sure the rollback completes successfully. Once you have confirmed that it has completed successfully, re-select Replication --> Promote to continue.
2. Confirm the promotion and click OK.
3. Assign the appropriate clients to this resource.
4. Rescan devices or restart the client to see the promoted resource.
To change the configuration:
1. Right-click on the primary disk and select Replication --> Properties.
2. Make the appropriate changes and click OK.
Notes:
- If you are using continuous replication and you enable or disable encryption, the change will take effect after the next delta replication.
- If you are using continuous replication and you change the IP address of your target server, replication will switch to delta replication mode until the next regularly scheduled replication takes place.
replication transfer Mode and TimeMark tab under the Replication Setup Options.
2. Right-click on the primary or replica server and select Replication --> Forceful Reversal.
3. Type YES to confirm the operation and then click OK.
4. Once the forceful role reversal is done, repair the promoted replica to establish the new connection between the new primary and replica servers. The replication repair operation must be performed from the NEW primary server.
Note: If the SAN Resource is assigned to a client on the original primary server, it must be unassigned in order to perform the repair on the new primary.
5. Confirm the IP address and click OK.
The current primary disk remains the primary disk and begins replicating to the recovered server. After the repair operation is complete, replication will synchronize again, either by schedule or manual trigger. A full synchronization is performed if the replication was not synchronized prior to the forceful role reversal, and the replication policy from the original primary server will be used/updated on the new primary server. If you want to recreate your original replication configuration, you will need to perform another reversal so that your original primary becomes the primary disk again.
Notes:
- The forceful role reversal operation can be performed even if the CDP journal has unflushed data.
- The forceful role reversal operation can be performed even if data is not synchronized between the primary and replica servers.
- The snapshot policy, TimeMark/CDP, and throttle control policy settings are not swapped after the repair operation for replication role reversal.
Relocate a replica
The Relocate feature allows replica storage to be moved from the original replica server to another server while preserving the replication relationship with the primary server. Relocating reassigns ownership to the new server and continues replication according to the set policy. Once the replica storage is relocated to the new server, the replication schedule can be immediately resumed without the need to rescan the disks.
Before you can relocate the replica, you must import the disk to the new NSS appliance. Refer to the NSS Reference Guide for additional information. Once the disk has been imported, open the source server, highlight the virtual resource that is being replicated, right-click, and select Relocate.
Notes:
- You cannot relocate a replica that is part of a group.
- If you are using continuous replication, you must disable it before relocating the replica. Failure to do so will keep replication in delta mode, even after the next manual or scheduled replication occurs. You can re-enable continuous replication after relocating the replica.
Depending upon the replication schedule, when you promote the mirror of a replica resource, the mirrored copy may not be an identical image of the replication source. In addition, the mirrored copy may contain corrupt data or an incomplete image if the last replication was not successful or if replication is currently occurring. Therefore, it is best to make sure that the last replication was successful and that replication is not occurring when you promote the mirrored copy.
Troubleshooting
NSS Virtual Appliance settings
FalconStor NSS Virtual Appliance settings can be verified as described below:
5. Click Memory in the settings list and enter 1024 MB in the Reservation setting in the Resource Allocation pane.
vSwitch1 for virtual machines vSwitch2 for VMotion vSwitch3 for iSCSI
The general recommendation from VMware is to separate vSwitches so that iSCSI traffic and general network traffic are on different vSwitches. Refer to the VMware Knowledge Base article at: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1001251
Optimizing performance when using a virtual disk on a NSSVA for iSCSI devices
You can allocate an EagerZeroedThick disk at the creation of the virtual disk to optimize performance.
Notes:
- A thick eager-zeroed disk has all of its space allocated and zeroed out at the time of creation. This is a time-consuming process.
- Do not set eagerzeroedthick on both the NSSVA's system/data VMDKs and the guest VMs' VMDKs.
- Do not enable Fault Tolerance on either guest; this would create a ghost system on the ESX HA pair that is written to simultaneously over the LAN and would impact performance.
To enable write cache without a battery, you need to modify the BIOS. Go to the virtual disk settings and select Advanced settings > Write policy (choose Write Back) > select the checkbox "Force WB with no battery". Consult with Dell regarding the risks of this configuration and to confirm whether your PERC card has a battery.
Cross-mirror failover
Symptom: During cross-mirror configuration, the system reports a mismatch of physical disks on the two appliances even though you are sure that the configuration of the two appliances is exactly the same, including the ACSL, disk size, CPU, and memory.
Cause/Resolution: An iSCSI initiator must be installed on the storage server; it is included on FalconStor cross-mirror appliances. If you are not using a FalconStor cross-mirror appliance, you must install the iSCSI initiator RPM from the Linux CD before running the IPStorinstall installation script. The script will update the initiator.
Appendix A - Checklist
A. VMware ESX Server system configuration
VMware ESX Server system configuration check list
- The primary ESX server's first virtual switch name: Connect to the primary ESX server via the VMware vSphere client. On the console, go to Configuration -> Networking -> first Virtual Switch. Example: vSwitch0
- The primary ESX server's second virtual switch name: Connect to the primary ESX server via the VMware vSphere client. On the console, go to Configuration -> Networking -> second Virtual Switch. Example: vSwitch1
- The Service Console IP address on vSwitch0: Connect to the primary ESX server via the VMware vSphere client. On the console, go to Configuration -> Networking, select Properties on vSwitch0, then select Service Console and verify the IP displayed in the right panel.
- The Service Console IP address on vSwitch1: Connect to the primary ESX server via the VMware vSphere client. On the console, go to Configuration -> Networking, select Properties on vSwitch1, then select Service Console and verify the IP displayed in the right panel.
- The VMkernel IP address: If no VMkernel connection type exists, add one first by selecting the vSwitch, clicking Properties, and clicking Add to add a VMkernel IP. Then, via the VMware vSphere client, go to Configuration -> Networking, click Properties on vSwitch0, and select VMkernel; the IP is shown in the right panel.
VMware ESX Server system configuration check list
- The secondary ESX server's first virtual switch name: Connect to the secondary ESX server via the VMware vSphere client. From the ESX console, go to Configuration -> Networking -> the first Virtual Switch. Example: vSwitch0
- The secondary ESX server's second virtual switch name: Connect to the secondary ESX server via the VMware vSphere client. From the ESX console, go to Configuration -> Networking -> the second Virtual Switch. Example: vSwitch1
- The Service Console IP address on vSwitch0: Connect to the secondary ESX server via the VMware vSphere client. From the ESX console, go to Configuration -> Networking, click Properties on vSwitch0, and select Service Console; the IP is shown in the right panel.
- The Service Console IP address on vSwitch1: From the ESX console, go to Configuration -> Networking, click Properties on vSwitch1, and select Service Console; the IP is shown in the right panel.
- The VMkernel IP address: If no VMkernel connection type exists, add one first by selecting the vSwitch, clicking Properties, and clicking Add to add a VMkernel IP. Then, via the VMware vSphere client, go to Configuration -> Networking, click Properties on vSwitch0, and select VMkernel; the IP is shown in the right panel.
The primary NSSVA root password: The primary NSSVA eth0 IP address:
The secondary NSSVA root password: The secondary NSSVA eth0 IP address:
NSS Virtual Appliance system information check list
- The secondary NSSVA eth0 virtual machine network: Right-click on the VA machine -> Edit Settings -> Hardware -> select Network adapter 1 -> set the network connection (Network label).
- The secondary NSSVA eth1 virtual machine network: Right-click on the VA machine -> Edit Settings -> Hardware -> select Network adapter 2 -> set the network connection (Network label).
- The user account and password must be the same on both NSSVAs: The user account here means the "root" account. If the credentials are not identical, failover cannot be set up successfully.
- No SAN clients created on the secondary NSSVA: Check from the FalconStor Console -> connect to the secondary NSSVA -> SAN Clients.
- No SAN resources created on the primary NSSVA: Check from the FalconStor Console -> connect to the primary NSSVA -> Logical Resources -> SAN Resources.
C. Network Configuration
Network Configuration check list
- Make sure the primary ESX server has two physical network adapters installed and linked to independent virtual switches: Connect to the primary ESX server via the VMware vSphere client. From the ESX console, go to Configuration -> Networking and find the VM network of vSwitch0 and vSwitch1. Two physical adapters should be connected separately. Example: vmnic0 & vmnic1
- Make sure the secondary ESX server has two physical network adapters installed and linked to independent virtual switches: Connect to the secondary ESX server via the VMware vSphere client. From the ESX console, go to Configuration -> Networking and find the VM network of vSwitch0 and vSwitch1. Two physical adapters should be connected separately. Example: vmnic0 & vmnic1
- Make sure the crossover cable connects the 2nd physical network adapter of the primary and secondary ESX servers: Connect to the secondary ESX server via the VMware vSphere client. Go to Configuration -> Networking -> Virtual Switch: vSwitch1. The 2nd physical network adapter (vmnic1) should be connected to vSwitch1.
- Make sure the IP addresses of the following items are set in the same IP subnet and can ping each other: 1. the IP address of the service console on the 1st virtual switch in the primary and secondary ESX servers; 2. the IP address of the VMkernel on the 1st virtual switch in the primary and secondary ESX servers; 3. the IP address of eth0 in the primary and secondary NSSVAs; 4. the client access IP address in the Cross-Mirror setting. To check: for 1 and 2, use SSH to log in to the primary ESX server and ping the service console and VMkernel IPs of the secondary ESX server on the 1st virtual switch; for 3, use SSH to log in to the primary NSSVA and ping the eth0 IP of the secondary NSSVA.
Network Configuration check list (continued)
- Make sure the IP addresses of the following items are set in the same IP subnet and can ping each other: 1. the IP address of the service console on the 2nd virtual switch in the primary and secondary ESX servers; 2. the IP address of eth1 in the primary and secondary NSSVAs; 3. the IP addresses listed for the Cross-Mirror setting: the heartbeat IP of the primary NSSVA (eth0, a new IP address), the Cross-Mirror IP of the primary NSSVA (eth1), the Cross-Mirror IP of the secondary NSSVA (eth1), and the client access IP for cross-mirror access (the original primary eth0 IP). To check: for 1, use SSH to log in to the primary ESX server and ping the service console and VMkernel IP of the secondary ESX server on the 2nd virtual switch; for 2 and 3, use SSH to log in to the primary NSSVA and ping the eth0 IP of the secondary NSSVA.
Notes: The heartbeat IP is used to monitor primary health and is created during failover setup; it should be in the same subnet as eth0. The Cross-Mirror IPs of the primary/secondary NSSVA should be the NSSVA eth1 IP addresses recorded in B. NSS Virtual Appliance system information on page 86.
D. Storage Configuration
Storage Configuration check list
- The category of all devices in the secondary NSSVA is set to "unassigned": Check the storage configuration via the FalconStor Management Console. Go to Physical Resources -> Physical Devices -> SCSI Devices.
- The devices in the secondary NSSVA can be mapped one-to-one to the devices in the primary NSSVA; they have the same size and SCSI ID: Check the device information via the FalconStor Management Console. Go to Physical Resources -> select SCSI Devices in the right panel.
- 10 GB of free space is available on the primary NSSVA to create the configuration repository during the Cross-Mirror configuration: Using the FalconStor Management Console, connect to the NSSVA. Navigate to Physical Resources -> Physical Devices -> SCSI Devices. You should see at least 10 GB of free space.
Index
C
Compression Replication 63 Configuration 19 Console Register keycodes 21 console 16 Continuous replication 66 Enable 58 Resource 67, 68 Cross mirror Check resources & swap 48 Recover from disk failure 47 Requirements 6 Re-synchronize 48 Verify & repair 48 Replication note 78 Requirements Cross mirror 6 Server changes 54 Subnet change 54 Suspend/resume 55 failover configuration 43 FalconStor Management Console 2 FalconStor Virtual Appliance Setup utility 16 Force takeover 43
G
Global options 71
H
Hardware tab 81 Health monitoring 41 high availability (HA) 1
D
Datastore 13 Delta Mode 58 Delta Replication Status Report 69 Disaster recovery Replication 56
I
Installation Snapshot Agent 15
E
Encryption Replication 63
K
Keycodes Register 21 Knowledge requirements 11
F
Failover Auto Recovery 46 Auto recovery 46 Cross mirror Check resources & swap 48 Recover from disk failure 47 Requirements 6 Re-synchronize 48 Verify & repair 48 Fix failed server after failover 46 Force a takeover 54 Manually initiate a recovery 55 Physical device change 54 Recovery 46 Remove configuration 55
L
Local Replication 56
M
Microscan 63, 71 Mirroring Replication note 78
N
Network Mapping 13 NSS Virtual Appliance 2
P
Performance Replication 71
Power Control Test 44 Power control utility 42 Primary ESX server connection 43 Primary ESX server root password 43 Primary NSSVA network test 44
R
Relocate a replica 77 Remote Replication 56 Replica resource Protect 67 Replication 56, 71 Assign clients to replica disk 72 Change configuration options 74 Compression 63 Configuration 56 Continuous replication resource 67 Delta mode 58 Encryption 63 Expand primary disk 78 Failover note 78 First replication 66 Force 75 Microscan 63, 71 Mirroring note 78 Performance 71 Parameters 71 Policies 60 Primary disk 56 Promote 72 Recover files 74 Recreate original configuration 73 Relocate replica 77 Remove configuration 78 Replica disk 56 Requirements 56 Resume schedule 75 Reversal 73, 76 Scan 73 Setup 57 Start manually 75 Status 69 Stop in progress 75 Suspend schedule 75 Switch to replica disk 72 Synchronize 68, 75 Test 71 Throttle 71 TimeMark note 78
S
SafeCache 61 SAN Disk Manager 2 Secondary ESX server connection 43 Security 84 Snapshot Agents 2
T
Thin Provisioning 1, 56 Thin Replication 1 Throttle 71 Throughput Control enable 61 set policy 62 TimeMark Replication note 78
V
vapwc-config 43 virtual iSCSI SAN 1 VMware Infrastructure Client 16
W
watermark value 60