
What is dynamic sparing?

Dynamic sparing increases data availability by copying the data on a failing volume to a spare volume until the original device is replaced.

What is permanent sparing?

Permanent sparing automatically replaces a faulty drive from a list of available spares residing in the Symmetrix system, without CE involvement on site.

What is DMX?

Direct Matrix Architecture (DMX).


What is the Rule of 17?

The front-end "rule of 17" is a recommendation for host cabling to eliminate single points of failure. Odd-numbered and even-numbered adapters are connected to different internal buses. For example, a UNIX host is connected to the odd-numbered FA 01 and the even-numbered FA 16 (1 + 16 = 17).

What is a Composite Group?

A Composite Group is a user-defined group of SRDF devices that act in unison to maintain the integrity of a database distributed across multiple Symmetrix units or multiple RDF groups within a single Symmetrix.

What is tripping the Composite Group?

If a source R1 device in the Composite Group cannot propagate data to the target R2 device, data propagation from all R1 devices in the Composite Group is halted. This suspension is called tripping the Composite Group.

What is zoning?

Logical segmentation of a fabric environment.

Zoning is used to partition a Fibre Channel switched fabric into subsets of logical devices. Each zone contains a set of members that are permitted to access each other. Members can be HBAs or storage ports. When zoning is enabled, members in the same zone can communicate with each other. Any two nodes that are not members of the same zone cannot communicate with one another. Members may belong to one zone or to multiple zones
Whenever an initiator (HBA) discovers storage in the environment, it performs a SCSI scan. If too many initiators scan at the same time, their scans can interrupt one another and cause timeouts; zoning confines each initiator's scan to its own zone and avoids this contention.

What is hard zoning?

Hard zoning uses the physical fabric port number of a switch to create zones and enforce the policy.

What is soft zoning?

Soft zoning uses the name server to enforce zoning. The World Wide Name (WWN) of the elements enforces the configuration policy.

What is mixed zoning?

Mixed Port and WWN Zoning: A zone may contain both WWNs and switch ports. However, this method still restricts the ability to move nodes identified by switch port, by requiring that port numbers be redefined in the zone set.
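Whichever way members are identified (switch port or WWN), the access rule zoning enforces is the same: two members can communicate only if they share at least one zone. A minimal sketch — the zone names and WWNs below are invented for illustration:

```python
def can_communicate(zones, member_a, member_b):
    """Two fabric members may talk only if at least one zone in the
    active zone set contains both of them."""
    return any(member_a in members and member_b in members
               for members in zones.values())

# Hypothetical zone set: one host HBA and one storage port per zone.
zones = {
    "z_hostA_spA": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:cc:dd:ee:01"},
    "z_hostB_spA": {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:cc:dd:ee:01"},
}
```

Both hosts can reach the shared storage port, but the two host HBAs cannot see each other, which is exactly the SCSI-scan isolation described above.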
What is the difference between hard and soft zoning?

Soft zoning uses the name server to enforce zoning; the World Wide Name (WWN) of the elements enforces the configuration policy.

Pros:
- Administrators can move devices to different switch ports without manually reconfiguring zoning. This gives the administrator major flexibility: once a zone set is created for a device, its switch port can be changed freely.

Cons:
- Devices might spoof a WWN and access otherwise restricted resources.
- WWN changes, such as the installation of a new Host Bus Adapter (HBA) card, require policy modifications.
- Because the switch does not control data transfers, it cannot prevent incompatible HBA devices from bypassing the name server and talking directly to hosts.

Hard zoning uses the physical fabric port number of a switch to create zones and enforce the policy.

Pros:
- A list of port numbers is easier to create and manage than a long list of element WWNs.
- Switch hardware enforces data transfers and ensures that no traffic passes between unauthorized zone members.
- Hard zoning provides stronger enforcement of the policy (assuming physical security on the switch is well established).

Cons:
- Moving devices to different switch ports requires policy modifications.

What is a WWN?

A World Wide Name (WWN) is a 64-bit address used to uniquely identify each element in a Fibre Channel network.

What is persistent binding?

Persistent binding is typically used on open-systems hosts.

Persistent binding binds the WWN of an SP port to a target number (t#) so that every time the system boots, the same SP port on the same array gets the same t#. In the Solaris device path /dev/rdsk/c#t#d#: c = HBA instance, t = SP (front-end) port, d = LUN. Target addresses are otherwise dynamically assigned.

Persistent binding of target IDs solves a problem that results from the dynamic nature of Fibre Channel-based SCSI. In the addressing scheme used by Solaris (and many other open-systems hosts), the c# refers to the HBA instance, the t# refers to the target instance (for a disk array, the front-end port), and the d# is the SCSI address assigned to the LUN. The HBA number and the SCSI address are static, but the t# is by default assigned in the order in which targets are identified during the configuration process of a system boot, and that discovery order can differ between reboots.

What is PSM?

Persistent Storage Manager is a hidden LUN that records configuration information. Both SPs access a single PSM, so environmental records stay in sync. The PSM is stored on the vault drives (drives 0-4 of the array).

What is the difference between zoning and LUN masking?

Zoning is done at the switch level; LUN masking is done on the array.

What is LUN masking?

LUN (Logical Unit Number) masking is an authorization process that makes a LUN available to some hosts and unavailable to other hosts. LUN masking is implemented primarily at the HBA (Host Bus Adapter) level.
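Returning to persistent binding: the c#t#d# addressing described above can be illustrated with a small parser (the path format follows the Solaris convention in the text; the helper name is ours):

```python
import re

def parse_ctd(path):
    """Split a Solaris /dev/rdsk/c#t#d# path into its parts:
    c = HBA instance, t = target (array front-end port), d = LUN."""
    m = re.fullmatch(r"/dev/rdsk/c(\d+)t(\d+)d(\d+)(?:s\d+)?", path)
    if m is None:
        raise ValueError(f"not a c#t#d# path: {path}")
    return {"hba": int(m.group(1)), "target": int(m.group(2)),
            "lun": int(m.group(3))}
```

Without persistent binding, only the "target" field is at risk of changing between reboots; the HBA instance and LUN number are static.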

EMC CLARiiON storage system types:

AX150 - Dual storage processor enclosure with Fibre Channel interface to host and SATA-2 disks.
AX150i - Dual storage processor enclosure with iSCSI interface to host and SATA-2 disks.
AX100 - Dual storage processor enclosure with Fibre Channel interface to host and SATA-1 disks.
AX100SC - Single storage processor enclosure with Fibre Channel interface to host and SATA-1 disks.
AX100i - Dual storage processor enclosure with iSCSI interface to host and SATA-1 disks.
AX100SCi - Single storage processor enclosure with iSCSI interface to host and SATA-1 disks.
CX3-80

SPE2 - Dual storage processor (SP) enclosure with four Fibre Channel front-end ports and four back-end ports per SP.
CX3-40 SP3 - Dual storage processor (SP) enclosure with two Fibre Channel front-end ports and two back-end ports per SP.
CX3-40f SP3 - Dual storage processor (SP) enclosure with four Fibre Channel front-end ports and four back-end ports per SP.
CX3-40c SP3 - Dual storage processor (SP) enclosure with four iSCSI front-end ports, two Fibre Channel front-end ports, and two back-end ports per SP.
CX3-20 SP3 - Dual storage processor (SP) enclosure with two Fibre Channel front-end ports and a single back-end port per SP.
CX3-20f SP3 - Dual storage processor (SP) enclosure with six Fibre Channel front-end ports and a single back-end port per SP.
CX3-20c SP3 - Dual storage processor (SP) enclosure with four iSCSI front-end ports, two Fibre Channel front-end ports, and a single back-end port per SP.
CX600, CX700 - SPE-based storage system with model CX600/CX700 SP, Fibre Channel interface to host, and Fibre Channel disks.
CX500, CX400, CX300, CX200 - DPE2-based storage system with model CX500/CX400/CX300/CX200 SP, Fibre Channel interface to host, and Fibre Channel disks.
CX200LC - DPE2-based storage system with one model CX200 SP, one power supply (no SPS), Fibre Channel interface to host, and Fibre Channel disks.
C1000 Series - 10-slot storage system with SCSI interface to host and SCSI disks.
C1900 Series - Rugged 10-slot storage system with SCSI interface to host and SCSI disks.
C2x00 Series - 20-slot storage system with SCSI interface to host and SCSI disks.
C3x00 Series - 30-slot storage system with SCSI or Fibre Channel interface to host and SCSI disks.
FC50xx Series - DAE with Fibre Channel interface to host and Fibre Channel disks.
FC5000 Series - JBOD with Fibre Channel interface to host and Fibre Channel disks.
FC5200/5300 Series - iDAE-based storage system with model 5200 SP, Fibre Channel interface to host, and Fibre Channel disks.
FC5400/5500 Series - DPE-based storage system with model 5400 SP, Fibre Channel interface to host, and Fibre Channel disks.
FC5600/5700 Series - DPE-based storage system with model 5600 SP, Fibre Channel interface to host, and Fibre Channel disks.
FC4300/4500 Series - DPE-based storage system with either model 4300 SP or model 4500 SP, Fibre Channel interface to host, and Fibre Channel disks.
FC4700 Series - DPE-based storage system with model 4700 SP, Fibre Channel interface to host, and Fibre Channel disks.
IP4700 Series - Rackmount network-attached storage system with 4 Fibre Channel host ports and Fibre Channel disks.

What is a metaLUN?

A metaLUN is a type of LUN whose maximum capacity can be the combined capacities of all the LUNs that compose it. The metaLUN feature lets you dynamically expand the capacity of a single LUN (the base LUN) into a larger unit called a metaLUN by adding LUNs to the base LUN; you can also add LUNs to a metaLUN to further increase its capacity. Like a LUN, a metaLUN can belong to a Storage Group and can participate in SnapView, MirrorView, and SAN Copy sessions.

Difference between Clone and BCV:

BCV:
1. The data on a BCV device is accessible to the secondary host only after the mirror session is established, split, and mounted on the secondary host.
2. Performance is impacted during the full copy, i.e. while synchronization is happening between the BCV and the standard device (source volume). After synchronization, reads from the BCV device cause no performance impact or back-end load.
3. A BCV device is a mirror copy of the source device and will take the position of a mirror if required.
4. Steps involved in creating a BCV: ASSOCIATION, ESTABLISH, SPLIT, and RESTORE.

Clone:
1. The data on a clone device is accessible to the host right after creating a session and activating it. Copy-on-access technology copies a track from the source device whenever that piece of data is accessed on the secondary host.
2. Performance is not impacted right when the clone volume is mounted on the secondary host, but whenever data is accessed from the clone, the clone has to fetch the track from the source device, which impacts Symmetrix performance.
3. A clone is not a full copy of the source device until the full background copy has been made. A clone does not take the position of a mirror.
4. Steps involved in creating a clone: CREATE, ACTIVATE, and TERMINATE.

What is a Gatekeeper?

A gatekeeper device is not a mandatory requirement. To perform software operations on the Symmetrix we use low-level SCSI commands, and to send SCSI commands to the Symmetrix we need to open a device; the device we open is referred to as a gatekeeper. Symmetrix gatekeepers provide communication paths into the Symmetrix for external software monitoring and/or controlling the Symmetrix. There is nothing special about a gatekeeper: any host-visible device can be used for this purpose. No application data is written to or read from the device; it is simply a SCSI target for the special low-level SCSI commands. Typically, 6-cylinder mirrored devices are used for all products up to DMX-2, and 3-cylinder mirrored devices from DMX-3 onwards. Because we do perform I/O to these devices, which can interfere with applications using the same device, small devices (on which no data is actually stored) are always configured and mapped to hosts to be used as gatekeepers. The WideSky software considers any device with 10 cylinders or less to be a preferred gatekeeper device.
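The 10-cylinder preference mentioned above can be sketched as a simple filter; the device records below are invented for illustration:

```python
def preferred_gatekeepers(devices, max_cyl=10):
    """Pick host-visible devices small enough (<= 10 cylinders) to be
    preferred gatekeeper candidates, per the WideSky rule above."""
    return [d["dev"] for d in devices
            if d["visible"] and d["cylinders"] <= max_cyl]

devices = [
    {"dev": "00A0", "cylinders": 6, "visible": True},     # classic 6-cyl GK
    {"dev": "00A1", "cylinders": 4600, "visible": True},  # data device
    {"dev": "00A2", "cylinders": 3, "visible": False},    # small but unmapped
]
```

Only 00A0 qualifies: 00A1 is too large, and 00A2 — though small enough — is not visible to the host.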

What is a read hit? What is a read miss?

A read hit occurs when a host read can be satisfied directly from cache; a read miss occurs when the requested data is not in cache and must first be fetched from disk.

What is a FLARE LUN (FLU)?

A logical partition of a RAID group. FLUs are the basic logical units managed by FLARE and serve as the building blocks for metaLUN components.

What is a metaLUN?

A storage volume consisting of two or more FLUs whose capacity grows dynamically by adding FLUs to it.

What are the conditions for a striped LUN expansion?

The rules for conducting striped LUN expansion are:
- All FLUs in the striped component must be of the same RAID type.
- All FLUs in the striped component must be of the same user capacity.
- All FLUs in a metaLUN must reside on the same disk type: either all Fibre Channel or all ATA.

What is LUN migration?

LUN migration allows an administrator to move the contents of a source LUN to a destination LUN within the same array:
- The destination LUN may be of the same or larger capacity.
- The migration is non-disruptive to the host application.
- Once the migration is complete, the destination LUN assumes the attributes of the source LUN.
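The striped-expansion rules above lend themselves to a mechanical check. This sketch validates candidate FLUs against the first existing component and sums the resulting capacity; the field names are our own:

```python
def can_stripe_expand(components, new_flus):
    """True only if every new FLU matches the existing striped
    components' RAID type, user capacity, and disk type."""
    base = components[0]
    return all(f["raid"] == base["raid"]
               and f["capacity_gb"] == base["capacity_gb"]
               and f["disk"] == base["disk"]
               for f in new_flus)

def meta_capacity(components):
    """A metaLUN's capacity is the combined capacity of its FLUs."""
    return sum(f["capacity_gb"] for f in components)

base = [{"raid": "RAID5", "capacity_gb": 100, "disk": "FC"}]
good = [{"raid": "RAID5", "capacity_gb": 100, "disk": "FC"}]
bad = [{"raid": "RAID5", "capacity_gb": 100, "disk": "ATA"}]  # mixed disk type
```

The "bad" candidate fails only on the disk-type rule, which is exactly why a Fibre Channel metaLUN cannot absorb an ATA FLU.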

What is an SFS volume? SFS stands for Symmetrix File System. The Symmetrix File System is stored on two 2-way mirrored, 3 GB (6140-cylinder minimum) FBA volumes. SFS volumes are not host-addressable. As the name suggests, the SFS volume contains the Symmetrix File System, which consists of Symmetrix system information such as traces and DMSP information.
What is the process of fabric login?

The process by which a Fibre Channel node establishes a logical connection to a fabric switch.

The following sequence of events occurs when a device is connected to a storage network. All ports except private NL_Ports must go through this sequence to be able to communicate with each other in the fabric.

Link initialization: When a device is physically connected to a fabric switch port, the Fibre Channel protocol establishes a logical connection between the device (which is now known as a node) and the fabric switch. Primitive ordered sets are sent between the node and the switch to establish the link.

Fabric login: Once the physical link is established, the node sends a special frame called FLOGI to the port to allow it to communicate with the rest of the fabric. The FLOGI frame contains the S_ID field with its ALPA value filled in. This frame is received by the login server, which is located at address FFFFFE. The login server responds with the D_ID field filled in with the domain ID and area location; in other words, the device now gets a 24-bit address by which it is identified in the fabric.
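The 24-bit address handed out at fabric login is just three one-byte fields packed together; a minimal sketch:

```python
def fc_address(domain, area, alpa):
    """Pack domain ID, area, and ALPA into the 24-bit fabric address
    assigned at FLOGI (each field is one byte)."""
    for field in (domain, area, alpa):
        if not 0 <= field <= 0xFF:
            raise ValueError("each field must fit in one byte")
    return (domain << 16) | (area << 8) | alpa

# Well-known addresses from the text:
LOGIN_SERVER = 0xFFFFFE
NAME_SERVER = 0xFFFFFC
```

Note that the well-known service addresses sit at the very top of the same 24-bit space, which is why frames to FFFFFE reach the login server on any switch.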

Name server registration: The next step is for the node to register with the name server. The name server is located at address FFFFFC and obtains information from the node through the port login frame (PLOGI) and through subsequent registration frames. Information in the name server is stored in the form of database objects, and a node may register values for all or some of them depending on the requirement. The most commonly registered (and useful) objects are:
- 24-bit fabric address
- 64-bit Port Name (WWPN)
- 64-bit Node Name (WWNN)
- Class of service parameters
- FC-4 protocols supported (SCSI, IP, etc.)
- Port type, such as N_Port or NL_Port

The node also requests from the name server a list of nodes that support the same FC-4 upper-level protocols as itself; for example, a host can ask for a list of SCSI-3 devices. This list usually depends on whether restrictions have been placed on which devices the node can talk to. These limitations are assigned through zoning.

N_Port login: The node then attempts a port login (PLOGI) to all nodes on the list it receives from the switch's name server. It provides a specific set of operating characteristics associated with the destination N_Port, including which classes of service are supported, and it initializes the destination end-to-end credit. The process is repeated as other nodes are attached to other ports on the switch.

Process login (PRLI): The node then sets up the environment between itself (the originating N_Port) and the device it is communicating with (the responding N_Port). This environment is used to determine whether a LUN is present; this is the point at which storage connectivity is established. A group of related processes is collectively known as an image pair. The processes involved can be system processes, system images, control unit images, or ULP processes. Use of process login is optional from the perspective of the FC-2 layer, but may be required by a specific upper-level protocol, such as the SCSI-FCP mapping.
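The stages above happen strictly in order; a toy state machine makes that ordering explicit (the class and method names are ours, not part of any FC standard):

```python
class NPortLogin:
    """Toy model of a node's login sequence: each stage is legal only
    after all earlier stages have completed."""
    STAGES = ("link_init", "flogi", "ns_registration", "plogi", "prli")

    def __init__(self):
        self.completed = []

    def advance(self, stage):
        expected = self.STAGES[len(self.completed)]
        if stage != expected:
            raise RuntimeError(f"expected {expected!r}, got {stage!r}")
        self.completed.append(stage)

    @property
    def storage_ready(self):
        # Storage connectivity is established only once PRLI completes.
        return list(self.completed) == list(self.STAGES)
```

Attempting PLOGI before FLOGI raises immediately, mirroring the fact that a node has no fabric address to source frames from until fabric login succeeds.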

BASIC TERMS USED IN SAN:

Mesh: Fabric topology design wherein each switch in the fabric is ISLed to every other switch in the fabric.
Core/Edge: Fabric topology design wherein storage ports are connected to core switches, which are in turn connected (via ISL) to edge switches, which are connected to HBAs.
ISL: Any physical FC link that directly connects two adjacent switches. This is done in order to increase the size of a fabric and transfer data traffic from one switch to another.
Hop: Each time a frame exits one switch and enters another, that is a hop.
Routing: All routes are evaluated with the Fabric Shortest Path First (FSPF) algorithm.
Trunking: The aggregation of several ISLs into one logical unit for the purpose of ISL load balancing.
Domain: Unique numeric identifier for each switch in the fabric.
Principal Switch: The switch responsible for distributing name server information throughout the fabric.
Principal ISL: ISL responsible for carrying management traffic between switches.
Port Type: Determined by the role played by the device connected to the switch. Common types are N_Port, NL_Port, F_Port, FL_Port, E_Port, and G_Port.
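Hops and shortest-path routing can be illustrated with a breadth-first search over the switch graph. Real FSPF weighs link costs rather than simply counting hops, so this is a simplification, and the topology below is invented:

```python
from collections import deque

def hop_count(isls, src, dst):
    """Minimum number of ISLs a frame crosses between two switches
    (0 if both ports are on the same switch; None if unreachable)."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        switch, hops = queue.popleft()
        for neighbor in isls.get(switch, ()):
            if neighbor == dst:
                return hops + 1
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return None

# Core/edge fabric: two edge switches, each ISLed to one core switch.
fabric = {"edge1": ["core"], "edge2": ["core"], "core": ["edge1", "edge2"]}
```

In this core/edge design, host-to-storage traffic between the two edge switches always takes exactly two hops through the core.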

Zoning: Logical pathing used to partition a fabric.
Subscription Ratio: The ratio between host node connections on a switch and the ISL connections from that switch.

What is an E port?

An E port is connected to another E port to create an inter-switch link (ISL) between two switches. E ports are non-trunking ports.

What is an F port?

In fabric port (F port) mode, an interface functions as a fabric port. This port may be connected to a peripheral device (host or disk) operating as an N port.

IBM interview questions: LUN migration? PSM? What are vault drives? Size of 1 cylinder in DMX-3? What is VSAN configuration, and how do you configure a VSAN? What precautions should be taken before splitting a BCV? What are the different fabric topologies? What is SRDF/Star?

What is trunking?

Trunking can be defined as the aggregation of multiple ISLs to form a single logical pipe. Trunking is also a feature that allows multiple VSANs to share a common interface to another physical switch; this is referred to as VSAN trunking. Trunking enables interconnect ports to transmit and receive frames in more than one VSAN over the same physical link, using the Extended ISL (EISL) frame format.

The trunking feature includes the following restrictions:
- Trunking configurations are applicable only to E ports. If trunk mode is enabled on an E port and that port becomes operational as a trunking E port, it is referred to as a TE port.
- The trunk-allowed VSANs configured for TE ports are used by the trunking protocol to determine the allowed-active VSANs in which frames can be received or transmitted.
- If a trunking-enabled E port is connected to a third-party switch, the trunking protocol ensures seamless operation as an E port.

Advantages of trunking:
- Better SAN performance: traffic is interleaved across every available shortest path, changing lanes to fill the entire pipeline.
- Fewer switches to buy: wasted ISL bandwidth is eliminated. This is particularly important if your switches do not include dedicated high-speed stacking ports.
- Easy management: trunking mode is automatically invoked when needed, and efficient bandwidth utilization means fewer ISLs to manage.
- Enhanced reliability: non-disruptive failover for individual ISLs or complete trunks.

What is fan-in and fan-out?

Fan-in ratio: A measure of how many storage systems can be accessed by a single host at any given time. This allows a customer to expand the connectivity of a single host across multiple storage units. There can be situations where a host requires additional storage capacity and the additional space is carved from a new or existing storage unit that was previously used elsewhere; this topology then allows the host to see more storage devices. Care has to be taken not to overload the HBA: if a host requires access to several storage units, it is advisable to add more HBAs.

Fan-out ratio: A measure of the number of hosts that can access a storage port at any given time. Storage consolidation enables customers to achieve the full benefits of using enterprise storage.

What is a hop?

The number of ISLs a frame has to cross to travel from the source host to the target.

What is routing?

The path a frame takes to travel from the source to the target, evaluated with the FSPF algorithm.
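Both ratios fall out of the same host-to-storage-port mapping; a sketch (the host and port names are invented):

```python
def fan_ratios(mappings):
    """mappings: iterable of (host, storage_port) pairs.
    Fan-out of a port = number of distinct hosts sharing it.
    Fan-in of a host  = number of distinct storage ports it reaches."""
    out, inn = {}, {}
    for host, port in mappings:
        out.setdefault(port, set()).add(host)
        inn.setdefault(host, set()).add(port)
    return ({p: len(h) for p, h in out.items()},
            {h: len(p) for h, p in inn.items()})

mappings = [("hostA", "SPA0"), ("hostB", "SPA0"),
            ("hostC", "SPA0"), ("hostA", "SPB1")]
fan_out, fan_in = fan_ratios(mappings)
```

Here SPA0 has a fan-out of 3 (three hosts share it), while hostA has a fan-in of 2 (it reaches two storage ports) — the two numbers are independent views of the same connectivity.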
What is virtual storage area network?

Physical Topology may be partitioned into one or more logical fabrics called VSANs.

(VSAN) is a collection of ports from a set of connected switches that form a virtual fabric. Ports within a single switch can be partitioned into multiple VSANs, despite sharing hardware resources. Conversely, multiple switches can join a number of ports to form a single VSAN. Virtual SANs (VSANs) improve storage area network (SAN) scalability, availability, and security by allowing multiple Fibre Channel SANs to share a common physical infrastructure of switches and ISLs. These benefits are derived from the separation of Fibre Channel services in each VSAN and isolation of traffic between VSANs. Data traffic isolation between the VSANs also inherently prevents sharing of resources attached to a VSAN, like robotic tape libraries. Using IVR, resources across VSANs are accessed without compromising other VSAN benefits. Data traffic is transported between specific initiators and targets on different VSANs without merging VSANs into a single logical fabric. FC control traffic does not flow between VSANs, nor can initiators access any resource across VSANs aside from the designated ones. Valuable resources such as tape libraries are easily shared across VSANs without compromise.

Scenario-based questions:

A LUN has been assigned to the host, but for some reason the host is not able to see the LUN. Answer:
1. Check the physical connectivity and cabling.
2. Check the WWNs on the host side and the back-end device (CLARiiON, Symmetrix), and check the zoning settings.
3. Check the fabric login status in the switch management window.
4. Check the storage array for any failures, such as a disk failure or SP failure.
5. Check the status of the HBA card.

What is Symmetrix Optimizer?

Symmetrix Optimizer is a tool that automatically balances hyper-volume loads across physical disks within a Symmetrix unit by running a process on the Symmetrix service processor that analyzes hyper-volume activity. This reorganization takes place on the back end of the Symmetrix and is transparent to the host and end users.

What is a DRV device?

A DRV (Dynamic Reallocation Volume) is a device created for use by Symmetrix Optimizer to temporarily hold user data while Optimizer reorganizes the devices. Optimizer uses DRVs in device-swapping operations in a manner similar to BCV devices in TimeFinder operations. The DRV maintains the protection level of the device whose back-end locations are being optimized.

What is Access Logix?

Can we create a storage group when Access Logix is not enabled?

What is the function of the host agent? If the host agent is not installed on the host, would it be possible to see the host's information in the connectivity status of the CLARiiON (Navisphere)? What are the different types of adaptive copy? What is the difference between device groups and composite groups?

A device group (DG) is a user-defined group comprised of devices that belong to a single Symmetrix array and a single RDF (RA) group. A control operation can be performed on the group as a whole, or on the individual device pairs that comprise it. By default, a device cannot belong to more than one device group and all of the STD devices in a group must reside on the same Symmetrix array. However, if the Symmetrix options file parameter SYMAPI_ALLOW_DEV_IN_MULT_GRPS is enabled, a device can be added to multiple groups. You can use device groups to identify and work with a subset of available Symmetrix devices, obtain configuration, status, and performance statistics on a collection of related devices, or issue control operations that apply to all devices in the specified device group.

A composite group (CG) is a user-defined group comprised of devices that can belong to one or more locally attached Symmetrix arrays and one or more RDF (RA) groups within a Symmetrix. An RDF consistency group is a CG comprised of RDF devices (RDF1 or RDF2) that has been enabled for RDF consistency. The RDF consistency group acts in unison to preserve the dependent-write consistency of a database distributed across multiple SRDF systems. It maintains this consistency via PowerPath or Multi Session Consistency (MSC), which respect the logical relationships between dependent I/O cycles.
What is the difference between the CX700 and CX3-80 models of CLARiiON? How many BCV volumes can be created for a source device in a Symmetrix? How do you add members to an already existing striped meta device?

To add additional members to an existing striped meta device, use the following form:

add dev SymDevName[:SymDevName] to meta SymDevName
[, protect_data=[TRUE | FALSE], bcv_meta_head=SymDevName];

where:

protect_data - possible values are TRUE or FALSE. The protect_data option is only for striped metas. When set to TRUE, the configuration manager automatically creates a protective copy of the original device striping to the BCV meta; because this occurs automatically, there is no need to perform a BCV establish. When enabling protection via the protect_data option, you must specify a BCV meta identical to the existing (original) striped meta.

bcv_meta_head - when adding new members to an existing striped meta device, if the data on the meta device is to be protected, you must specify the name of a BCV meta that matches the original meta device in capacity, stripe count, and stripe size.

Example: to add Symmetrix devices 0013 and 0014 to striped meta 0010, enter:

add dev 0013:0014 to meta 0010, protect_data=TRUE, bcv_meta_head=00CA;

Where is the VCMDB stored?

How do I list only the RAID-5 devices on the Symmetrix?

symdev list -sid XXX -raid5

A CE is on site and says he has replaced a drive on which a BCV device was present, and he has the device numbers with him. How do we get the BCV back, and is the sync done?

How do you list the devices available to a particular director or port?

symcfg list -FA 14D -addr (in this case, FA director 14D)

List the steps involved in allocating a LUN for a new host in the SAN environment. What is LUNZ?

LUNZ has been implemented on CLARiiON arrays to make arrays visible to the host OS and PowerPath when no LUNs are bound on that array. When using a direct-connect configuration and there is no Navisphere management station to talk directly to the array over IP, the LUNZ can be used as a pathway for Navisphere CLI to send bind commands to the array. LUNZ also makes arrays visible to the host OS and PowerPath when the host's initiators have not yet logged in to the Storage Group created for the host. Without LUNZ, there would be no device on the host for Navisphere Agent to push the initiator record through to the array; this push is mandatory for the host to log in to the Storage Group. Once the initiator push is done, the host is displayed as an available host to add to the Storage Group in Navisphere Manager (Navisphere Express). LUNZ should disappear once a LUN zero is bound, or when Storage Group access has been attained. To turn on the LUNZ behavior on CLARiiON arrays, you must configure the "arraycommpath" setting.
