
Isilon contains both primary and secondary storage. Isilon specializes in storing unstructured data. Isilon specializes in handling file-based data.

Redundant Array of Independent Nodes (RAIN) uses individual servers connected via a high-speed fabric and configured with overlaying management software. In a scale-out solution, the computational throughput, the disks and disk protection, and the overarching management are combined and exist within a single node or server. Isilon RAIN uses the Reed-Solomon FEC (forward error correction) mathematical process to protect the data.

The Isilon clustered storage system also does not have any master or slave nodes. All data is striped across all nodes in the cluster. As nodes are added, the file system grows dynamically and content is redistributed. Each Isilon storage node contains globally coherent RAM, meaning that as a cluster becomes larger, it also becomes faster. Each time a node is added, the cluster's concurrent performance scales linearly.
Perhaps the best example of variety
is the world’s migration to social
media. On a platform such as
Facebook, people post all kinds of
file formats: text, photos, video,
polls, and more. According to a
CNET article from June 2012,
Facebook was taking in more than
500 terabytes of data per day,
including 2.7 billion Likes and 300
million photos. Every day. That
many kinds of data at that scale
represents Big Data variety.
Big Data is defined as any collection of data sets so large, diverse, and fast changing that it is difficult for traditional technology to efficiently process and manage; it is digital data having too much volume, velocity, or variety to be stored traditionally.

What's an example of velocity? Machine-generated workflows produce massive volumes of data. For example, the longest stage of designing a computer chip is physical verification, where the chip design is tested in every way to see not only if it works, but also if it works fast enough. Each time researchers fire up a test on a graphics chip prototype, sensors generate many terabytes of data per second. Storing terabytes of data in seconds is an example of Big Data velocity.
Lesson 1
What do we mean by volume?
Consider any global website that
works at scale. YouTube’s press
page says YouTube ingests 100
hours of video every minute. That is
one example of Big Data volume.

INGEST

Data ingestion is the process of obtaining and processing data for later use by storing it in the most appropriate system for the application. An effective ingestion methodology validates the data, prioritizes the sources, and commits data to storage with reasonable speed and efficiency.

STORAGE

Storing data is typically dictated by the type of storage strategy, namely block or file, the flow mechanism, and the application.
ANALYSIS

Data analysis technologies load, process, surface, secure, and manage data in ways that enable organizations to mine value from their data. Traditional data analysis systems are expensive, and extending them beyond their critical purposes can place a heavy burden on IT resources and costs.

A scale-out data lake is a large storage system where enterprises can consolidate vast amounts of their data from other solutions or locations into a single store: a data lake. The data can be secured, analysis performed, insights surfaced, and actions taken. These are key characteristics of a scale-out data lake.

Organizational data typically follows a linear data flow, starting with various sources, both consumer and corporate.

APPLICATION: SURFACE AND ACT

Post-analysis, results and insights have to be surfaced for actions like e-discovery, post-mortem analysis, business process improvements, decision making, or a host of other applications.

Traditional systems use traditional protocols and access mechanisms, while new and emerging systems are redefining access requirements to data already stored within an organization. A system is not complete unless it caters to the range of requirements placed by traditional and next-generation workloads, systems, and processes.
Isilon enhances the data lake concept by enriching your storage with improved cost efficiencies, reduced risks, data protection, security, and compliance and governance, while enabling you to get to insights faster. You can reduce the risks and operational expenses of your big data project implementation, and try out pilot projects on real business data before investing in a solution that meets your exact business needs. Isilon is based on a fully distributed architecture that consists of modular hardware nodes arranged in a cluster. As nodes are added, the file system expands dynamically, scaling out capacity and performance without adding corresponding administrative overhead.
Architecturally, every Isilon node is a peer to
every other Isilon node in a cluster, allowing any
node in the cluster the ability to handle a data
request. The nodes are equals within the cluster
and no one node acts as the controller or the
filer. Instead, the OneFS operating system
unites all the nodes into a globally coherent
pool of memory, CPU, and capacity. As each
new node is added to a cluster, it increases the
aggregate disk, cache, CPU, and network
capacity of the cluster as a whole.
All nodes have two mirrored local flash drives that store the local operating system, or OS, as well as drives for client storage. All storage nodes have a built-in NVRAM cache that is either battery backed-up or that will perform a vault to flash memory in the event of a power failure.

The Isilon product family consists of four storage node series: S-Series, X-Series, NL-Series, and the new HD-Series.

The S-Series is for ultra-performance primary storage and is designed for high-transactional and IO-intensive tier 1 workflows.

The X-Series strikes a balance between large capacity and high-performance storage. X-Series nodes are best for high-throughput and high-concurrency tier 2 workflows and also for larger files with fewer users.

The NL-Series is designed to provide a cost-effective solution for tier 3 workflows, such as nearline storage and data archiving. It is ideal for nearline archiving and for disk-based backups.

The HD-Series is the new high-density, deep archival platform. This platform is used for archival-level data that must be retained for long, if not indefinite, periods of time.
Isilon offers an SSD option for storing metadata or file data. SSDs are used as a performance enhancement for metadata. Isilon nodes can leverage enterprise SSD technology to accelerate namespace-intensive metadata operations.

Lesson 2

The X-Series and S-Series nodes can also use SSDs for file-based storage, which enables the placement of latency-sensitive data on SSDs instead of traditional fixed disks. Data on SSDs provides better large-file random-read throughput for small block sizes (8k, 16k, 32k) than data on HDD drives.

The Isilon offering includes the option to combine SSD and SAS or SSD and SATA drives in one chassis to suit the customer's storage requirements. For example, you can have 6 SSD and 6 SATA drives in an X200. NOTE: When using a hybrid node (a node with SSDs and SAS/SATA HDDs), the SSD drives must be located starting in bay 1 and up through bay 6.

All clusters must start with a minimum of three like-type, or identical, nodes. This means that when starting a new cluster you must purchase three identical nodes. All similar nodes must initially be purchased in groups of three due to the way that OneFS protects the data.

If you accidentally bought three S-nodes and two X-nodes, you could still form a cluster, but only the three S-nodes would be writeable. The two X-nodes would add memory and processing to the cluster but would sit in a read-only mode until a third X-node was joined. Once the third X-node was joined, the three X-nodes would automatically become writable and add their storage capacity to the whole of the cluster.

As of this publication, clusters can scale up to a maximum of 144 nodes and access 36.8 TB of global system memory.
Module 1: Intro to Isilon

InfiniBand is a point-to-point, microsecond-latency interconnect that is available in 20 Gb/sec Double Data Rate (DDR) and 40 Gb/sec Quad Data Rate (QDR) models of switches.

For the internal network, the nodes in an Isilon cluster are connected by a technology called InfiniBand. Using a switched star topology, each node in the cluster is one hop away from any other node.

If you fill up all the ports on the back-end switches, you will need to buy larger switches, as it is absolutely not supported to 'daisy chain' the back-end switches.

To ensure the cables do not become damaged, remember not to coil them to less than 10 inches in diameter.

Connection from the nodes to the internal InfiniBand network comes in copper or fibre, depending on the node type. Use a hybrid QSFP-CX4 cable to connect DDR InfiniBand switches with CX4 ports to nodes that have QSFP (Quad Small Form-factor Pluggable) ports (A100, S210, X410, and HD400).

The key to Isilon's storage cluster solutions is the architecture of OneFS, which is a distributed cluster file system.

Data redundancy is accomplished by striping data across the nodes instead of the disks, so that redundancy and performance are increased. For the purposes of data striping, you can consider each node as an individual device.
You have three options for
managing the cluster. You can use
the web administration interface,
the CLI, and PAPI, which is the
Platform Application Programming
Interface.

The Isilon web administration interface requires that at least one IP address is configured on one of the external Ethernet ports on one of the nodes. The Ethernet port IP address is configured either manually or by using the Configuration Wizard. Administration can be done on any node in the cluster via a browser and a connection to port 8080.

To log into the web administration interface, you need to use the root account, the admin account, or be a member of a role which has the ISI_PRIV_LOGIN_PAPI privilege assigned to it. The permissions used to accomplish locking down of the web UI are called RBAC, role-based access control.
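As a quick sketch of both access paths described above (the node address is illustrative, and the exact PAPI endpoint is an assumption rather than something these notes specify):

    https://192.168.10.100:8080
    curl -k -u admin https://192.168.10.100:8080/platform/1/cluster/identity

The first URL is opened in a browser for the web administration interface; the second shows that the same port also serves PAPI requests for any account holding the ISI_PRIV_LOGIN_PAPI privilege.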

To access the CLI out-of-band, a serial cable is used to connect to the serial port on the back of each node. The CLI can also be accessed in-band once an external IP address has been configured for the cluster. Because Isilon is built upon FreeBSD, many UNIX-based commands, such as grep, ls, cat, etc., will work via the CLI. There are also Isilon-specific commands known as isi ("izzy") commands that are specifically designed to manage OneFS.

The default shell is zsh

The UNIX shell environment used in OneFS allows scripting and execution of many of the original UNIX commands.

The CLI can be accessed by opening a secure shell (SSH) connection to any node in the cluster. This can be done by root or any user with the ISI_PRIV_LOGIN_SSH privilege.
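A minimal sketch of that in-band connection (the cluster address is illustrative):

    ssh root@cluster-1.example.com

Any node in the cluster accepts the connection; a non-root account needs the ISI_PRIV_LOGIN_SSH privilege described above.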

The isi status command provides an overview of the cluster and will rapidly show if any critical hardware issues exist.

The man isi or isi --help command is probably the most important command for a new administrator. It provides an explanation of the many isi commands available.

The isi devices command displays a single node at a time. Using the isi_for_array command, all drives in all nodes of the cluster can be displayed at one time. Using isi devices -d <node#:bay#>, an individual drive's details are displayed.
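A short sketch of those commands together; the node and bay numbers are illustrative, and the quoting style for isi_for_array is an assumption about its usual form:

    isi status                    # cluster overview; flags critical hardware issues
    man isi                       # reference for the isi command family
    isi devices                   # drive status for a single node
    isi_for_array 'isi devices'   # the same report from every node in the cluster
    isi devices -d 1:2            # details for the drive in node 1, bay 2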

Lesson 3

When a node is first powered on or reformatted, the Configuration Wizard automatically starts. If the Configuration Wizard starts, the prompt displays as shown above. There are four options listed:

1. Create a new cluster
2. Join an existing cluster
3. Exit wizard and configure manually
4. Reboot into SmartLock Compliance mode

The isi config command opens the Configuration Console, where node and cluster settings can be configured. The Configuration Console contains settings that are configured during the Configuration Wizard that ran when the cluster was first created.

Restart or shut down the cluster via the web administration interface or the CLI. In the web administration interface, click Cluster Management > Hardware Configuration > Shutdown & Reboot Controls. In the CLI, run the isi config command. The following command restarts a single node by specifying the logical node number (lnn): reboot 6. The following command shuts down all nodes on the cluster: shutdown all.
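A sketch of that isi config session; the console prompt and the lnn value are illustrative:

    Cluster-1# isi config
    >>> reboot 6        # restart only the node with logical node number 6
    >>> shutdown all    # shut down every node in the cluster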

If a node attempts to join the cluster with a newer or older OneFS version, the cluster will automatically reimage the node to match the cluster's OneFS version. After this reimage completes, the node finishes the join. A reimage should not take longer than 5 minutes, which brings the total amount of time for the join to approximately 10 minutes.

There are 4 methods to join additional nodes to the cluster:

1. The first method is to add the node using the command-line interface.
2. The second method is to join the additional nodes to the cluster via the front panel of the node.
3. The third method is to use the web administration interface.
4. The fourth method is to use the CLI using isi devices.

To initially configure an Isilon cluster, the CLI must be accessed by establishing a serial connection to the node designated as node 1. The serial port is usually a male DB9 connector. Configure the terminal emulator utility to use the following settings:

Transfer rate = 115,200 bps
Data bits = 8
Parity = none
Stop bits = 1
Flow control = hardware

If you log in as the root user, the prompt will end in a # symbol. If you log in as another user, it will end in a % symbol. For example, Cluster-1# or Cluster-1%.
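One way to open that serial session from an administrator's workstation; the client tool and device path are assumptions, not named in these notes:

    screen /dev/ttyUSB0 115200    # 115,200 bps; set 8 data bits, no parity, 1 stop bit per the settings above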
Role-based administration defines the ability to perform specific administrative functions through specific privileges. You can create a role and assign privileges to that role. A user can be assigned to more than one role and will then have the combined privileges of those roles.

A role is made up of the privileges (read or full control) that can be performed on an object. OneFS offers both built-in and custom roles.

The Admin group from previous versions of OneFS was eliminated with OneFS 7.0, and customers with existing members of the Admin group must add them to a supported role in order to maintain identical functionality. However, the root and admin user accounts still exist on the cluster. The root account has full control through the CLI and the web administration interface, whereas the admin account only has access through the web administration interface.

AuditAdmin: Provides read-only access to configurations and settings.

SecurityAdmin: Provides the ability to manage authentication to the cluster.

SystemAdmin: Provides all administrative functionality not exclusively defined under the SecurityAdmin role.

VmwareAdmin: Provides all administrative functionality required by the vCenter server to effectively utilize the storage cluster.

BackupAdmin: The initial reaction may be that the BackupAdmin role is for use with the NDMP protocol; however, that is not the case. The BackupAdmin role allows for backing up and restoring files across SMB and RAN (RESTful access to namespace). The two new privileges, ISI_PRIV_IFS_BACKUP and ISI_PRIV_IFS_RESTORE, allow you to circumvent the traditional file access checks, the same way that the root account has the privileges to circumvent the file access checks; this is all that BackupAdmin allows you to do.

Built-in roles designate a predefined set of privileges that cannot be modified. OneFS 7.1.1 adds BackupAdmin as one of the five built-in administrative roles: AuditAdmin, SecurityAdmin, SystemAdmin, VmwareAdmin, and, with OneFS 7.1.1, BackupAdmin.

Assign users to both the SystemAdmin and SecurityAdmin roles to provide full administration privileges to an account. By default, the admin and root users are members of both of these roles.

For example, an authenticated user connecting over SSH with the ISI_PRIV_IFS_BACKUP privilege will be able to traverse all directories and read all file data and metadata regardless of the file permissions. This allows that user to use the SSH protocol as a backup protocol to another machine, without getting access-denied errors, and without connecting as the root user.

Best use case: Prior to OneFS 7.1.1, when using Robocopy to copy data from a Windows box to the cluster, you would have to create a special share and set the RUN AS ROOT permission so that anyone who connected to that share would have root access. With the two new privileges you can use any share to run these copy-type tools without having to create a special share and use the RUN AS ROOT permission.

Prior to OneFS 7.1.1, all RBAC administration was done at the command line only.

Lesson 4

The job engine performs cluster-wide automation of tasks on the cluster. The job engine is a daemon that runs on each node. The daemon manages the separate jobs that are run on the cluster. The isi_job_d daemons on each node communicate with each other to confirm actions are coordinated across the cluster. This communication ensures that jobs are shared between nodes to keep the workload as evenly distributed as possible. The daemons run continuously, and spawn off processes to perform jobs as necessary. Individual jobs are procedures that run until complete. Individual jobs are scheduled to run at certain times, are started by an event such as a drive failure, or are manually started by the administrator. Jobs do not run on a continuous basis.

A job is a specific task, or family of tasks, intended to accomplish a specific purpose. Jobs can be scheduled or invoked by a certain set of conditions.

All jobs have priorities. If a low-priority job is running when a high-priority job is called for, the low-priority job is paused, and the high-priority job starts to run. Job progress is periodically saved by creating checkpoints. Jobs can be paused, and these checkpoints are used to restart jobs at the point the job was paused once the higher-priority job has completed.

A job running at a high impact level can use a significant percentage of cluster resources, resulting in a noticeable reduction in cluster performance.

OneFS does not enable administrators to define custom jobs. It does permit administrators to change the configured priority and impact levels for existing jobs. Changing the configured priority and impact levels can impact cluster operations.

The job engine can run up to three jobs at a time. The relationship between the running jobs and the system resources is complex. Several dependencies exist between the category of the different jobs and the amount of system resources consumed before resource throttling begins.

Job Engine Terminology

Job - An application built on the distributed work system of the job engine. A specific instance of a job, often just called a job, is controlled primarily through its job ID, which is returned by the isi job jobs start command.

Phase - One complete stage of a job. Some jobs have only one phase, while others, like MediaScan, have as many as seven. If an error occurs in a phase, the job is marked failed at the end of the phase and does not progress. Each phase of a job must complete successfully before advancing to the next stage or being marked complete, returning a job state Succeeded message.

Task - A task is a division of work. A phase is started with one or more tasks created during job startup. All remaining tasks are derived from those original tasks, similar to the way a cell divides. A single task will not split if one of the halves reduces to a unit less than whatever makes up an item for the job. At this point, the task reduces to a single item. For example, if a task derived from a restripe job has the configuration setting to a minimum of 100 logical inode numbers (LINs), then that task will not split further if it derives two tasks, one of which produces an item with less than 100 LINs. A LIN is the indexed information associated with specific data.

Task result - A task result is a usually small set of statistics about the work done by a task up to that point. A task will produce one or more results; usually several, sometimes hundreds. Task results are produced by merging item results, usually on the order of 500 or 1000 item results in one task result. The task results are themselves accumulated and merged by the coordinator. Each task result received on the coordinator updates the status of the job phase seen in the isi job status command.

Item - An item is an individual work item, produced by a task. For instance, in quotascan an item is a file, with its path, statistics, and directory information.
Item result - An accumulated accounting of work on a single item; for instance, it might contain a count of the number of retries required to repair a file, plus any errors found during processing.

Checkpoints - Tasks and task results are written to disk, along with some details about the job and phase, in order to provide a restart point.

Lesson 1: Job Engine Architecture

How the Job Engine works: The job engine consists of all the job daemons across the whole cluster. The job daemons elect a job coordinator. The election is won by the first daemon to respond when a job is started.

Jobs can have a number of phases. There might be only one phase, for simpler jobs, but more complex ones can have multiple phases. Each phase is executed in turn, but the job is not finished until all the phases are complete.

Each phase is broken down into tasks. These tasks are distributed to the nodes by the coordinator, and the job is executed across the entire cluster.

Each task consists of a list of items. The result of each item's execution is logged, so that if there is an interruption the job can restart from where it stopped.

Job Engine v2.0

ISI Data Integrity (IDI) is the OneFS process that protects file system structures against corruption via 32-bit CRC checksums. All Isilon blocks, both for file and metadata, use checksum verification. IDI is designed to protect permanent internal structures (on-disk data structures), transient internal structures (in-memory data structures), and file data on the cluster.

In OneFS, data is protected at multiple levels. Each data block is protected using cyclic redundancy checks, or CRC checksums. Every file is striped across nodes and protected using error-correcting codes, or ECC protection. Metadata checksums are housed in the metadata blocks themselves, whereas file data checksums are stored as metadata, thereby providing referential integrity.

In the event that the recomputed checksum does not match the stored checksum, OneFS will generate a system event, log the event, retrieve and return the corresponding FEC block to the client, and attempt to repair the suspect data block.

Job Coordinator: Use isi_job_d status from the CLI to find the coordinator node. The node number displayed is the node array ID.

Job Engine v2.0 Components

Job Workers

Exclusion Sets - Job Engine v2.0

Lesson 2: Jobs and Job Configuration Settings

Isilon supports different methods of data protection. The first method is mirroring. Mirroring creates a duplicate copy of the data being protected, and the OneFS operating system supports multiple mirror copies of the data; in fact, it is possible to create up to 7 mirrors of the data on a single cluster. Mirroring has the highest protection overhead in disk space consumption. However, for some types of workloads, such as NFS datastores, mirroring is the preferred protection option.

The primary protection option on an Isilon cluster is known as Forward Error Correction, or FEC. FEC offers a higher level of protection than RAID and the ability to sustain the loss of up to four drives or nodes in a node pool. The Isilon system uses the Reed-Solomon algorithm, which is an industry-standard method to create error-correcting codes, or ECC, at the file level.

The Isilon system also protects the metadata associated with the file data. The metadata is protected at one level higher than the data using metadata mirroring. So, if the data is protected at N+3n, then the metadata is protected at 4X.

In RAID systems, the protection is applied at the physical disk level and all data is protected identically. Isilon allows you to define the protection level at the node pool (a group of similar nodes), directory, or even individual file level, and have multiple protection levels configured throughout the cluster.

OneFS can support protection levels of up to N+4n. The data can be protected with an N+4n scheme, where up to 4 drives, nodes, or a combination of both can fail without data loss. On an Isilon cluster, you can enable N+2n, N+3n, or N+4n protection, which allows the cluster to sustain two, three, or four simultaneous failures without resulting in data loss.

In OneFS, protection is calculated per individual file and not based on the hardware. OneFS provides the capability to set a file's protection level at multiple levels. The requested protection can be set by the default system setting, at the node pool level, per directory, or per individual file.

File stripes are portions of a file that will be contained in a single data and protection band distributed across nodes on the cluster. Each file stripe contains both data stripe units and protection stripe units.

The file stripe width, or size of the stripe, varies based on the file size, the number of nodes in the node pool, and the requested protection level to be applied to the file. The number of file stripes can range from a single stripe to thousands of stripes per file.
Module 8: Job Engine

The file data is broken into 128KB data stripe units consisting of 16 x 8KB blocks per data stripe unit. A single file stripe width can contain up to 16 x 128KB data stripe units, for a maximum size of 2MB as the portion of the file's data. The data stripe units and protection stripe units are calculated for each file stripe by the Block Allocation Manager (BAM) process. The BAM process calculates 128KB FEC stripe units to meet the requested protection level for each file stripe. The higher the desired protection level, the more FEC stripe units are calculated.
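A quick worked check of that stripe arithmetic (the file size is assumed for illustration):

    data stripe unit  = 16 x 8 KB blocks = 128 KB
    maximum stripe    = 16 x 128 KB data stripe units = 2 MB of file data
    a 1 MB file       = 8 x 128 KB data stripe units, so it fits in one stripe
    at +2 protection  = the BAM adds 2 x 128 KB FEC stripe units to that stripe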

OneFS uses advanced data layout algorithms to determine data layout for maximum efficiency and performance. Data is evenly distributed across nodes in the node pool as it is written. The system can continuously reallocate where the data is stored to make storage space more usable and efficient.

To get to the job engine information in OneFS 7.2, click CLUSTER MANAGEMENT, then click Job Operations. The available tabs are Job Summary, Job Types, Job Reports, Job Events, and Impact Policies.
Within the cluster, every disk within each node is assigned both a unique GUID and a logical drive number, and is subdivided into 32MB cylinder groups comprised of 8KB blocks. Each cylinder group is responsible for tracking, via a bitmap, whether its blocks are used for data, inodes, or other metadata constructs. The combination of node number, logical drive number, and block offset comprises a block or inode address and falls under the control of the aptly named Block Allocation Manager (BAM).

OneFS stripes the data stripe units and FEC stripe units across the nodes.

A simple example of the write process:

1. The client saves a file to the node it is connected to.
2. The file is divided into data stripe units. The data stripe units are assembled into the maximum stripe widths for the file.
3. FEC stripe unit(s) are calculated to meet the requested protection level.
4. The data and FEC stripe units are striped across nodes.

FEC protection stripes are calculated using the same algorithm. The different requested protection schemes can utilize a single drive per node, or multiple separate drives per node, on a per-protection-stripe basis. When a single drive per node is used, it is referred to as N+M or N+Mn protection. When multiple drives per node are used, it is referred to as N+M:B or N+Md:Bn protection.

FEC is calculated for each protection stripe and not for the complete file. For file system data, FEC calculates both mirroring and FEC protection stripes. When the system determines mirroring is to be used as the protection, the mirrors are calculated using the FEC algorithm. The algorithm is run anytime a requested protection setting is other than 2X to 8X.

Protection calculations in OneFS are performed at the block level, whether using mirroring or FEC stripe units. Files are not always exactly 128KB in size. 8KB blocks are used to store files in OneFS, and OneFS only uses the minimum required number of 8KB blocks to store a file, whether it is data or protection. FEC is calculated at the 8KB block level for each portion of the file stripe.

Lesson 3: Managing Jobs

Mirroring can be explicitly set as the requested protection level in all available locations. One particular use case is where the system is used only to store small files. A file of 128KB or less is considered a small file.

Under certain conditions, mirroring is set as the actual protection on a file even though another requested protection level is specified. If the files are small, the FEC protection for the file results in mirroring.

In addition to its use in protecting file data, mirroring is used to protect the file's metadata and some system files that exist under /ifs in hidden directories.

Mirroring is also used if the node pool is not large enough to support the requested protection level. As an example, if there are 5 nodes in a node pool and N+3n is the requested protection, the file data is saved at the 4X mirror level as the actual protection.

Lesson 1

As displayed in the graphic, only a single data stripe unit or a single FEC stripe unit is written to each node. These requested protection levels are referred to as N+M or N+Mn.

M represents the number of simultaneous drive failures on separate nodes that can be tolerated at one time. It also represents the number of simultaneous node failures that can be tolerated at one time. A combination of both drive failures on separate nodes and node failures is also possible.

N must be greater than M to gain benefit from the data protection. Referring to the chart, the minimum numbers of nodes required in the node pool for each requested protection level are: three nodes for N+1n, five nodes for N+2n, seven nodes for N+3n, and nine nodes for N+4n. If N equals M, the protection overhead is 50 percent. If N is less than M, the protection results in a level of FEC-calculated mirroring.
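A quick worked check of those overhead figures (the stripe shapes are inferred from the minimum node counts above):

    N+2n on its five-node minimum pool: 3 data + 2 FEC stripe units, overhead = 2/5 = 40 percent
    when N equals M, e.g. 2 data + 2 FEC: overhead = 2/4 = 50 percent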

The isi job status command is used to view currently running, paused, or
queued jobs, and the status of the most recent jobs. Use this command
to view running and most recent jobs quickly. Failed jobs are clearly
indicated with messages.

The isi job statistics command includes the options of list and view. The verbose option provides detailed information about job operations. To get the most information about all current jobs, use the isi job statistics list -v command.
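A short sketch of the two job monitoring commands just described:

    isi job status              # running, paused, queued, and recent jobs
    isi job statistics list -v  # verbose detail about current job operations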
The available N+Mn requested protection levels are +1n, +2n, +3n, and +4n.

With N+Mn protection, only one stripe unit is located on a single
node. Each stripe unit is written to a
single drive on the node. Assuming
the node pool is large enough, the
maximum size of the file stripe
width is 16 data stripe units plus
the protection stripe units for the
requested protection level.
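A worked sketch of that maximum stripe width (the protection level is chosen for illustration):

    at N+4n: 16 data stripe units + 4 FEC stripe units = 20 stripe units, one per node
    file data per stripe = 16 x 128 KB = 2 MB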

The : (colon) represents an "or" conjunction. The B value represents the number of node losses that can be tolerated without data loss.

Multiple data stripe units and FEC stripe units are placed on separate drives on each node. This is referred to as N+M:B or N+Md:Bn protection. These protection schemes are represented as +Md:Bn in the OneFS web administration interface and the command-line interface. The single protection stripe spans the nodes and each of the included drives on each node. The supported N+Md:Bn protections are N+2d:1n, N+3d:1n, and N+4d:1n.

As an example of a 1MB file with a requested protection of +2d:1n: two stripe units, either data or protection stripe units, are placed on separate drives in each node. Two drives on different nodes per sub pool can be lost simultaneously, or a single node, without the risk of data loss.

N+2d:1n is the default node pool requested protection level in OneFS. M is the number of stripe units or drives per node, and the number of FEC stripe units per protection stripe. N+Md:Bn utilizes multiple drives per node as part of the same data stripe, with multiple stripe units per node. N+Md:Bn protection lowers the protection overhead by increasing the size of the protection stripe. The same maximum of 16 data stripe units per stripe is applied to each protection stripe.

As examples of the available N+Md:Bn requested protection levels: N+2d:1n contains 2 FEC stripe units and has 2 stripe units per node. N+3d:1n contains 3 FEC stripe units and has 3 stripe units per node. N+4d:1n contains 4 FEC stripe units and has 4 stripe units per node. N+3d:1n and N+4d:1n are most effective with larger file sizes on smaller node pools. Smaller files are mirrored when these protection levels are requested.

In addition to the previous N+Md:Bn forms, there are two advanced forms of requested protection: N+3d:1n1d and N+4d:2n. N+3d:1n1d includes three FEC stripe units per protection stripe and provides protection for three simultaneous drive losses, or one node and one drive loss. The maximum number of data stripe units is 15, not 16, when using N+3d:1n1d requested protection. N+4d:2n includes four FEC stripe units per stripe and provides protection for four simultaneous drive losses, or two simultaneous node failures. These are examples of the advanced N+Md:Bn protection schemes.

An example to assist in clarifying N+2:1 even better: if there is a 10-node cluster, 2 FEC stripe units would be calculated on the 8 data stripe units of a 1 MB file using an N+2 protection level. The protection overhead in this case is 20 percent (2 of the 10 stripe units). On a smaller node pool using N+2 protection, the same 1 MB file would be placed into 3 separate data stripes, each with 2 protection stripe units. A total of 6 protection stripe units is required to deliver the requested protection level for the 8 data stripe units, so the protection overhead is 43 percent (6 of 14 stripe units). Using N+2:1 protection, the same 1 MB file requires 1 data stripe, 2 drives per node wide, and only 2 protection stripe units. The 10 stripe units are written to 2 different drives per node. The protection overhead is the same as on the 10-node cluster, 20 percent.

Lesson 1: Cluster Event Architecture

An event is a notification that provides important information about the health or performance of the cluster. Some of the areas covered include task state, threshold checks, hardware errors, file system errors, connectivity state, and a variety of other miscellaneous states and errors.

Events provide notifications for any ongoing issues and display the history of an issue. This information can be sorted and filtered by date, type/module, and criticality of the event. Events and event notifications enable you to receive information about the health and performance of the cluster, including drives, nodes, snapshots, network traffic, and hardware.

The purpose of the cluster events log (CELOG) is to monitor, log, and report important activities and error conditions on the nodes and cluster. Different processes that monitor cluster conditions, or that have a need to log important events during the course of their operation, communicate with the CELOG system. The CELOG system is designed to provide a single location for the logging of events. CELOG provides a single point from which event notifications are generated, including sending alert emails and SNMP traps.

The CELOG system receives event messages from other processes in the system. Multiple related or duplicate event occurrences are grouped, or coalesced, into one logical event by the OneFS system. The raw events are processed by the CELOG coalescers and are stored in log databases. Events are presented in a reporting format through SNMP polling, as CLI messages, or as web administration interface events. The events generate notifications, such as ESRS notifications, SMTP email alerts, and SNMP traps.

Reading Event Type

Instance ID – The unique event identifier.
Start time – When the event began.
End time – When the event ended, if applicable.
Quieted time – When the event was quieted by the user.
Event type – The event database reference ID. Each event type references a table of information that populates the event details and provides the template for the messages displayed.
Category – The category of the event: hardware, software, connectivity, node status, etc.
Message – More specific detail about the event.
Scope – Whether the event is cluster-wide or pertains to a particular node.
Update count – If the event is a coalesced or re-occurring event, the event count is updated.
Event hierarchy – Normal event or a coalescing event.
Severity – The level of the event severity, from informational (info), to a warning event (warn), a critical event, or an emergency event.
Extreme severity – The highest severity level received for coalesced events, where the severity level may have changed based on the values received, especially for threshold violation events.
Value – A variable associated with a particular event. What is displayed varies according to the event generated. In the example displayed, the value 1 represents true, where 0 would represent false for the condition. In certain events it represents the actual value of the monitored event, and for some events the value field is not used.
Extreme value – Represents the threshold setting associated with the event. In the example displayed, the true indicator is the threshold for the event. This field could represent the threshold exceeded that triggered the event notification to occur.

Lesson 2: Working with System Events

A coalesced event is spawned by an ancestry event, which is the first occurrence of the same event.

Display Coalesced Event Details: To display the event details, on the Summary page, in the Actions column, click View details.

The protection overhead for each protection level depends on the file size and the number of nodes in the cluster.

Use the isi events command to display and manage events through the CLI. You can access and configure OneFS events and notification rules settings using the isi events command. Use isi events -h to list available command actions and options.

isi events list command – Lists events, either by default or using the available options to refine the output, including specific node, event types, severity, and date ranges.

isi events show command – Displays event details associated with a specific event.

isi events quiet command – Changes an event's status to quieted, removes it from the new events list, and adds it to the quieted events list.

isi events unquiet command – Changes an event's status to unquieted and re-adds it to the new events list.

isi events cancel command – Changes an event's status to cancelled and adds it to the event history list.

isi events notifications command – Used to set the notification rules, including the method of notification, email addresses, and contacts based on event severity level.

isi events settings command – Used to list event settings.

isi events sendtest command – Sends a test notification to all notification recipients.
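A brief sketch of those commands in use; the event instance ID is illustrative:

    isi events list               # current events
    isi events show 1.123         # details for one event
    isi events quiet 1.123        # move it to the quieted events list
    isi events sendtest           # confirm notification settings work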

Quieting vs Canceling Events

From a cluster-wide setting, the requested protection in the default file pool policy is applied to any file or folder that has not been set by another requested protection policy. A requested protection level is assigned to every node pool. In OneFS, the requested protection can be set at the directory or individual file level.

Management of the requested protection levels is available using the web administration interface, the CLI, or the Platform Application Programming Interface (PAPI).
If you configure email event notifications, you designate recipients and specify SMTP, authorization, and security settings.

If you configure the OneFS cluster for SNMP monitoring, you select events to send SNMP traps to one or more network monitoring stations, or trap receivers.

When you configure event notification rules, you can choose from three methods to notify recipients: email, ESRS, or SNMP trap. Each event notification can be configured through the web administration interface or the command-line interface.

The isi events notifications command is used to manage the details for specific or all notification rules. The isi events settings command manages the values of global settings or the settings for a specific notification policy. The isi events sendtest command sends a test event notification to verify event notification settings.

Requested protection configuration is available at multiple levels. The default file pool policy protection setting is to use the node pool or tier setting. Requested protection is set per node pool. When a node pool is created, the default requested protection applied to the node pool is +2d:1n. The required minimum requested protection for an HD400 node pool is +3d:1n1d.

InsightIQ helps you monitor and analyze Isilon cluster performance and file systems. You can use it to:

Determine whether a cluster is performing optimally.
Compare changes in performance across multiple metrics, such as CPU usage, network traffic, protocol operations, and client activity.
Correlate critical cluster events with performance changes.
Determine the effect of workflows, software, and systems on cluster performance over time.
View and compare properties of the data on the file system.

The current version of InsightIQ is 3.1, and only this version covers all the features of OneFS 7.2.

InsightIQ has a straightforward layout of its independent units. Inside the Isilon cluster, monitoring information is generated by isi_stats_d and presented through isi_api_d, which handles PAPI calls, over HTTP. The storage cluster collects statistical data in isi_stats_d, then uses the Platform API to deliver that data via HTTP to the InsightIQ host.

By default, InsightIQ stores cluster data on a virtual hard drive. The drive must have at least 64GB of free disk space. As an alternative, you can configure InsightIQ to store cluster data on an Isilon cluster or any NFS-mounted server.

SmartPools file pool policies are used to automate data management, including the application of requested protection settings to directories and files, the storage pool location, and the I/O optimization settings. A SmartPools license is required to create custom file pool policies. Custom policies can be filtered on many different criteria for each policy, including file path or metadata time elements. Without a SmartPools license, only the default file pool policy is applied.

Manual settings are often used to modify the protection on specific directories or files. The settings can be changed at the directory or subdirectory level, and individual file settings can be manually changed. Using manual settings is not recommended; manual settings can return unexpected results and create management issues as the data and cluster age.

Lesson 3: InsightIQ Overview

The InsightIQ File System Analytics (FSA) feature lets you view and analyze file system reports. When FSA is enabled on a monitored cluster, an FSA job runs on the cluster and collects data that InsightIQ uses to populate reports. You can modify how much information is collected by the FSA job through OneFS.

Lesson 2
File system explorer is used to view the directories and files on the cluster. You can also modify the properties of any directory or file; the properties are stored for each file in OneFS. Accessing file system explorer requires the administrator to log in as root. The requested protection is displayed in the Policy column. To modify the requested protection level, click Properties.

Before installing InsightIQ, you must obtain an InsightIQ license key from Isilon.

Suggested Protection refers to the visual status and CELOG event notification for node pools that are set below the calculated suggested protection level. The suggested protection is based on meeting the minimum mean time to data loss, or MTTDL, standard for EMC Isilon node pools. When a new node pool is added to a cluster or the node pool size is modified, the suggested protection level is calculated and the MTTDL calculations are compared to a database for each node pool. The calculations use the same logic as the Isilon Sizing Tool, which is an online tool used primarily by EMC Isilon Pre-Sales engineers and business partners.
What commonly occurs is that a node pool starts small and then grows beyond the configured requested protection level. The once-adequate +2d:1n requested protection level is no longer appropriate, but is never modified to meet the increased MTTDL requirements. The suggested protection feature provides a method to monitor and notify users when the requested protection level should be changed.

Module 2: Data Protection

The default requested protection setting for all new node pools is +2d:1n, which protects the data against either the simultaneous loss of two drives or the loss of a single node.

By default, the suggested protection feature is enabled on new clusters. On clusters upgraded to OneFS 7.2, the feature is disabled by default.

The suggested protection feature notifies the administrator only when the requested protection setting is below the suggested level for a node pool. The notification does not give the suggested setting, and node pools that are within suggested protection levels are not displayed. Suggested protection is part of the SmartPools health status reporting. In the web administration interface, suggested protection notifications are located under FILE SYSTEM MANAGEMENT > Storage Pools > Summary and are included with other storage pool status messages.

In the web administration interface, go to DASHBOARD > Events > Summary. Use the CLI command isi events list to display the list of events.

The DASHBOARD provides an aggregated cluster overview and a cluster-by-cluster overview. In the Aggregated Cluster Overview section, you can view the status of all monitored clusters as a whole. There is a list of all the clusters and nodes that are monitored. Total capacity, data usage, and remaining capacity are shown. Overall health of the clusters is displayed. There are graphical and numeral indicators for Connected Clients, Active Clients, Network Throughput, File System Throughput, and Average CPU Usage. There is also a Cluster-by-Cluster Overview section that can be expanded.
When a node pool is below the recommended suggested protection level, a CELOG event is created, which can be viewed like any other event. An alternate method to using the Isilon Sizing Tool is to simply change the requested protection setting to a higher protection setting and see if the Caution notification is still present.

You should also verify and, if required, modify any appropriate SmartPools file pool policies. Often the default file pool policy reflects a fixed requested protection level, which can override node pool specific settings. Verify that the default file pool policy setting for requested protection is set to use the requested protection level of the node pool or tier.

Module 7: Monitoring

InsightIQ needs a location where it can store the monitoring database it maintains. On the Settings tab, the Data Store submenu opens the interface for entering those parameters. The data store size requirements vary depending on how many clusters the customer wants InsightIQ to monitor, how many nodes comprise the monitored clusters, how many clients the monitored clusters have, and the length of time that the customer wants to retain retrieved data. If they want InsightIQ to monitor more clusters with more clients and nodes, or if they want to retain data over a longer period of time, they will need a larger data store. Start with at least 70 GB of free disk space available.
To specify an NFS data store, click Settings > Data Store. The Configure Data Store Path page appears and displays the current location of the data store. In the NFS server text box, type the host name or IP address of the server or Isilon cluster on which collected performance data will be stored. In the Data store path text box, type the absolute path, beginning with a slash mark (/), to the directory on the server or cluster where you want the collected data to be stored. This field must only contain ASCII characters. Click Submit.

Verify that a valid InsightIQ license is enabled on the monitored cluster and that the local InsightIQ user is enabled and configured with a password on the monitored cluster. This is done using the cluster web administration interface. Go to Help > About This Cluster > Activate license. Then verify that a local InsightIQ user is created and active by going to CLUSTER MANAGEMENT > Access Management > Users. Next to Users, click the down arrow to select System and then FILE: System. There should be a user named insightiq. You will have to enable this user and assign a password.

To add clusters to be monitored, go back to the InsightIQ web interface. Click Settings > Monitored Clusters, and then on the Monitored Clusters page, click Add Cluster. In the Add Cluster dialog box, click I want to monitor a new cluster. Type the name of an Isilon SmartConnect zone for the cluster to be monitored. In the Username box, type insightiq. In the Password box, type the local InsightIQ user's password exactly as it is configured on the monitored cluster, and then click OK. InsightIQ begins monitoring the cluster.

If the customer wants to email scheduled PDF reports, you must enable and configure InsightIQ to send outbound email through a specified email server. Click Settings > Email. The Configure Email Settings (SMTP) page appears. In the SMTP server box, type the host name or IP address of an SMTP server that handles email for the customer's organization.

Lesson 4: Using InsightIQ Overview

The interface for quota monitoring displays which quotas have been defined on the cluster, as well as actual usage rates. The storage administrator can use this as a trending tool to discover where quotas are turning into limiting factors before it happens, without necessarily scripting a lot of analysis on the front end. If SmartQuotas has not been licensed on the cluster, InsightIQ will report this fact.

The deduplication interface in InsightIQ displays several key metrics. The administrator can clearly see how much space has been saved, in terms of deduplicated data as well as data in general. The run of deduplication jobs is also displayed so that the administrator can correlate cluster activity with deduplication successes.

Default Reports

You can create custom live performance reports by clicking Performance Reporting > Create a New Performance Report. On the Create a New Performance Report page, specify a template to use for the new report. There are three types of reports: you can create a live performance report from a template based on the default settings, from a user-created performance report, or by selecting one of the standard reports included with InsightIQ.

Before you can view and analyze data-usage and data-properties information through InsightIQ, you must enable the File System Analytics feature. Click Settings > Monitored Clusters. The Monitored Clusters page appears. In the Actions column for the cluster for which you want to enable or disable File System Analytics, click Configure. The Configuration page displays. Click the Enable FSA tab. The Enable FSA tab displays.

Troubleshooting InsightIQ

There are four variables that combine to determine how data is laid out; the system's job is to lay data out in the most efficient, economical, highest-performing way possible.

The number of nodes in the cluster affects the data layout because data is laid out vertically across all nodes in the cluster.

The protection level also affects data layout because you can change the protection level of your data down to the file level, and the protection level of that individual file changes how it will be striped across the cluster.

The file size also affects data layout because the system employs different layout options for larger files than for smaller files to maximize efficiency and performance.

The disk access pattern modifies both prefetching and data layout settings associated with the node pool. The disk access pattern can be set at a file or directory level, so you are not restricted to using only one pattern for the whole cluster.

The data access pattern influences how a file is written to the drives during the write process.

Concurrency is used to optimize workflows with many concurrent users accessing the same files. The preference is that each protection stripe for a file is placed on the same drive or drives, depending on the requested protection level. For example, for a larger file with 20 protection stripes, each stripe unit from each protection stripe would prefer to be placed on the same drive in each node. Concurrency is the default data access pattern. Concurrency influences the prefetch caching algorithm to prefetch and cache a reasonable amount of anticipated associated data during a read access.

Streaming is used for large streaming workflow data such as movie or audio files. Streaming prefers to use as many drives as possible when writing multiple protection stripes for a file. Each file is written to the same sub pool within the node pool. With a streaming data access pattern, the protection stripes are distributed across the 6 drives per node in the node pool. This maximizes the number of active drives per node as the streaming data is retrieved. Streaming also influences the prefetch caching algorithm to be highly aggressive and gather as much associated data as possible. The maximum number of drives for streaming is six drives per node across the node pool for each file.

A random access pattern prefers using a single drive per node for all protection stripes for a file, just like a concurrency access pattern. With random, however, the prefetch caching request is minimal. Most random data does not benefit from prefetching data into cache.

Access can be set from the web administration interface or the command line. From the command line, the drive access pattern can be set separately from the data layout pattern:

isi set -a <default|streaming|random> -d <#drives> <path/file>

Options:
-a <value> - Specifies the file access pattern optimization setting (default, streaming, or random).
-d <@r drives> - Specifies the minimum number of drives that the file is spread across.
-l <value> - Specifies the file layout optimization setting (concurrency, streaming, or random). This is equivalent to setting both the -a and -d flags.
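A minimal sketch of the isi set usage just described (the paths are illustrative):

    isi set -l streaming /ifs/data/media            # layout optimization; equivalent to setting both -a and -d
    isi set -a random -d 1 /ifs/data/vm/disk1.vmdk  # access pattern and minimum drive count set separately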

Even though a client is connected to only one node, when that client saves data to the cluster, the write operation occurs on multiple nodes in the cluster. This is also true for read operations. A client is connected to only one node at a time; however, when that client requests a file from the cluster, the node to which the client is connected will not have the entire file locally on its drives. The client's node retrieves and rebuilds the file using the back-end InfiniBand network.

Troubleshooting File System Analytics

InsightIQ Logs
Lesson 3

File System Analytics Export

All files 128 KB or less are mirrored. For a protection strategy of N+1, a 128 KB file would have 2X mirroring: the original data and one mirrored copy. We will see how this is applied to different file sizes.

Cluster Monitoring Commands

To view information on the cluster, critical events, cluster job status, and the basic identification, statistics, and usage, run isi status at the CLI prompt. The isi devices command displays information about devices in the cluster and changes their status. There are multiple actions available, including adding drives and nodes to your cluster.

The isi statistics command has approximately 1,500 combinations of data you can display as statistical output of cluster operations.

The isi statistics command provides a set of cluster and node statistics. The statistics collected are stored in an sqlite3 database under the /ifs folder on the cluster. Additionally, other Isilon services, such as InsightIQ, the web administration interface, and SNMP, gather needed information using the isi statistics command. The isi statistics command enables you to view cluster throughput based on connection type, protocol type, and open files per node. In the background, isi_stats_d is the daemon that performs much of the data collection.
InsightIQ and isi statistics
isi statistics gathers the same kind of information as InsightIQ, but presents the information in a different way.
Data layout is managed mostly the same way as requested protection. The exception is that data layout is not set at the node pool level. Settings are available in the default file pool policy, with SmartPools file pool policies, and can be set manually using either File System Explorer in the web administration interface or the isi set command in the CLI.
In the web administration interface, navigate to FILE SYSTEM > Storage Pools > File Pool Policies. To modify either the default policy or an existing file pool policy, click View / Edit next to the policy. To create a new file pool policy, click + Create a File Pool Policy. The I/O Optimization Settings section is located at the bottom of the page. To modify or set the data layout pattern, select the desired option under Data Access Pattern.
In the CLI, use the isi set command with the -l option followed by concurrency, streaming, or random.
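A minimal sketch of the -l form, using a hypothetical directory path:
# Set the layout optimization (equivalent to setting both -a and -d):
isi set -l streaming /ifs/data/video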
The actual protection applied to a file depends on the requested protection level, the size of the file, and the number of nodes in the node pool. Actual protection must meet or exceed the requested protection level, but may be laid out differently than the requested protection default layout.
Lesson 5: Statistics from the Command Line
– isi statistics system
– isi statistics protocol
– isi statistics client
– isi statistics drive
To display usage help and get more information on isi statistics, run man isi statistics from any node.
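The subcommand forms listed above can be run directly from any node; a brief sketch (output formats vary by OneFS version):
# Per-protocol and per-client views of cluster activity:
isi statistics protocol
isi statistics client
# Full option reference:
man isi statistics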
isi get output
Actual protection setting for file
The actual protection nomenclature is represented differently than requested protection when viewing the output showing actual protection from the isi get -D or isi get -DD command.
To find the protection setting from the CLI, the isi get command provides detailed file or directory information. The primary options are -d <path> for directory settings and -DD <path>/<filename> for individual file settings.
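A short sketch of both forms, using hypothetical paths:
# Directory-level settings:
isi get -d /ifs/data
# Detailed settings for an individual file, including actual protection:
isi get -DD /ifs/data/file1.txt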
There are several methods that Isilon clusters use for caching. Each storage node contains standard DRAM (between 4GB and 256GB), and this memory is primarily used to cache data that is on that particular storage node and is actively being accessed by clients connected to that node. The use of SSDs for cache is optional but enabled by default.
Caching maintains a copy of metadata and/or the user data blocks in a location other than primary storage. The copy is used to accelerate access to the data by placing the copy on a medium with faster access than the drives. Since cache is a copy of the metadata and user data, any data contained in cache is temporary and can be discarded when no longer needed. Cache in OneFS is divided into levels, and each level serves a specific purpose in read and write transactions.
ESRS stands for EMC Secure Remote Support. It is a tool that enables EMC's support staff to perform remote support and maintenance tasks.
Both L1 cache and L2 cache are managed and maintained in RAM.
ESRS Principles
Lesson 6: ESRS Overview
Caching in OneFS consists of the client-side L1 cache and write coalescer, and the node-side L2 storage cache.
L3 cache interacts with the L2 cache and is contained on SSDs. Each cache has its own specialized purpose, and they work together to provide performance improvements across the entire cluster.
L1 cache specifically refers to read transaction requests, or when a client requests data from the cluster.
Level 1, or L1, cache is the client-side cache. It is the immediate buffer on the node connected to the client and is involved in any immediate client data transaction.
Related to L1 cache is the write cache, or write coalescer, that buffers write transactions from the client to be written to the cluster. The write coalescer collects the write blocks and performs the additional process of optimizing the write to disk. The write cache is flushed after successful write transactions.
Level 2, or L2, cache is the storage-side or node-side buffer. L2 cache stores blocks from previous read and write transactions, buffers write transactions to be written to disk, and prefetches anticipated blocks for read requests, sometimes referred to as read-ahead caching.
For write transactions, L2 cache works in conjunction with the NVRAM journaling process to ensure protected committed writes. L2 cache is flushed by the age of the data as L2 cache becomes full.
L2 cache is node specific. L2 cache interacts with the data contained on the specific node.
Like L2 cache, L3 cache is node specific and only caches data associated with the specific node. Advanced algorithms are used to determine the metadata and user data blocks cached in L3.
Level 3, or L3, cache provides an additional level of storage node-side cache, utilizing the node's SSDs as read cache. SSD access is slower than access to RAM and relatively slower than L2 cache, but significantly faster than access to data on HDDs.
When L3 cache becomes full and new metadata or user data blocks are loaded into L3 cache, the oldest existing blocks are flushed from L3 cache.
The L1 cache is connected to the L2 cache on all of the other nodes and within the same node. The connection to other nodes occurs over the InfiniBand internal network when data contained on those nodes is required for read or write. The L2 cache on the node connects to the disk storage on the same node. The L3 cache is connected to the L2 cache and serves as a read-only buffer. L3 cache is spread across all of the SSDs in the same node and is enabled per node pool.
Displayed is a diagram of a seven-node cluster divided into two node pools, with a detailed view of one of the nodes.
Hadoop is an open source software project that enables the distributed processing of large data sets across clusters of commodity servers. Hadoop clusters can be dynamically scaled up and down based on the available resources and the required service levels.
Hadoop has emerged as a tool of choice for big data analytics, but there are reasons to use it in a typical enterprise environment to analyze existing data to improve processes and performance, depending on your business model.
Hadoop has two core components: HDFS, a scalable file system used in the Hadoop cluster, and MapReduce, the compute algorithm that analyzes the data and collects the answers from the query.
The "Map" step does the following: the master node takes the input, divides it into smaller sub-problems, and distributes them to worker nodes. The worker node processes the smaller problem and passes the answer back to its master node.
The "Reduce" step does the following: the master node then collects the answers to all the sub-problems and combines them in some way to form the output, the answer to the problem it was originally trying to solve.
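As a loose single-machine analogy (not Hadoop itself), the classic word-count flow can be sketched in the shell, where the word-splitting stage stands in for Map, sort for Shuffle, and the counting stage for Reduce; input.txt is a hypothetical file:
# "Map": emit one word per line; "Shuffle": sort groups identical keys;
# "Reduce": uniq -c combines each group into a count.
tr -s ' ' '\n' < input.txt | sort | uniq -c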
Components of Conventional Hadoop
The NameNode holds the location information for every file in the cluster: the file system metadata.
The Secondary NameNode is a backup NameNode. It is a passive node that requires the administrator to intervene to bring it up to primary NameNode.
The DataNode server is where the data resides.
The Task Tracker is a node in the cluster that accepts tasks (Map, Reduce, and Shuffle operations) from a Job Tracker.
In a traditional Hadoop environment, there is no automated failover of the NameNode. In the event that the cluster loses the NameNode, administrative intervention is required to restore the 'secondary NameNode' into production.
In a traditional Hadoop environment, the data exists in silos. Production data is maintained on production servers and then copied in some way to a Landing Zone Server, which then imports or ingests the data into Hadoop/HDFS. It is important to note that the data on HDFS is not production data; it is copied from another source, and a process must be in place to update the HDFS data periodically with the production data information.
Accelerator nodes do not allocate memory for level 2 cache. This is because accelerator nodes are not writing any data to their local disks, so there are no blocks to cache. Instead, accelerator nodes use all their memory for level 1 cache to service their clients.
When a client requests a file, the node to which the client is connected uses the isi get command to determine where the blocks that comprise the file are located. The first file inode is loaded and the file blocks are read from disk on all other nodes. If the data isn't already in the L2 cache, data blocks are copied into L2.
In a traditional Hadoop-only environment, we have to remember that HDFS is a read-only file system. Populating Hadoop with data can be an exercise in patience.
Hadoop, like many open source technologies, such as UNIX and TCP/IP, was not created with security in mind. Hadoop evolved from other open-source Apache projects directed at building open source web search engines, and security was not a primary consideration. Kerberos is not a mandatory requirement for a Hadoop cluster, making it possible to run entire clusters without deploying any security.
Lesson 4
The read cache flow is displayed as a diagram.
The data lake represents a paradigm shift away from the linear data flow model.
L1 cache is specific to the node the client is connected to, and L2 cache and L3 cache are relative to the node the data is contained on.
When a client requests that a file be written to the cluster, the node to which the client is connected is the node that receives and processes the file. That node creates a write plan for the file, including calculating FEC. Data blocks assigned to the node are written to the NVRAM of that node. Data blocks assigned to other nodes travel through the InfiniBand network to their L2 cache, and then to their NVRAM. Once all nodes have all the data and FEC blocks in NVRAM, a commit is returned to the client. Data blocks assigned to this node stay cached in L2 for future reads of that file. Data is then written onto the spindles.
The layout decisions are made by the BAM on the node that initiated a particular write operation. The BAM decides where best to write the data blocks to ensure the file is properly protected. To do this, the BSW generates a write plan, which comprises all the steps required to safely write the new data blocks across the protection group. Once complete, the BSW will then execute this write plan and guarantee its successful completion. OneFS will not write files at less than the desired protection level, although the BAM will attempt to use an equivalent mirrored layout if there is an insufficient stripe width to support a particular FEC protection level.
Endurant Cache, or EC, is only for synchronous writes, or writes that require a stable write acknowledgement to be returned to the client. EC provides ingest and staging of stable synchronous writes. EC manages the incoming write blocks and stages them to stable, battery-backed NVRAM, ensuring the integrity of the write. EC also provides stable synchronous write loss protection by creating multiple mirrored copies of the data, further guaranteeing protection from single-node, and often multiple-node, catastrophic failures.
The other major improvement in overall node efficiency with synchronous writes comes from utilizing the Write Coalescer's full capabilities to optimize writes to disk. Endurant Cache was specifically developed to improve NFS synchronous write performance and write performance to VMware VMFS and NFS datastores.
Lesson 1: Demystifying Hadoop
Stages and stabilizes the write – At the point the ACK request is made by the client protocol, the EC Logwriter process mirrors the data block or blocks in the Write Coalescer to the EC log files in NVRAM, where the write is now protected and considered stable. Once stable, the acknowledgement, or ACK, is returned to the client. At this point the client considers the write process complete. The latency, or delay time, is measured from the start of the process to the return of the acknowledgement to the client.
The NameNode now resides on the Isilon cluster, giving it a complete and automated failover process. In the event that the node running as the NameNode fails, another Isilon node will immediately pick up the function of the NameNode. No data or metadata would be lost, since the distributed nature of Isilon will spread the metadata across the cluster. There is no downtime when this occurs and, most importantly, there is no need for administrative intervention to fail over the NameNode.
The Endurant Cache, or EC, ingests and stages stable synchronous writes.
Ingests the write into the cluster – The client sends the data block or blocks to the node's Write Coalescer with a synchronous write acknowledgement, or ACK, request. The client begins the write process by sending 4KB data blocks. The blocks are received into the node's Write Coalescer, which is a logical separation of the node's RAM similar to, but distinct from, L1 and L2 cache. Once the entire file has been received into the Write Coalescer, the Endurant Cache (EC) LogWriter process writes mirrored copies of the data blocks (with some log file-specific information added) in parallel to the EC Log Files, which reside in the NVRAM. The protection level of the mirrored EC Log Files is based on the Drive Loss Protection Level assigned to the data file to be written; the number of mirrored copies equals 2X, 3X, 4X, or 5X.
From this point forward, our standard asynchronous write process is followed. We let the Write Coalescer manage the write in the most efficient and economical manner, according to the Block Allocation Manager, or BAM, and the BAM Safe Write, or BSW, path processes.
The write is completed – Once the standard asynchronous write process is stable, with copies of the different blocks on each of the involved nodes' L2 cache and NVRAM, the EC Log File copies are de-allocated from NVRAM using the Fast Invalid Path process. The write is always secure throughout the process. Finally, the write to the hard disks is completed and the file copies in NVRAM are de-allocated. Copies of the writes in L2 cache will remain in L2 cache until flushed through one of the normal processes.
Module 6: Application Integration with OneFS
Hadoop Advantage using EMC Isilon
Data Protection – Hadoop does 3X mirroring for data protection and has no replication capabilities. Isilon supports snapshots, clones, and replication using its Enterprise features.
No Data Migration – Hadoop requires a landing zone for data to come to before using tools to ingest data to the Hadoop cluster. Isilon allows data on the cluster to be analyzed by Hadoop. Imagine the time it would take to push 100TB across the WAN and wait for it to migrate before any analysis can start. Isilon does in-place analytics, so no data moves around the network.
Security – Hadoop does not support kerberized authentication; it assumes all members of the domain are trusted. Isilon supports integrating with AD or LDAP and gives you the ability to safely segment access.
Dedupe – Hadoop natively 3X mirrors files in a cluster, meaning 33% storage efficiency. Isilon is 80% efficient.
Compliance and security – Hadoop has no native encryption. Isilon supports Self-Encrypting Drives, using ACLs and mode bits, access zones, and RBAC, and is SEC compliant.
Multi-Distribution Support – Each physical HDFS cluster can only support one distribution of Hadoop; we let you co-mingle physical and virtual versions of any Apache standards-based distros you like.
Scale Compute and Storage Independently – Hadoop pairs the storage with the compute, so if you need more space, you have to pay for more CPU that may go unused, or if you need more compute, you end up with lots of overhead space. We let you scale compute as needed and Isilon for storage as needed, aligning your costs with your requirements.
OneFS supports the Hadoop distributions.
The write process is displayed as a flow diagram.
Synchronous writes request an ACK after each piece of the file. The size of the piece is determined by the client and may not match the 8KB block size used by OneFS. If there is a synchronous write flag, the Endurant Cache process is used to accelerate having the write considered stable and protected in NVRAM, providing the ability to return the ACK to the client faster. After the synchronous write is secure, the file blocks follow the asynchronous write process.
If the write is asynchronous, the data blocks are processed from the Write Coalescer using the Block Allocation Manager (BAM) Safe Write, or BSW, process. This is where FEC is calculated; the node pool, sub pool, nodes, drives, and specific blocks to write the data to are determined; and the 128KB stripe units are formed.
To view the cache statistics, use the isi_cache_stats -v command. Statistics for L1, L2, and L3 cache are provided, with separate statistics for L3 data and L3 metadata.
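For example (command as given above; the -v flag expands per-level detail):
# L1/L2/L3 cache statistics, including separate L3 data and L3 metadata counters:
isi_cache_stats -v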
L3 cache is enabled by default for all new node pools added to a OneFS 7.1.1 cluster. New node pools containing SSDs are automatically enabled. A global setting is provided in the web administration interface to change the default behavior. Each node pool can be enabled or disabled separately. L3 cache is either on or off; no other visible configuration settings are available.
L3 cache consumes all SSDs in the node pool when enabled. L3 cache cannot coexist with other SSD strategies on the same node pool: no metadata read acceleration, no metadata read/write acceleration, and no data on SSD. SSDs in an L3 cache enabled node pool cannot participate as space used for GNA either.
NFS and SMB only log post events. They are delivered to the isi_audit_d service and stored permanently on disk in the log storage. We have two consumer daemons that pull the events from disk and deliver them: the first is isi_audit_syslog, which delivers auditing to legacy clients, and the second is isi_audit_cee, which delivers the audit events to the CEE.
Lesson 2: Establishing Audit Capabilities
Clients can access their files via any node in the cluster because the nodes communicate with each other via the InfiniBand back-end to locate and move data. Any node may service requests from any front-end port. There are no dedicated 'controllers'. Clients can connect to different nodes based on performance needs.
Isilon nodes can have up to four front-end or external networking adapters, depending on how the customer configured the nodes. The external adapters are labelled ext-1, ext-2, ext-3, ext-4, 10gige-1, and 10gige-2, and can consist of 1 GigE or 10 GigE ports depending on the configuration of the node.
Using the isi networks list ifaces -v command, you can see both the interface name and its associated NIC name. For example, ext-1 would be an interface name and em1 would be a NIC name. NIC names are required if you want to do a tcpdump and may be required for additional command syntax.
Administrators can choose the zone they wish to audit using the isi zone zones modify <zonename> command, and they can select what events within the zone they wish to forward.
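For example, to map interface names to NIC names before running a tcpdump:
# List interfaces with their associated NIC names (e.g., ext-1 -> em1):
isi networks list ifaces -v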
In OneFS, if the configuration audit topic is selected, then by default all data, regardless of the zone, is logged in the audit_config.log, which is in the /var/log directory.
Syslog is configured with an identity of audit_protocol. By default, all protocol events are forwarded to the audit_protocol.log file that is saved to the /var/log directory, regardless of the zone in which they originated.
To enable and manage audit from the CLI, run the isi audit settings command.
Isilon supports five aggregation types: Link Aggregation Control Protocol (LACP), Fast EtherChannel (FEC), Legacy Fast EtherChannel (FEC) mode, Active/Passive Failover, and Round-Robin Tx.
LACP is the preferred method for link aggregation on an Isilon cluster. LACP monitors the link status and will fail traffic over if a link has failed. LACP balances outgoing traffic across the active ports based on hashed protocol header information and accepts incoming traffic from any active port. Isilon is passive in the LACP conversation and listens to the switch to dictate the conversation parameters.
Fast EtherChannel balances outgoing traffic across the active ports based on hashed protocol header information and accepts incoming traffic from any active port. The hash includes the Ethernet source and destination address and, if available, the VLAN tag and the IPv4/IPv6 source and destination address.
Legacy Fast EtherChannel (FEC) mode is supported in earlier versions of OneFS and supports static aggregated configurations.
You cannot NIC aggregate mixed interface types, meaning that a 10 GigE must be combined with another 10 GigE, and not with a 1 GigE. Also, the aggregated NICs must reside on the same node.
Link aggregation, also known as NIC aggregation, is an optional IP address pool feature that allows you to combine the bandwidth of a single node's physical network interface cards into a single logical connection.
In OneFS 7.1.1, a new audit log compression algorithm was added on file roll over.
Round-Robin Tx can cause packet reordering and result in latency issues. One indicator of the issue is if having fewer links involved actually increases network throughput.
Isilon supports the following audit vendors: Northern Storage Suite, Stealthbits Technologies, and Symantec Data Insight. This support is not specific to OneFS 7.2.
When planning link aggregation, remember that pools that use the same aggregated interface cannot have different aggregation modes; you cannot select LACP for one pool and Round Robin for another pool if they are using the same two external interfaces.
A node's external interfaces cannot be used by an IP address pool in both an aggregated configuration and as individual interfaces. You must remove a node's individual interfaces from all pools before configuring an aggregated NIC.
OneFS uses link aggregation primarily for NIC failover purposes. Both NICs are used for client I/O, but the two channels are not bonded into a single 2 Gigabit link. Each NIC is serving a separate stream or conversation between the cluster and a single client. You will need to remove any single interfaces if they are part of the aggregate interface; they cannot co-exist.
SmartPools is a software module that enables administrators to define and control file management policies within a OneFS cluster.
Using storage pools, multiple tiers of Isilon storage nodes (including S-Series, X-Series, and NL-Series) can all co-exist within a single file system, with a single point of management. By using SmartPools, administrators can specify exactly which files they want to live on particular node pools and tiers.
SmartPools is used to manage global settings for the cluster, such as L3 cache enablement status, global namespace acceleration (GNA) enablement, virtual hot spare (VHS) management, global spillover settings, and more.
A node pool is used to describe a group of similar nodes. There can be from three up to 144 nodes in a single node pool. All the nodes with identical hardware characteristics are automatically grouped in one node pool. A node pool is the lowest granularity of storage space that users manage.
Multiple node pools with similar performance characteristics can be grouped together into a single tier with the licensed version of SmartPools.
File pool policies are used to determine where data is placed, how it is protected, and which other policy settings are applied, based on the user-defined and default file pool policies. The policies are applied in order through the SmartPools job.
File pool policies are user-created policies used to change the storage pool location, requested protection settings, and I/O optimization settings. File pool policies add the capability to modify the settings at any time, for any file or directory.
Files and directories are selected using filters, and actions are applied to files matching the filter settings. The management is file-based, not hardware-based.
LNI numbering corresponds to the physical positioning of the NIC ports as found on the back of the node. LNI mappings are numbered from left to right starting at the back of the node.
NIC names correspond to the network interface name as shown in command-line interface tools such as ifconfig and netstat.
Lesson 1
Basic vs Advanced
Virtual LAN (VLAN) tagging is an optional front-end network subnet setting that enables a cluster to participate in multiple virtual networks.
Enabling the Isilon cluster to participate in a VLAN provides the following advantages: multiple cluster subnets are supported without multiple network switches, and security and privacy are increased because network traffic across one VLAN is not visible to another VLAN.
Ethernet interfaces can be configured as either access ports or trunk ports. An access port can have only one VLAN configured on the interface; it can carry traffic for only one VLAN. A trunk port can have two or more VLANs configured on the interface; it can carry traffic for several VLANs simultaneously.
With unlicensed SmartPools, we have a one-tier policy of anywhere, with all node pools tied to that storage pool target through the default file pool policy.
Subnet configuration is the highest level of network configuration below cluster configuration. Before OneFS 7.2, one quirk of OneFS's subnet configuration was that although each subnet can have a different default gateway, OneFS only uses the highest priority gateway configured in all of its subnets, falling back to a lower priority one only if the highest priority one is unreachable.
Another challenge prior to OneFS 7.2 was that we had no metrics to prefer a 10 GigE interface over a 1 GigE, so if both a 1 GigE and a 10 GigE were in the same subnet, although traffic might arrive on the 10 GigE network, it might go out the 1 GigE interfaces, which could reduce client I/O for customers that were unaware of this.
Asymmetric routing has also been an issue prior to OneFS 7.2. Asymmetric routing means that packets might take one path from source to target, but a completely different path to get back. UDP supports this, but TCP does not; this means that most protocols will not work properly. Asymmetric routing often causes issues with SyncIQ when dedicated WAN links for data replication are present.
If enabled, source-based routing (SBR) is applied across the entire cluster. It automatically scans your network configuration and creates rules that force client traffic to be sent through the gateway of the source subnet. Outgoing packets are routed via their source IP address. If you make modifications to your network configuration, SBR adjusts its rules. SBR is configured as a cluster-wide setting that is enabled via the CLI. SBR rules take priority over static routes. SBR was developed to be enabled or disabled as seamlessly as possible.
Checking Storage Pools Health
If SmartPools is not licensed, a Caution message is displayed. A similar Caution notification is displayed if the requested protection for a node pool is under the suggested protection. If there are tiers created without any assigned node pools, a Caution warning is displayed. The Needs Attention notifications are displayed for such events as node pools containing failed drives or filled over the capacity threshold limits, or when file pool policies target a node pool that no longer exists.
Each node pool must contain at least three nodes. If you have fewer than three nodes, the node pool is considered to be under-provisioned. If you submit a configuration for a node pool that contains fewer than three nodes, the web administration interface will notify you that the node pool is under-provisioned. The cluster will not store files on an under-provisioned node pool.
All node pools in a tier and all file pool policies targeting a tier should be removed before the tier is deleted. When a tier that still contains node pools is deleted, the node pools are removed from the tier and listed as node pools. Any file pool policies targeting the deleted tier will generate notifications and require modification by the administrator.
SmartPools Configuration
SBR mitigates how previous versions of OneFS only used the highest priority gateway. Source-based routing ensures that outgoing client traffic (from the cluster) is directed through the gateway of the source subnet.
The Domain Name System, or DNS, is a hierarchical distributed database.
The top level of the DNS architecture is called the ROOT domain, and it is represented by a single dot (.).
Below the ROOT domain are the Top-Level Domains. These domains are used to represent companies, educational facilities, non-profits, and country codes: .com, .edu, .org, .us, .uk, .ca, etc., and are managed by a Name Registration Authority.
The Secondary Domain would represent the unique name of the company or entity, such as EMC, Isilon, Harvard, MIT, etc.
The last record in the tree is the HOST record, which indicates an individual computer or server.
Domain names are managed under a hierarchy headed by the Internet Assigned Numbers Authority (IANA), which manages the top of the DNS tree by administrating the data in the root nameservers.
Node Compatibility
Create Compatibility
In the CLI, use the command isi storagepool compatibilities active create with arguments for the old and new node types. The changes to be made are displayed in the CLI. You must accept the changes by entering yes, followed by ENTER, to initiate the node compatibility.
An A record maps a hostname to a specific IP address to which the user would be sent for each domain or subdomain. It is simple name-to-IP resolution. For example, a server by the name of server7 would have an A record that mapped the hostname server7 to the IP address assigned to it: Server7.support.emc.com A 192.168.15.12
A Fully Qualified Domain Name, or FQDN, is the DNS name of an object in the DNS hierarchy. A DNS resolver query must resolve a FQDN to its IP address so that a connection can be made across the network or the internet. An example of a FQDN looks like this: Server7.support.emc.com.
In DNS, a FQDN will have an associated HOST or A record (AAAA if using IPv6) mapped to it so that the server can return the corresponding IP address.
Secondary domains are controlled by companies, educational institutions, etc., whereas the responsibility for management of most top-level domains is delegated to specific organizations by the Internet Corporation for Assigned Names and Numbers (ICANN), which contains a department called the Internet Assigned Numbers Authority (IANA).
Delete Compatibility
In the CLI, use the command isi storagepool compatibilities active delete with the compatibility ID number as an argument. The changes to be made will be displayed. You must accept the changes by entering yes, followed by ENTER, to remove the node compatibility.
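A sketch of both compatibility commands; the node types and the compatibility ID below are hypothetical values:
# Create a compatibility between an older and newer node type:
isi storagepool compatibilities active create S200 S210
# Remove an existing compatibility by its ID:
isi storagepool compatibilities active delete 1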
NS records indicate which name servers are authoritative for the zone or domain. NS records are primarily used by companies that wish to divide their domain into subdomains. Subdomains indicate that you are delegating a portion of your domain name to a different group of name servers. You create NS records to point the name of this delegated subdomain to different name servers.
For example, if you have a domain called Mycompany.com and you want all DNS lookups for Seattle.Mycompany.com to go to a server located in Seattle, you would create an NS record that maps Seattle.Mycompany.com to the name server in Seattle with a hostname of SrvNS, so the mapping looks like: Seattle.Mycompany.com NS SrvNS.Mycompany.com
The Directory Protection setting is configured to protect directories of files at one level higher than the data.
SmartPools Settings
GNA enables SSDs to be used for cluster-wide metadata acceleration, using SSDs in one part of the cluster to store metadata for nodes that have no SSDs. The result is that critical SSD resources are maximized to improve performance across a wide range of workflows. Global namespace acceleration can be enabled if 20% or more of the nodes in the cluster contain SSDs and 1.5% or more of the total cluster storage is SSD-based. The recommendation is that at least 2.0% of the total cluster storage is SSD-based before enabling global namespace acceleration. If you go below the 1.5% SSD total cluster space capacity requirement, GNA is automatically disabled and all GNA metadata is disabled. If you SmartFail a node containing SSDs, the SSD total size percentage or the percentage of nodes containing SSDs could drop below the minimum requirement, and GNA would be disabled.
The Global namespace acceleration setting enables file metadata to be stored on node pool SSD drives, and requires that 20% of the disk space be made up of SSD drives.
DNS Name Resolution and Resolvers
When a client needs to resolve a Fully Qualified Domain Name (FQDN), it follows these steps:
Lesson 1
SmartConnect is a client load balancing feature that allows segmenting of the nodes by performance, department, or subnet. SmartConnect deals with getting the clients from their devices to the correct front-end interface on the cluster.
Once the client is at the front-end interface, the associated access zone then authenticates the client against the proper directory service, whether that is external, like LDAP and AD, or internal to the cluster, like the local or file providers.
Access zones do not dictate which front-end interface the client connects to; an access zone only determines which directory will be queried to verify authentication and which shares the client will be able to view. Once authenticated to the cluster, mode bits and ACLs (access control lists) dictate the files, folders, and directories that can be accessed by this client. Remember, when the client is authenticated, Isilon generates an access token for that user.
The access token contains all the permissions and rights that the user has. When a user attempts to access a directory, the access token will be checked to verify they have the necessary rights.
In OneFS 7.0.x, the maximum number of supported access zones is five. As of OneFS 7.1.1, the maximum number of supported access zones is 20.
SmartConnect is a client connection balancing management feature (module) that enables client connections to be balanced across all or selected nodes in an Isilon cluster. It does this by providing a single virtual host name for clients to connect to, which simplifies connection mapping.
It provides load balancing and dynamic NFS failover and failback of client connections across storage nodes to provide optimal utilization of the cluster resources. SmartConnect eliminates the need to install client-side drivers, enabling administrators to manage large numbers of clients in the event of a system failure.
SmartConnect zones allow granular control of where a connection is directed. An administrator can segment the cluster by workflow, allowing specific interfaces within a node to support different groups of users.
SmartConnect simplifies client connection management. Based on user-configurable policies, SmartConnect Advanced applies intelligent algorithms (e.g., CPU utilization, aggregate throughput, connection count, or round robin) and distributes clients across the cluster to optimize client performance. SmartConnect can be configured into multiple zones that can be used to ensure different levels of service for different groups of clients. All of this is transparent to the end-user.
The Virtual hot spare (VHS) option reserves the free space needed to rebuild the data if a disk or node failure occurs. Up to four full drives can be reserved. If you choose the Reduce amount of available space option, free space calculations do not include the space reserved for the virtual hot spare. The reserved VHS free space can still be used for writes unless you select the Deny new data writes option. If these first two VHS options are enabled, it is possible for the file system use to report at over 100%.
VHS reserved space allocation is defined using these options:
– A minimum number of virtual drives in each node pool (1-4)
– A minimum percentage of total disk space in each node pool (0-20 percent)
– A combination of minimum virtual drives and total disk space. The larger number of the two settings determines the space allocation, not the sum of the numbers. If you configure both settings, the enforced minimum value satisfies both requirements.
The Enable global spillover section controls whether the cluster can redirect write operations to another storage pool if the target storage pool is full; otherwise the write operation fails.
SmartPools Action Settings give you a way to enable or disable managing requested protection settings and I/O optimization settings. If the box is unchecked (disabled), then SmartPools will not modify or manage settings on the files. The option Apply to files with manually managed protection provides the ability to override any manually managed requested protection setting or I/O optimization. This option can be very useful if manually managed settings were made using File System Explorer or the isi set command.
Perhaps a client with a 9-node cluster containing three S-nodes, three X-nodes, and three NL-nodes wants their Research team to connect directly to the S-nodes to utilize a variety of high I/O applications. The administrators can then have the Sales and Marketing users connect to the front-end of the X-nodes to access their files.
The first external IP subnet was configured during the initialization of the cluster. The initial default subnet, subnet0, is always an IPv4 subnet. Additional subnets can be configured as IPv4 or IPv6 subnets. The first external IP address pool is also configured during the initialization of the cluster. The initial default IP address pool, pool0, was created within subnet0. It holds an IP address range and a physical port association.
IP address pools partition a cluster's external network interfaces into groups, or pools, of IP address ranges in a subnet, enabling you to customize how users connect to your cluster. Pools control connectivity into the cluster by allowing different functional groups, such as sales, RND, marketing, etc., access into different nodes. This is very important in those clusters that have different node types.
The file pool policies are listed and applied in the order of that list. Only
one file pool policy can apply to a file, so after a matching policy is
found, no other policy is evaluated for that file. The default file pool
policy is always last in the ordered list of enabled file pool policies.
The SmartPools File Pool Policies page displays currently configured file
pool policies and available template policies. You can add, modify, delete,
and copy file pool policies in this section. The Template Policies section
lists the available templates that you can use as a baseline to create new
file pool policies.
File pool policies are applied to the cluster by the SetProtectPlus job, or the SmartPools job if SmartPools is licensed. By default, this job runs at 22:00 hours every day at a low priority.
SmartConnect is available in basic (unlicensed) and advanced (licensed) versions.
With licensed SmartPools multiple file pool policies can be created to manage file
and directory storage behavior. By applying file pool policies to the files and
directories, files can be moved automatically from one storage pool to another within
the same cluster. File pool policies provide a single point of management to meet
performance, requested protection level, space, cost, and other requirements.
SmartConnect Components
The SmartConnect service IP (SIP) answers queries from DNS. There can be multiple SIPs per cluster, and they will reside on the node with the lowest array ID for their node pool. If you know the IP address of the SIP and wish to know just the zone name, you can use isi_for_array ifconfig -a | grep <IP of SIP>, and it will show you just the zone that the SIP is residing within.
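For example, with a hypothetical SIP address of 10.10.10.10:
# Find which node and zone the SIP resides on:
isi_for_array ifconfig -a | grep 10.10.10.10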
Lesson 2
You must configure the network DNS server to forward cluster name resolution requests to the SmartConnect service on the cluster. You can configure SmartConnect name resolution on a BIND server or a Microsoft DNS server. Both types of DNS server require a new name server, or NS, record be added to the existing authoritative DNS zone to which the cluster belongs.
Configure SmartConnect
To modify the default file pool policy, click File System, click Storage Pools, and then click the File Pool Policies tab. On the File Pool Policies page, next to the default policy, click View / Edit. After finishing the configuration changes, you need to submit and then confirm your changes.
The default file pool policy is defined under the default policy. The individual settings in the default file pool policy apply to all files that do not have that setting configured in another file pool policy that you create. You cannot reorder or remove the default file pool policy.
In the Microsoft Windows DNS Management Console, an NS record is called a New Delegation. On a BIND server, the NS record must be added to the parent zone (in BIND 9, the "IN" is optional). The NS record must contain the FQDN that you want to create for the cluster and the name you want the client name resolution requests to point to. In addition to an NS record, an A record (for IPv4 subnets) or AAAA record (for IPv6 subnets) that contains the SIP of the cluster must also be created.
A single SmartConnect zone does not support both IP versions, but you can create a zone for each IP version and give them duplicate names. So, you can have an IPv4 subnet and IP address pool with the zone name test.mycompany.com, and you can also define an IPv6 subnet using the same zone name.
Under I/O Optimization Settings, the SmartCache setting is enabled by default. SmartCache can improve performance by prefetching data for read operations. In the Data access pattern section, you can choose between Random, Concurrency, or Streaming. Random is the recommended setting for VMDK files. Random access works best for small files (<128 KB) and large files with random access to small blocks. This access pattern turns off prefetching. Concurrency is the default setting. It is the middle ground with moderate prefetching. Use concurrency access for file sets that get a mix of both random and sequential access. Streaming access works best for medium to large files that have sequential reads. This access pattern uses aggressive prefetching to improve overall read throughput.
A pool for data and a pool for snapshots can be specified. For data, you can choose any node pool or tier, and the snapshots can either follow the data or be assigned to a different storage location. You can also apply the cluster's default protection level to the default file pool, or specify a different protection level for the files that are allocated by the default file pool policy.
Cluster Name Resolution Process
File pool policies are a set of conditions that move data to specific targets, either a specific node pool or a specific tier. By default, all files in the cluster are written anywhere on the cluster as defined in the default file pool policy.
SmartConnect will load balance client connections across the front-end ports based on what the administrator has determined to be the best choice for their cluster. If a cluster is licensed, the administrator has four options to load balance: round robin, connection count, network throughput, and CPU usage. If the cluster does not have SmartConnect licensed, it will load balance by round robin only.
Connection count data, network throughput data, and CPU statistics are each collected every 10 seconds.
Each SmartConnect zone is managed as an independent SmartConnect environment; zones can have different attributes, such as the client connection policy.
File pool policies with path-based policy filters and storage pool location actions are executed during the write of a file matching the path criteria. Path-based policies are first executed when the SmartPools job runs; after that, they are executed during the matching file write. Files matching file pool policies with storage pool location actions and policy filters based on attributes other than path are written to the node pool with the highest available capacity, and then moved, if necessary to match a file pool policy, when the next SmartPools job runs. This ensures that write performance is not sacrificed for initial data placement.
File pool policies are used to filter files by attributes and values that you can specify. This feature, available with the licensed SmartPools module, helps to automate and simplify high file volume management. In addition to the storage pool location, the requested protection and I/O optimization settings can be set for files that match certain criteria.
File pool policy creation can be divided into two parts: specifying the file filter and specifying the actions.
When configuring IP address pools on the cluster, an administrator can choose either static pools or dynamic pools.
Static pools are best used for SMB clients because of the stateful nature of the SMB protocol. When an SMB client establishes a connection with the cluster, the session or "state" information is negotiated and stored on the server or node. If the node goes offline, the state information goes with it, and the SMB client would have to reestablish a connection to the cluster. SmartConnect is intelligent enough to hand out the IP address of an active node when the SMB client reconnects.
Due to the nature of the NFS protocol being a stateless protocol, in that the session or "state" information is maintained on the client side, if a node goes down, the IP address that the client is connected to will fail over (or move) to another node in the cluster.
If a node with client connections established goes offline, the behavior is protocol-specific. NFSv3 automatically re-establishes an IP connection as part of NFS failover. In other words, if the IP address gets moved off an interface because that interface went down, the TCP connection is reset. NFSv3 re-establishes the connection with the IP on the new interface and retries the last NFS operation. However, SMBv1 and v2 protocols are stateful. So when an IP is moved to an interface on a different node, the connection is broken because the state is lost. NFSv4 is stateful (just like SMB) and, like SMB, does not benefit from NFS failover.
Note: A best practice for all non-NFSv3 connections is to set the IP allocation method to static. Other protocols such as SMB and HTTP have built-in mechanisms to help the client recover gracefully after a connection is unexpectedly disconnected.
Module 3: Networking
Example: Multiple Static Pools
File pool policies are applied to the cluster by a job. When SmartPools is unlicensed, the SetProtectPlus job applies the default file pool policy. When SmartPools is licensed, the SmartPools job processes and applies all file pool policies. By default, the job runs at 22:00 hours every day at a low priority. The SetProtectPlus and SmartPools jobs are part of the restripe category for the job engine. Only one restripe job can run at a time.
Note: Select static as the IP allocation method to assign IP addresses as member interfaces are added to the IP pool. As members are added to the pool, this method allocates the next unused IP address from the pool to each new member. After an IP address is allocated, the pool member keeps the address indefinitely unless the member interface is removed from the network pool or the member node is removed from the cluster.
Dynamic IP allocation has the following advantages: it provides high availability because the IP address is available to clients at all times, and it enables NFS failover, which provides continuous NFS service on a cluster even if a node becomes unavailable.
If a node pool has SSDs, by default, L3 cache is enabled on the node pool. To use the SSDs for other strategies, the L3 cache must first be disabled on the node pool. Metadata read acceleration is the recommended SSD strategy. With metadata read acceleration, OneFS directs one copy of the metadata to SSDs, and the data and remaining metadata copies are directed to reside on HDDs. The benefit of using SSDs for file-system metadata includes faster namespace operations used for file lookups.
Example: Multiple Dynamic Pools
To help create file pool policies, OneFS also provides customizable template policies that can be used to archive older files, increase the protection level for specified files, send files that are saved to a particular path to a higher-performance disk pool, and change the access setting for VMware files. To use a template, click View / Use Template.
SmartQuotas is a software module used to limit, monitor, thin provision, and report disk storage usage at the user, group, and directory levels. Administrators commonly use file system quotas as a method of tracking and limiting the amount of storage that a user, group, or project is allowed to consume. SmartQuotas can send automated notifications when storage limits are exceeded or approached.
SmartQuotas allows for thin provisioning, also known as over-provisioning, which allows administrators to assign quotas above the actual cluster size. With thin provisioning, the cluster can be full even while some users or directories are well under their quota limit.
SmartQuotas accounting quotas can be used to:
– Track the amount of disk space that various users or groups use
– Review and analyze reports that can help identify storage usage patterns
– Intelligently plan for capacity expansions and future storage requirements
IP rebalancing and IP failover are features of SmartConnect Advanced.
The rebalance policy determines how IP addresses are redistributed when node interface members for a given IP address pool become available again after a period of unavailability. The rebalance policy can be:
Manual Failback – IP address rebalancing is done manually from the CLI using isi networks modify pool. This causes all dynamic IP addresses to rebalance within their respective subnet.
Automatic Failback – The policy automatically redistributes the IP addresses. This is triggered by a change to either the cluster membership, the external network configuration, or a member network interface.
Enforcement quotas support three subtypes and are based on administrator-defined thresholds:
Hard quotas limit disk usage to a specified amount. Writes are denied after the quota threshold is reached and are only allowed again if the usage falls below the threshold.
Soft quotas enable an administrator to configure a grace period that starts after the threshold is exceeded. After the grace period expires, the boundary becomes hard, and additional writes are denied. If the usage drops below the threshold, writes are again allowed.
Advisory quotas do not deny writes to the disk, but they can trigger alerts and notifications after the threshold is reached.
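As a sketch only: creating a directory quota with a hard threshold would look roughly like the following. The path, size, and exact flag names are assumptions and may differ by OneFS version:
# Hypothetical: hard-limit a shared directory to 1 TB
isi quota quotas create /ifs/data/shared directory --hard-threshold 1T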
Isilon provides secure multi-tenancy with access zones. Access zones do not require a separate license. Access zones enable you to partition cluster access and allocate resources to self-contained units, providing a shared tenant environment. You can configure each access zone with its own set of authentication providers, user mapping rules, and shares/exports.
Using access zones enables you to group these providers together and limit which clients can log in to the system.
With the release of OneFS 7.2, NFS users can authenticate through their own access zone, as NFS is now aware of the individual zones on a cluster, allowing you to restrict NFS access to data at the target level as you can with SMB zones.
Multiple access zones are particularly useful for server consolidation, for example, when merging multiple Windows file servers that are potentially joined to different untrusted forests.
Access Zone Capabilities
Lesson 2
The default access zone within the cluster is called the System access zone.
Each access zone has its own authentication providers (File, Local, Active Directory, or LDAP) configured. Multiple instances of the same provider can occur in different access zones.
There are five types of quotas that can be configured: directory, user, default user, group, and default group.
Directory quotas are placed on a directory and apply to all directories and files within that directory, regardless of user or group. Directory quotas are useful for shared folders where a number of users store data, and the concern is that the directory will grow unchecked because no single person is responsible for it.
User quotas are applied to individual users and track all data that is written to a specific directory. User quotas enable the administrator to control how much data any individual user stores in a particular directory.
Default user quotas are applied to all users, unless a user has an explicitly defined quota for that directory. Default user quotas enable the administrator to apply a quota to all users, instead of individual user quotas.
Group quotas are applied to groups and limit the amount of data that the collective users within a group can write to a directory. Group quotas function in the same way as user quotas, except for a group of people instead of individual users.
Default group quotas are applied to all groups, unless a group has an explicitly defined quota for that directory. Default group quotas operate like default user quotas, except on a group basis.
You should not configure any quotas on the root of the file system (/ifs), as it could result in significant performance degradation.
The options are:
1. Default: The default setting is to only track user data, which is just the data that is written by the user. It does not include any data that the user did not directly store on the cluster.
2. Snapshot Data: This option tracks both the user data and any associated snapshots. This setting cannot be changed after a quota is defined. To disable snapshot tracking, the quota must be deleted and recreated.
3. Data Protection Overhead: This option tracks both the user data and any associated FEC or mirroring overhead. This option can be changed after the quota is defined.
4. Snapshot Data and Data Protection Overhead: Tracks user data, snapshot data, and overhead, with the same restrictions.
Access Zone Architecture
Most quota configurations do not need to include overhead calculations. If you configure overhead settings, do so carefully, because they can significantly affect the amount of disk space that is available to users.
Module 5: All Storage
Quotas can also be configured to include the space that is consumed by When joining the Isilon cluster to an AD
Administration
snapshots. A single path can have two quotas applied to it: one without domain, the Isilon cluster is treated as a
snapshot usage (default) and one with snapshot usage. If snapshots are resource.
included in the quota, more files are included in the calculation.
If the System access zone is set to its defaults,
1. It allows a smaller initial purchase of capacity/nodes, the Domain Admins and Domain Users groups
and the ability to simply add more as needed, promoting a Local Provider - System from the AD domain are automatically added to
capacity on demand model. Doing this accomplishes two the cluster’s local Administrators and Users
2. It enables the administrator to set larger quotas initially things: groups, respectively.
and so that continually increases as users consume their Thin provisioning is a tool that enables an administrator to It’s important to note that, by default, the
allocated capacity are not needed. define quotas that exceed the capacity of the cluster. cluster’s local Users group also contains the AD
However, thin provisioning requires that cluster capacity use be domain group: Authenticated Users.
monitored carefully. With a quota that exceeds the cluster capacity, there
is nothing to stop users from consuming all available space, which can Now with the release of OneFS 7.2, NFS is zone-
result in service outages for all users and services on the cluster. aware, meaning the NFS exports and aliases can
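All of these quota types can also be created from the CLI. The following is a
rough sketch in the OneFS 7.2 command style; the path and thresholds are
examples only, and exact subcommands and flags vary by release, so check the
isi quota help output on your version before relying on it.

  # Directory quota with a 100 GB hard limit (path and size are examples):
  isi quota quotas create /ifs/data/engineering directory --hard-threshold 100G
  # Default-user quota under the same path, advisory only:
  isi quota quotas create /ifs/data/engineering default-user --advisory-threshold 10G
  # List the configured quotas:
  isi quota quotas list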
Lesson 2

Access Zones allow administrators to carve a large cluster into smaller
virtual clusters. In prior versions of OneFS, only SMB and HDFS were
zone-aware. With the release of OneFS 7.2, NFS is also zone-aware, meaning
that NFS exports and aliases can exist and be visible on a per-zone basis
instead of existing only within the System zone. This allows you to restrict
NFS access to data on a per-zone basis, as you can with SMB.

Access Zone Capabilities

Multiple access zones are particularly useful for server consolidation, for
example, when merging multiple Windows file servers that are potentially
joined to different untrusted forests.

The default access zone within the cluster is called the System access zone.
By default, the built-in System access zone includes a local provider and a
file provider, and it can contain one of each of the other authentication
providers. The System access zone supports the SMB, NFS, FTP, HTTP, and SSH
protocols. If only the System access zone is used, all joined or newly
created authentication providers are automatically contained within the
System access zone, and all SMB shares and NFS exports are also available
through it.

Each access zone has its own authentication providers (File, Local, Active
Directory, or LDAP) configured. OneFS enables you to configure multiple
authentication providers on a per-zone basis; in other words, more than one
instance of the LDAP, NIS, File, Local, and Active Directory providers is
possible per Isilon cluster, and multiple instances of the same provider can
occur in different access zones.

Access Zone Architecture

Each export is associated with only one zone, can only be mounted by clients
in that zone, and can only expose paths below the zone root. By default, any
export command applies to the client's current zone.

Local Provider - System

When joining the Isilon cluster to an AD domain, the Isilon cluster is
treated as a resource. If the System access zone is set to its defaults, the
Domain Admins and Domain Users groups from the AD domain are automatically
added to the cluster's local Administrators and Users groups, respectively.
It is important to note that, by default, the cluster's local Users group
also contains the AD domain group Authenticated Users.
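A new zone with its own base directory might be created from the CLI along
the following lines. This is a hypothetical sketch in the OneFS 7.2 style;
the zone name and path are invented, and the exact flags vary by release:

  # Create an access zone with its own base directory for a unique namespace:
  isi zone zones create zone2 --path /ifs/zone2
  # Review the configuration:
  isi zone zones view zone2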
Multiple access zones can be created to accommodate an enterprise
environment. It is a best practice to ensure that each of these access zones
has its own Zone Base Directory, to ensure a unique namespace per access
zone.

An access zone becomes an independent point for authentication and access to
the cluster. SMB shares that are bound to an access zone are only visible and
accessible to users connecting to the SmartConnect zone/IP address pool to
which the access zone is aligned. SMB authentication and access can be
assigned to any specific access zone. NFS may be accessed through each zone,
and NFS authentication can now occur in its own zone, because the NFS
protocol is zone-aware in OneFS 7.2.

Only one Active Directory provider can be configured per access zone. If you
connect the cluster to multiple untrusted AD environments, only one of these
AD providers can exist in a zone at one time.

Authentication Sources and Access Zones

There are three things to know about joining multiple authentication sources
through access zones.

First, the joined authentication sources do not belong to any zone; instead,
they are seen by zones, meaning that a zone does not own its authentication
sources. This allows other zones to also include an authentication source
that may already be in use by an existing zone.

Second, when joining AD domains, only join those that are not in the same
forest. Trusts within the same forest are managed by AD, and joining them
could allow unwanted authentication between zones.

Finally, there is no built-in check for overlapping UIDs. When two users in
the same zone, but from different authentication sources, share the same UID,
this can cause access issues.

There are also some best practices for configuring access zones.

First, administrators should create a separate /ifs tree for each access
zone. This enables overlapping directory structures to exist without
conflict, and provides a level of autonomous behavior without the risk of
unintentional conflict with other access zone structures.

Second, administrators should consider the System access zone exclusively as
an administration zone. To do this, they should remove all but the default
shares from the System access zone, and limit authentication into the System
access zone to administrators only. Each access zone then works with
exclusive access to its own shares, providing another level of access control
and data access isolation.

Isilon recommends joining the cluster to the LDAP environment before joining
AD, so that the AD users do not have their SIDs mapped to cluster-generated
UIDs. If the cluster is a new configuration and no client access has taken
place, the order (LDAP then AD, or AD then LDAP) does not matter, as there
have been no client SID-to-UID or UID-to-SID mappings.

SMB time is enabled by default and is used to maintain time synchronization
between the AD domain time source and the cluster. Nodes use NTP between
themselves to maintain cluster time. When the cluster is joined to an AD
domain, the cluster must stay in sync with the time on the domain controller;
otherwise, authentication may fail if the AD time and cluster time differ by
more than five minutes.

The best-case support recommendation is to not use SMB time and, if possible,
to use only NTP on both the cluster and the AD domain controller. The NTP
source on the cluster should be the same source as the AD domain controller's
NTP source. After an NTP server is established, setting the date or time
manually is not allowed. After a cluster is joined to an AD domain, adding a
new NTP server can cause time synchronization issues, because the NTP server
takes precedence over the SMB time synchronization with AD and overrides the
domain time settings on the cluster. If SMB time must be used, then NTP
should be disabled on the cluster and only SMB time used.

The Cluster Time property sets the cluster's date and time settings, either
manually or by synchronizing with an NTP server. Multiple NTP servers may be
defined. The first NTP server on the list is used first, with any additional
servers used only if a failure occurs.

Only one node on the cluster should be set up to coordinate NTP for the
cluster. This NTP coordinator node is called the chimer node. The chimer node
is configured by excluding all other nodes by their node number, using the
isi_ntp_config add exclude node# node# node# command.
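For example, on a five-node cluster where node 1 should act as the chimer,
the other four nodes are excluded by node number. The NTP server name below
is a placeholder, and the add server subcommand is an assumption to verify
against the isi_ntp_config usage on your release:

  # Make node 1 the only chimer by excluding nodes 2 through 5:
  isi_ntp_config add exclude 2 3 4 5
  # Point the cluster at the same NTP source the domain controller uses:
  isi_ntp_config add server ntp.example.com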
Lesson 3

You can use snapshots to protect data against accidental deletion and
modification. A OneFS snapshot is a logical pointer to data stored on a
cluster at a specific point in time. Snapshots target directories on the
cluster, and include all data within that directory, including any
subdirectories contained within.

To use SnapshotIQ, you must activate a SnapshotIQ license on the cluster.
However, some OneFS operations generate snapshots for internal system use
without requiring a SnapshotIQ license. If an application generates a
snapshot and a SnapshotIQ license is not configured, you can still view the
snapshot; however, all snapshots generated by OneFS operations are
automatically deleted when they are no longer needed. You can disable or
enable SnapshotIQ at any time.

SnapshotIQ captures Copy on Write (CoW) images. You can configure basic
functions for the SnapshotIQ application, including automatically creating or
deleting snapshots, and setting the amount of space that is assigned
exclusively to snapshot storage.

Because snapshots do not consume a set amount of storage space, there is no
requirement to pre-allocate space for creating a snapshot. You can choose to
store snapshots in the same or a different physical location on the cluster
than the original files. Snapshots are created almost instantaneously,
regardless of the amount of data contained in the snapshot. A snapshot is not
a copy of the original data, but only an additional set of pointers to the
original data, so at the time it is created, a snapshot consumes a negligible
amount of storage space on the cluster. Snapshots reference, or are
referenced by, the original file. If data is modified on the cluster, only
one copy of the changed data is made. This allows the snapshot to maintain a
pointer to the data that existed at the time the snapshot was created, even
after the data has changed. Snapshots only start to consume space when files
in the current version of the directory are changed or deleted.

The default limit is 20,000 snapshots. Snapshots should be set up for
separate, distinct, and unique directories. Do not snapshot the /ifs
directory itself; instead, create snapshots for the subdirectory structure
under /ifs. You can take snapshots at any point in the directory tree, and
each department or user can have their own snapshot schedule.

Permissions are preserved at the time of the snapshot. If the permissions or
owner of the current file change, this does not affect the permissions or
owner of the snapshot version.

Snapshot files can be found in two places.

The first is through the /ifs/.snapshot directory. This is a virtual
directory that allows you to see all the snapshots listed for the entire
cluster. Users can only open the .snapshot directories for which they already
have permissions; they are unable to open or view any .snapshot file for any
directory to which they do not already have access rights.

The second location is the .snapshot directory in the path where the snapshot
was taken. For example, if we snapshot a directory located at
/ifs/data/students/tina, we can view the hidden .snapshot directory through
the CLI or through a Windows Explorer window (with the view-hidden-files
attribute enabled); the path would look like /ifs/data/students/tina/.snapshot.
Snapshots are available in any directory in the path where a snapshot was
taken, such as /ifs/data/music/.snapshot, and OneFS remembers which .snapshot
directory you entered through.

Clones can be created on the cluster using the cp command, and do not require
you to license the SnapshotIQ module.

The isi snapshot list | wc -l command will tell you how many snapshots you
currently have on disk.

You can manage snapshots by using the web administration interface or the
command line. To manage SnapshotIQ in the web administration interface,
browse to the Data Protection tab, click SnapshotIQ, and then click Settings.
To manage SnapshotIQ at the command line, use the isi snapshot command:

  isi snapshot settings view
  isi snapshot settings modify

You can create snapshots either by configuring a snapshot schedule or by
manually generating an individual snapshot. Manual snapshots are useful if
you want to create a snapshot immediately, or at a time that is not specified
in a snapshot schedule. The most common method is to use schedules to
generate the snapshots. A snapshot schedule generates snapshots of a
directory according to a schedule; the benefit of scheduled snapshots is not
having to manually create a snapshot every time you would like one taken. You
can also assign an expiration period to the snapshots that are generated,
automating the deletion of snapshots after the expiration period.

If data is accidentally erased, lost, or otherwise corrupted or compromised,
any user with the Windows Shadow Copy Client installed locally on their
computer can restore the data from the snapshot file. To recover an
accidentally deleted file, right-click the folder that previously contained
the file, click Restore Previous Version, and then identify the specific file
you want to recover. To restore a corrupted or overwritten file, right-click
the file itself, instead of the folder that contains the file, and then click
Restore Previous Version. This functionality is enabled by default starting
in OneFS 7.0.
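As an illustration, a schedule with an expiration period might be created as
follows. This is a sketch in the OneFS 7.2 style; the schedule name, path,
naming pattern, and recurrence string are examples, and the exact flag names
vary by release:

  # Hourly snapshots of one directory, each expiring after two days:
  isi snapshot schedules create tina_hourly /ifs/data/students/tina \
      snap_%Y-%m-%d_%H%M "every day every 1 hours" --duration 2D
  # Count the snapshots currently on disk (as noted above):
  isi snapshot list | wc -l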
Authentication Provider Structure

The lsassd daemon mediates between the authentication protocols used by
clients and the authentication providers, which check their data repositories
to determine user identity and subsequent access to files.

Authentication Providers

The authentication providers handle communication with authentication
sources. These sources can be external, such as Active Directory (AD),
Lightweight Directory Access Protocol (LDAP), and Network Information Service
(NIS). The authentication source can also be located locally on the cluster,
or in password files that are stored on the cluster. Authentication
information for local users on the cluster is stored in /ifs/.ifsvar/sam.db.

Under FTP and HTTP, the Isilon cluster supports Anonymous mode, which allows
users to access files without providing any credentials, and User mode, which
requires users to authenticate to a configured authentication source.

LDAP can be used in mixed environments and is widely supported. It does not,
however, offer advanced features that exist in other directory services such
as Active Directory. Within LDAP, each entry has a set of attributes, and
each attribute has a name and one or more values associated with it, similar
to the directory structure in AD. Each entry consists of a distinguished name
(DN), which also contains a relative distinguished name (RDN). The base DN is
also known as a search DN, since a given base DN is used as the starting
point for any directory search. The top-level names almost always mimic DNS
names; for example, the top-level Isilon domain would be dc=isilon,dc=com for
Isilon.com.
The LDAP provider in an Isilon cluster supports the following features:

- Users, groups, and netgroups
- Configurable LDAP schemas; for example, the ldapsam schema allows NTLM
  authentication over the SMB protocol for users with Windows-like attributes
- Simple bind authentication, with or without SSL
- Redundancy and load balancing across servers with identical directory data
- Multiple LDAP provider instances for accessing servers with different user
  data
- Encrypted passwords

To enable the LDAP service, you must configure a base distinguished name
(base DN), a port number, and at least one LDAP server.

LDAP commands for the cluster begin with isi auth config ldap. To display a
list of these commands, run the isi auth config ldap list command at the CLI.

The ldapsearch command can be used to run queries against an LDAP server to
verify whether the configured base DN is correct, and the tcpdump command can
be used to verify that the cluster is communicating with the assigned LDAP
server.

Note: AD and LDAP both use TCP port 389. Even though both services can be
installed on one Microsoft server, the cluster can only communicate with one
of the services if they are both installed on the same server.
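For instance, both verification checks can be run from a node's shell; the
server name and base DN here are placeholders:

  # Verify that the configured base DN answers queries on the LDAP server:
  ldapsearch -x -H ldap://ldap.example.com -b "dc=isilon,dc=com" "(objectClass=*)" dn
  # Confirm that the cluster is actually talking to the LDAP server on port 389:
  tcpdump -n host ldap.example.com and port 389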
Active Directory (AD) is a directory service created by Microsoft that
controls access to network resources and that can integrate with Kerberos and
DNS technologies. The primary reason for joining the cluster to an AD domain
is to enable domain users to access cluster data. A cluster that joins a
domain becomes a domain resource and acts as a file server.

The AD authentication provider in an Isilon cluster supports domain trusts
and NTLM (NT LAN Manager) or Kerberos pass-through authentication. This means
that a user authenticated to an AD domain can access resources that belong to
any other trusted AD domain.

Before joining the domain, complete the following steps:

- Obtain the name of the domain to be joined.
- Use an account to join the domain that has the right to create a computer
  account in that domain.
- Include the name of the OU in which you want to create the cluster's
  computer account; otherwise, the default OU (Computers) is used.

NetBIOS requires that computer names be 15 characters or less. Two to four
characters are appended to the cluster name you specify to generate a unique
name for each node. If the cluster name is more than 11 characters, you can
specify a shorter name in the Machine Name box on the Join a Domain page.

To join the cluster to an AD domain, in the web administration interface,
click Access, and then click Authentication Providers. On the Join a Domain
page, type the name of the domain you want the cluster to join. Type the user
name of the account that has the right to add computer accounts to the
domain, and then type the account password.

The Enable Secure NFS check box enables users to log in using LDAP
credentials, but to do this, Services for NFS must be configured in the AD
environment.

When a cluster is destined to be used in a multiprotocol environment, connect
the cluster to the LDAP server first, before joining the AD domain, so that
proper relationships are established between UNIX and AD identities. Joining
AD first and then LDAP will likely create authentication challenges and
permissions issues that require additional troubleshooting.
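The same join can be scripted from the CLI. This is a hypothetical sketch in
the OneFS 7.2 style; the domain, account, and OU are placeholders, and the
flag names are assumptions to verify against your release:

  # Join the domain, placing the computer account in a specific OU:
  isi auth ads create example.com --user administrator --organizational-unit Storage
  # Confirm the provider is configured:
  isi auth ads list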
NIS provides authentication and uniformity across local area networks. The
NIS provider exposes the passwd, group, and netgroup maps from a NIS server.
Hostname lookups are also supported, and multiple servers can be specified
for redundancy and load balancing. NIS is different from NIS+, which Isilon
clusters do not support.

The Local provider supports authentication and lookup facilities for local
users and groups that have been defined and are maintained locally on the
cluster. It does not include system accounts such as root or admin. UNIX
netgroups are not supported in the Local provider. The Local provider can be
used in small environments, in UNIX environments that contain just a few
clients that access the cluster, or as part of a larger AD environment. The
Local provider plays a large role when the cluster joins an AD domain.

File Provider

The file provider enables you to supply an authoritative third-party source
of user and group information to the cluster. OneFS uses standard BSD
/etc/spwd.db and /etc/group database files as the backing store for the file
provider: the file provider supports the spwd.db format to provide fast
access to the data in the /etc/master.passwd file, and the /etc/group format
supported by most UNIX operating systems. The spwd.db file is generated by
running the pwd_mkdb command-line utility.

OneFS itself uses /etc/spwd.db and /etc/group files for the users and groups
associated with running and administering the cluster. These files do not
include end-user account information; you can use the file provider to manage
end-user identity information based on the format of these files.

The file provider pulls directly from two files formatted in the same manner
as /etc/group and /etc/passwd, and updates to the files can be scripted. To
ensure that all nodes in the cluster have access to the same version of the
file provider files, you should save the files to the /ifs/.ifsvar directory.
The file provider is used by OneFS to support the users root and nobody.

The file provider is useful in UNIX environments where passwd, group, and
netgroup files are synchronized across multiple UNIX servers.

Note: The built-in System file provider includes services to list, manage,
and authenticate against system accounts (for example, root, admin, and
nobody). Modifying the System file provider is not recommended.
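As a sketch, a custom password database might be staged under /ifs and
compiled with pwd_mkdb, the standard BSD utility the text mentions; the
directory and file names here are examples:

  # Compile spwd.db from a master.passwd-format file kept under /ifs/.ifsvar:
  pwd_mkdb -d /ifs/.ifsvar/custom /ifs/.ifsvar/custom/master.passwd
  # The file provider is then pointed at the generated spwd.db and group files.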
Lesson 4

Replication provides for making additional copies of data, and actively
updating those copies as changes are made to the source. Isilon's replication
feature is called SyncIQ. SyncIQ enables you to maintain a consistent backup
copy of your data on another Isilon cluster, and it offers automated failover
and failback capabilities that enable you to continue operations on another
Isilon cluster if a primary cluster becomes unavailable.

SyncIQ uses asynchronous replication. Asynchronous replication is similar to
an asynchronous file write: the target system passively acknowledges receipt
of the data and returns an ACK, and the data is then passively written to the
target. You must activate a SyncIQ license on both the primary and the
secondary Isilon clusters before you can replicate data between them.

SyncIQ creates and references snapshots to replicate a consistent
point-in-time image of a root directory, which is the source of the
replication. Metadata, such as access control lists (ACLs) and alternate data
streams (ADS), is replicated along with the data. SyncIQ uses snapshot
technology to take a point-in-time copy of the data on the source cluster
before starting each synchronization or copy job. This source-cluster
snapshot does not require a SnapshotIQ license. The first time a SyncIQ
policy is run, a full replication of the data from the source to the target
occurs. Subsequently, when the replication policy runs, only new and changed
files are replicated. When a SyncIQ job finishes, the system deletes the
previous source-cluster snapshot, retaining only the most recent one. The
retained snapshot is known as the last known good snapshot, and the next
incremental replications reference the snapshot tracking file maintained for
each SyncIQ domain.

You can configure SyncIQ to save historical snapshots on the target, but you
must license SnapshotIQ to do this.

If you require a writeable target, you can break the source/target
association. If the sync relationship is broken, a differential or full
synchronization job is required to re-establish the relationship.

Each cluster can contain both target and source directories, but a single
directory cannot be both a source and a target between the same two clusters
(to each other), as this could cause an infinite loop.

Replication policies are created on the source cluster. The replication
policies specify what data is replicated, where the data is replicated from
and to, and how often the data is replicated. Two clusters are defined in a
SyncIQ policy replication: the primary cluster holds the source root
directory, and the secondary cluster holds the target directory. The policy
is written on the primary cluster. On the primary, policies are accessed
under the Policies tab in the web administration interface; on the secondary,
they are accessed under the Local Targets tab. Failover operations are
initiated on the secondary cluster.

There is no limit to the number of SyncIQ policies that can exist on a
cluster; however, the recommended maximum is 100 policies. Only five SyncIQ
jobs can run at a time.

SyncIQ Job Process

SyncIQ jobs are the operations that do the work of moving the data from one
Isilon cluster to another. SyncIQ generates these jobs according to
replication policies. When a SyncIQ policy is started, SyncIQ generates a
SyncIQ job for the policy. A job is started manually or according to the
SyncIQ policy schedule.

When you create a SyncIQ policy, you must choose a replication type of either
sync or copy. Sync maintains a duplicate copy of the source data on the
target, and any files deleted on the source are removed from the target. Sync
therefore does not provide protection from file deletion, unless the
synchronization has not yet taken place. Copy maintains a duplicate copy of
the source data on the target, the same as sync; however, files deleted on
the source are retained on the target. In this way, copy offers file-deletion
protection, but not file-change protection.

During a full synchronization, SyncIQ transfers all data from the source
cluster, regardless of what data exists on the target cluster. Full
replications consume large amounts of network bandwidth and may take a very
long time to complete. A differential synchronization instead compares the
source and target data by doing tree walks on both sides; following the tree
walks, only the changed data is replicated, in place of a full data
synchronization. This is used to re-establish the synchronization
relationship between the source and target, and the differential
synchronization option is only executed the first time the policy is run.

Before you run the replication policy again after breaking an association,
you must enable a target compare initial sync, using the following command on
the primary: isi sync policies modify <policy name> target-compare-initial-sync
on. With target-compare-initial-sync on for a policy, the next time the
policy runs, the primary and secondary clusters do a directory tree walk of
the source and target directory to determine what is different.

Creating a Policy

There are five areas of configuration information required when creating a
policy: Settings, Source Cluster, Target Cluster, Target Snapshots, and
Advanced Settings.
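A minimal policy creation from the CLI might look like the following. This is
a sketch in the OneFS 7.2 style; the policy name, paths, target host, and
schedule string are placeholders, and the argument order and flags vary by
release:

  # Nightly sync of a source directory to a target cluster:
  isi sync policies create nightly_home sync /ifs/data/home target.example.com /ifs/data/home-dr --schedule "every day at 01:00"
  # Start a job for the policy on demand, then watch it:
  isi sync jobs start nightly_home
  isi sync jobs list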
Policy Assessment

The results of a policy assessment can be viewed in the web administration
interface by navigating to Data Protection > SyncIQ > Reports. The report can
also be viewed from the CLI using the command isi sync reports view
<policy name> <job id>.

Failover is the process of allowing clients to modify data on a target
cluster. Failover changes the target directory from read-only to read-write
status. Failover is managed per SyncIQ policy, and only those policies that
are failed over are modified. SyncIQ only changes the directory status; it
does not change the other operations required for client access to the data.
Network routing and DNS must be redirected to the target cluster, any
authentication resources such as AD or LDAP must be available to the target
cluster, and all shares and exports must be available on the target cluster
or be created as part of the failover process.

If the offline source cluster later becomes accessible again, you can fail
back to the original source cluster. Failback is the process of copying the
changes that occurred on the original target while failed over back to the
original source. This allows clients to access data on the source cluster
again, and resumes the normal direction of replication from the source to the
target. Each SyncIQ policy must be failed back; like failover, failback must
be selected for each policy. The same network changes must be made to restore
direct client access to the source cluster.

Failover revert is useful for instances when the source becomes available
sooner than expected. Failover revert allows administrators to quickly return
access to the source cluster, and to restore replication to the target. A
failover revert may occur even if data modification has occurred on the
target directories; however, if data has been modified on the original target
cluster, a failback operation must be performed to preserve those changes,
otherwise any changes to the target cluster data will be lost.

A failover revert is not supported for SmartLock directories.
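Failover and failback are also driven per policy from the CLI. The commands
below sketch the OneFS 7.2-era recovery workflow; the policy name is a
placeholder, and the exact recovery subcommands should be verified for your
release:

  # On the secondary cluster, make the target directory writable (failover):
  isi sync recovery allow-write nightly_home
  # Later, on the original source, prepare it to resync from the secondary (failback):
  isi sync recovery resync-prep nightly_home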
Managing SyncIQ Performance

If no source subnet:pool is specified, the replication job could potentially
use any of the external interfaces on the cluster. SyncIQ attempts to use all
available resources across the source cluster to maximize performance, and
this additional load may have an undesirable effect on other source cluster
operations or on client performance.

SyncIQ can also split large files across workers. Dividing a file is
necessary when the remaining file replication work is greater than or equal
to 20 MB in size; the number of file splits is limited only by the maximum of
40 SyncIQ workers per job. File splitting prevents SyncIQ jobs from dropping
to single-threaded behavior when the remaining work is one large file. The
result is better overall SyncIQ job performance, through greater efficiency
for large files and a decreased time to job completion.

File splitting is enabled by default, but only when both the source and
target cluster are at a minimum of OneFS 7.1.1. It can be disabled or enabled
on a per-policy basis using the command isi sync policies modify
<policy_name> disabled-file-split true or false: true to disable, false to
re-enable if it had been disabled.
Lesson 1

Interactions with an Isilon cluster have four layers in the process.

The first layer is the protocol layer. This may be Server Message Block
(SMB), Network File System (NFS), File Transfer Protocol (FTP), or some other
protocol, but this is how the cluster is actually reached.

The next layer is authentication. The user has to be identified using some
system, such as NIS, local files, or Active Directory.

The third layer is identity assignment. Normally this is straightforward and
based on the results of the authentication layer, but there are some cases
where identities have to be mediated within the cluster, or where roles are
assigned within the cluster based on a user's identity. We will examine some
of these details later in this module.

Finally, based on the established connection and authenticated user identity,
the file and directory permissions are evaluated to determine whether or not
the user is entitled to perform the requested data activities.

Authentication providers are used by OneFS to verify a user's identity, after
which users can be authorized to access cluster resources. When the cluster
receives an authentication request, lsassd searches the configured
authentication sources for matches to the incoming identity. If the identity
is verified, OneFS generates an access token. This token is not the same as
an Active Directory or Kerberos token, but an internal token that reflects
the OneFS identity management system. Access tokens form the basis of who you
are when performing actions on the cluster, and they supply the primary owner
and group identities to use during file creation. Access tokens are also
compared against permissions on an object during authorization checks.

OneFS supports three primary identity types, each of which can be stored
directly on the file system. These identity types are used when creating
files, checking file ownership or group membership, and performing file
access checks. The identity types supported by OneFS are:

- User identifier (UID): a 32-bit string that uniquely identifies users on
  the cluster. UIDs are used in UNIX-based systems for identity management.
- Group identifier (GID): for UNIX, serves the same purpose for groups that
  the UID does for users.
- Security identifier (SID): a unique identifier that begins with a domain
  identifier and ends with a 32-bit relative identifier (RID). Most SIDs take
  the form S-1-5-21-<A>-<B>-<C>-<RID>, where <A>, <B>, and <C> are specific
  to a domain or computer, and <RID> denotes the object inside the domain.
  The SID is the primary identifier for users and groups in Active Directory.
Isilon handles multiple user identities by mapping them internally to unified
identities.

Mappings are stored in a cluster-distributed database called the ID Mapper.
The ID provider builds the ID Mapper based on incoming source and target
identity type: UID, GID, or SID. Only authoritative sources are used to build
the ID Mapper. Each mapping is stored as a one-way relationship from source
to destination. If a mapping is created, or exists, it has to map both ways;
to record these two-way mappings, they are presented as two complementary
one-way mappings in the database.

There are several kinds of mappings:

- Algorithmic mappings are created by adding a UID or GID to a well-known
  base SID, resulting in a "UNIX SID." These mappings are not persistently
  stored in the ID Mapper database.
- External mappings are derived from identity sources outside of OneFS. For
  example, Active Directory can store a UID or GID along with a SID; when the
  SID is retrieved from AD, the UID/GID is also retrieved and used for
  mappings on OneFS.
- Manual mappings are set explicitly by running the isi auth mapping command
  at the command line. Manual mappings are stored persistently in the ID
  Mapper database. The isi auth mapping new command allocates a mapping
  between a source persona and a target type (UID, GID, SID, or principal).
- Automatic mappings are generated if no other mapping type can be found. In
  this case, a SID is mapped to a UID or GID out of the default range of
  1,000,000-2,000,000. This range is assumed to be otherwise unused, and a
  check is made only to ensure there is no mapping from the given UID before
  it is used.

Identity Mapping Rules

When an incoming authentication request arrives, the authentication daemon
attempts to find the correct UID/GID to store on disk by checking for the
following ID mapping types in this specified order:

1. If the source has a UID/GID, use it. This occurs when the incoming request
   comes from an AD environment with Services for NFS or Services for UNIX
   installed. This service adds additional attributes to the AD user
   (uidNumber) and group (gidNumber) objects; when you configure the service,
   you identify from where AD will acquire these identifiers.
2. Check whether the incoming SID has a mapping in the ID Mapper.
3. Try name lookups in the available UID/GID sources. This can be a local
   (sam.db) lookup, as well as LDAP and/or NIS directory services. By
   default, external mappings from name lookups are not written to the ID
   Mapper database.
4. Allocate a UID/GID.

The isi auth mapping token command includes options for displaying a user's
authentication information by a list of parameters, including user name and
UID. This allows for detailed examination of identities on OneFS.

You can configure ID mappings on the Access page: expand the Membership &
Roles menu, and then click User Mapping. When you configure the settings on
this page, the settings are persistent until changed. These settings can have
complex implications, so if you are in any doubt as to the implications, the
safe option is to talk to Isilon support staff and establish what the likely
outcome will be.

UIDs, GIDs, and SIDs are the primary identifiers of identity. Names, such as
usernames, are classified as secondary identifiers, because different systems
such as LDAP and Active Directory may not use the same naming convention to
create object names, and there are many variations in the way a name can be
entered or displayed. Some examples of this include the following:

- UNIX assumes unique, case-sensitive namespaces for users and groups. For
  example, Name and name can represent different objects.
- Windows provides a single namespace for all objects that is not
  case-sensitive, but specifies a prefix that targets a specific Active
  Directory domain; for example, domain\username.
- Kerberos and NFSv4 define principals, which require that all names have a
  format similar to email addresses; for example, name@domain.

As an example, given the name support and the domain EXAMPLE.COM, then
support, EXAMPLE\support, and support@EXAMPLE.COM are all names for a single
object in Active Directory.
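For example, the token and the mappings behind it can be inspected per user
from the CLI; the domain and account below are placeholders, and the flag
spellings should be verified for your release:

  # Show the internal access token OneFS builds for a user:
  isi auth mapping token --user 'EXAMPLE\jsmith'
  # The same lookup can be made by UID:
  isi auth mapping token --uid 2070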
OneFS also uses an on-disk identity to transparently map identities for
different protocols. On-disk identities map identities at a global level for
individual protocols. It is important to choose the preferred identity to
store on disk, because most protocols require some level of mapping to
operate correctly. Only one set of permissions, POSIX-compatible or
Microsoft, is authoritative; the on-disk identity helps the system decide
which is the authoritative representation of an object's permissions. The
authoritative representation preserves the file's original permissions. Using
on-disk identities, you can choose whether to store the UNIX or the Windows
identity, or allow the system to determine the correct identity to store. The
available on-disk identity types are UNIX, SID, and Native.

If the UNIX on-disk identity type is set, the system always stores the UNIX
identifier, if available. During authentication, the lsassd daemon looks up
any incoming SIDs in the configured authentication sources; if a UID/GID is
found, the SID is converted to either a UID or GID. If a UID/GID does not
exist on the cluster, whether it is local to the client or part of an
untrusted AD domain, the SID is stored instead. This setting is recommended
for NFSv2 and NFSv3, which use UIDs and GIDs exclusively.

If the SID on-disk identity type is set, the system always stores a SID, if
available. During the authentication process, lsassd searches the configured
authentication sources for SIDs to match to an incoming UID or GID. If no SID
is found, the UNIX ID is stored on disk.

If the Native on-disk identity is set, the lsassd daemon attempts to choose
the correct identity to store on disk by running through each of the ID
mapping methods. If a user or group does not have a real UNIX identifier (UID
or GID), it stores the SID. This is the default setting in OneFS 6.5 and
later.
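The setting might be inspected or changed from the CLI along these lines.
This is a sketch in the OneFS 7.2 style; the exact command path is an
assumption, so verify it on your release:

  # View the current global on-disk identity setting:
  isi auth settings global view
  # Store native identities (the default on OneFS 6.5 and later):
  isi auth settings global modify --on-disk-identity native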
Lesson 5

Deduplication on Isilon is an asynchronous batch job that runs transparently
to the user. Stored data on the cluster is inspected, block by block, and one
copy of each set of duplicate blocks is saved. OneFS does deduplication by
deduplicating blocks: file records point to the shared blocks, but file
metadata is not deduplicated. The user should not experience any difference
except greater efficiency in data storage on the cluster, because the
user-visible metadata remains untouched; only internal metadata is altered.

Deduplication on Isilon is a relatively nonintrusive process. Rather than
increasing the latency of write operations by deduplicating data on the fly,
it is done after the fact. This means that the data starts out at its full
literal size on the cluster's drives, and might only reduce to its
deduplicated, more efficient representation hours or days later.

Another limitation is that deduplication does not occur across the length and
breadth of the entire cluster, but only within each disk pool individually.

How Deduplication Works on OneFS

Deduplication identifies identical blocks of storage duplicated across the
pool. Instead of storing the blocks in multiple locations, deduplication
stores them in one location. Deduplication reduces storage expenses by
reducing storage needs: less duplicated data means fewer blocks are required
to store it.

Phase Sequence

The process of deduplication consists of four phases:

1. The first phase is sampling, in which blocks in files are taken for
   measurement and hash values are calculated.
2. In the second phase, blocks are compared with each other using the sampled
   data.
3. In the sharing phase, blocks that match are written to shared locations,
   and that data is used for all the files that contain the duplicate blocks.
4. Finally, the index of blocks is updated to reflect what has changed.

The deduplication dry run consists of three phases: the sharing phase is
missing compared to the full deduplication job. Since sharing is the slowest
phase, the dry run allows customers to get a fairly quick overview of how
much data storage they are likely to reclaim through deduplication. The dry
run has no licensing requirement, so customers can run it before they pay for
deduplication.
After enabling the deduplication license, you can find Deduplication under
the File System tab. From this screen you can start a deduplication job and
view any reports that have been generated. You can also make alterations to
settings in terms of which paths are deduplicated.
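The same operations can be sketched from the CLI using OneFS 7.1/7.2-era job
engine names; the DedupeAssessment job is the dry run described above, and
the exact command forms vary by release:

  # Dry run first, to estimate the reclaimable space:
  isi job jobs start DedupeAssessment
  # Run the real deduplication job, then review the savings:
  isi job jobs start Dedupe
  isi dedupe stats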
In a UNIX environment, you modify permissions for users, groups, and others
(everyone else who has access to the computer) to allow or deny file and
directory access as needed. The lower 9 bits of a file's mode are grouped as
three 3-bit sets, called triplets, which contain the read (r), write (w), and
execute (x) permissions for each class of users (owner, group, other). The
information in the upper 7 bits can also encode what can be done with the
file, although it has no bearing on file ownership. An example of such a
setting is the so-called "sticky bit."

You can modify the user and group ownership of files and directories, and set
permissions for the owner user, owner group, and other users on the system.
You can modify UNIX permissions in the web administration interface by
expanding the File System menu and then clicking File System Explorer.

OneFS supports the standard UNIX tools for changing permissions, chmod and
chown. The chown command is used to change the ownership of a file; you must
have root user access to change the owner of a file.

OneFS does not support POSIX ACLs, which are different from Windows ACLs.
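For example (the paths here are hypothetical):

  # Owner gets full access; group and others get read and execute (mode 755):
  chmod 755 /ifs/data/students/tina
  # Set the sticky bit on a shared directory so users can delete only their own files:
  chmod +t /ifs/data/shared
  # Change the owner and group of a directory (requires root access):
  chown tina:students /ifs/data/students/tina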

Windows Client Effective Permissions

NFS exports and SMB shares on the cluster can be configured for the same data.

Mixed Environments

To cause cluster permissions to operate with UNIX semantics,


as opposed to Windows semantics, click UNIX only. By
enabling this option, you prevent ACL creation on the system.
To cause cluster permissions to operate in a mixed UNIX and
By default, OneFS is configured with the optimal settings for a Windows environment, click Balanced.
OneFS support a set of global policy settings that enable you to customize the In OneFS, on the Protocols menu,
mixed UNIX and Windows environment; however, you can configure The ACL Policies page appears.
default ACL and UNIX permissions settings to best support your environment click ACLs. To cause cluster permissions to operate with Windows
ACL policies if necessary to optimize for UNIX or Windows.
semantics, as opposed to UNIX semantics, click Windows
only.
If Configure permission policies manually is selected, it enables
fine tuning of the ACL creations and modifications.

When you assign UNIX permissions to a file, no ACLs are stored for that file.
However, a Windows system processes only ACLs; Windows does not process
UNIX permissions. Therefore, when you view a file’s permissions on a Windows
Lesson 2 system, the Isilon cluster must translate the UNIX permissions into an ACL.

Synthetic ACLs are the cluster’s translation of UNIX permissions so they can be
understood by a Windows client. If a file also has Windows-based ACLs (and not
only UNIX permissions), it is considered by OneFS to have advanced ACLs.
If a file has UNIX permissions, you may notice synthetic ACLs when you run the ls
–le command on the cluster in order to view a file’s ACLs. Advanced ACLs display
a plus (+) sign when listed using an isi –l command.
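For example, both listings can be run from any node (the path is
hypothetical):

  # View a file's ACL; a synthetic ACL appears for files carrying only mode bits:
  ls -le /ifs/data/students/tina/report.txt
  # In a plain long listing, files with advanced (real) ACLs show a + sign:
  ls -l /ifs/data/students/tina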
Permissions Overview

OneFS also stores permissions on disk. OneFS stores an internal
representation of the permissions of a file system object, such as a
directory or a file. The internal representation, which can contain
information from either the POSIX mode bits or the ACLs, is based on RFC
3530, which states that a file's permissions must not make it appear more
secure than it really is. The internal representation can be used to generate
a synthetic ACL, which approximates the mode bits of a UNIX file for an SMB
client. Since OneFS derives the synthetic ACL from mode bits, it can express
only as much permission information as mode bits can, and not more.

Since the ACL model is richer than the POSIX model, no permissions
information is lost when POSIX mode bits are mapped to ACLs. When ACLs are
mapped to mode bits, however, the ACLs must be approximated as mode bits, and
some information may be lost.

Authorization Process

OneFS supports two types of authorization data on a file: access control
lists (ACLs) and UNIX permissions. OneFS compares the access token presented
during the connection with the authorization data found on the file. All user
and identity mapping occurs during token generation, so no mapping is
performed when evaluating permissions.

Managing ACL Permissions

ACL policies control how permissions are managed and processed. OneFS
supports a set of global policy settings that enable you to customize the
default ACL and UNIX permissions settings to best support your environment.
By default, OneFS is configured with the optimal settings for a mixed UNIX
and Windows environment; however, you can configure ACL policies if necessary
to optimize for UNIX or Windows. In OneFS, on the Protocols menu, click ACLs;
the ACL Policies page appears. To configure the type of authorization to use
in your environment:

- Click UNIX only for cluster permissions to operate with UNIX semantics, as
  opposed to Windows semantics. Enabling this option prevents ACL creation on
  the system.
- Click Balanced for cluster permissions to operate in a mixed UNIX and
  Windows environment. This setting is recommended for most cluster
  deployments.
- Click Windows only for cluster permissions to operate with Windows
  semantics, as opposed to UNIX semantics. If you enable this option, the
  system returns an error on UNIX chmod requests.
- Click Configure permission policies manually to fine-tune the individual
  permission-policy settings that control ACL creation and modification.
Lesson 3

Enable SMB

In the web administration interface, click PROTOCOLS, click Windows Sharing
(SMB), and then click SMB Settings. The SMB Server Settings page contains the
global settings that determine how the SMB file sharing service operates.
These settings include enabling or disabling support for the SMB service; the
SMB service is enabled by default.

You can also set how a Windows client is authorized when connecting to the
SMB shares that you create. The choices are Anonymous and User. Anonymous
mode allows users to access files without providing any credentials. User
mode allows users to connect with credentials that are defined in an external
source. You can also join the cluster to an Active Directory domain to allow
users in an Active Directory domain to authenticate with their AD
credentials.

Anonymous access to an Isilon cluster uses the special nobody identity to
perform file-sharing operations. When the nobody identity is used, all files
and folders created using SMB are owned by the nobody identity. You cannot
apply file permissions to the nobody account, so using Anonymous mode gives
access to all files in the share. Some SMB clients, like Apple clients, are
prompted to authenticate in Anonymous mode; in this case, log in as guest
with no password.
Add an SMB Share

The Advanced Settings include the SMB server settings (the behavior of
snapshot directories) and the SMB share settings (file and directory
permissions settings, performance settings, and security settings). To apply
a default ACL to the shared directory, click Apply Windows Default ACLs. If
the Auto-Create Directories setting is selected, an ACL with the equivalent
of UNIX 700 mode bit permissions is created for any directory that is
automatically created.

In the command-line interface, you can create shares using the isi smb shares
create command. You can also use isi smb shares modify to edit a share, and
isi smb shares list to view the current Windows shares on a cluster.

OneFS supports the automatic creation of SMB home directory paths for users.
Using variable expansion, user home directories are automatically
provisioned. Home directory provisioning enables you to create a single home
share that redirects users to their SMB home directories; a new directory is
automatically created if one does not already exist.
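A home share of this kind might be created as follows. This is a sketch in
the OneFS 7.2 style; %U expands to the connecting user's name, and the flag
names are assumptions to verify against your release:

  # One share that lands each user in an automatically created home directory:
  isi smb shares create home --path=/ifs/home/%U --allow-variable-expansion=yes --auto-create-directory=yes
  # Confirm the share:
  isi smb shares list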
Lesson 4

Enable NFS

Network File System (NFS) is a protocol that allows a client computer to
access files over a network. It is an open standard that is used by UNIX
clients. You can configure NFS to allow UNIX clients to address content
stored on Isilon clusters. NFS is enabled by default on the cluster; however,
you can disable it if it is not needed.

Isilon supports NFS protocol versions 3 and 4, and Kerberos authentication is
supported. You can apply individual host rules to each export, or you can
specify all hosts, which eliminates the need to create multiple rules for the
same host. When multiple exports are created for the same path, the more
specific rule takes precedence.

In the web administration interface, click PROTOCOLS > UNIX Sharing (NFS),
and then select Global Settings. The NFS service settings are the global
settings that determine how the NFS file sharing service operates. Support
for NFS version 3 is enabled by default, and NFSv4 is disabled by default; if
NFSv4 is enabled, the name for the NFSv4 domain needs to be specified in the
NFSv4 domain box.

The Lock Protection Level setting allows the NFS lock state to be preserved
when a node fails in the cluster. The number set is the number of nodes that
can fail simultaneously while still preserving the lock state.

Other configuration steps on the NFS Settings page include the ability to
reload the cached NFS exports configuration to ensure that any DNS or NIS
changes take effect immediately, to customize the user/group mappings, and to
set the security types (UNIX and/or Kerberos), as well as other advanced NFS
settings.
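An export might be created from the CLI along these lines (an OneFS 7.2-style
sketch; the path and client subnet are placeholders, and flags vary by
release):

  # Export a directory to a single client subnet:
  isi nfs exports create /ifs/data/projects --clients 10.10.0.0/24
  # Review the configured exports:
  isi nfs exports list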
If no clients are listed in any entries, no client restrictions apply to
attempted mounts.

NFSv3 and NFSv4 Compared

NFSv3 does not track state. A client can be redirected to another node, if
configured, without interruption to the client. NFSv4 tracks state, including
file locks, so automatic failover is not an option in NFSv4.

NFSv4 can use Windows Access Control Lists (ACLs). NFSv4 mandates strong
authentication; it can be used with or without Kerberos. NFSv4 drops support
for UDP communications and uses only TCP, because of the need for larger
packet payloads than UDP will support.

In NFSv4, file caching can be delegated to the client: a read delegation
implies a guarantee by the server that no other clients are writing to the
file, while a write delegation means no other clients are accessing the file
at all. NFSv4 also adds byte-range locking, moving this function into the
protocol itself; NFSv3 relied on NLM for file locking.

NFSv4 exports are mounted and browsable in a unified hierarchy on a pseudo
root (/) directory. This differs from previous versions of NFS.
Lesson 5

The cluster uses the HTTP service in two ways: as a means to request files
stored on the cluster, and to interact with the web administration interface.
The cluster also provides Distributed Authoring and Versioning (DAV)
services, which enable multiple users to manage and modify files.

Each node in the Isilon cluster can run an instance of the Apache web server
to provide HTTP access. You can configure the HTTP service to run in one of
three modes: enabled, disabled, and disabled entirely. Enabled mode allows
HTTP access for cluster administration and for browsing content on the
cluster. Disabled mode allows only administrative access to the web
administration interface. Disabled entirely mode closes port 80, the port
used for HTTP file access; users can still access the web administration
interface, but they must specify port 8080 in the URL to connect
successfully.

You may select one of the following options for Active Directory
authentication:

- Off: No Active Directory authentication; this is the default setting.
- Basic Authentication Only: Enables HTTP basic authentication. User
  credentials are sent in plain text.
- Integrated Authentication Only: Enables HTTP authentication via NTLM,
  Kerberos, or both.
- Integrated and Basic Authentication: Enables both basic and integrated
  authentication.
- Basic Authentication with Access Controls: Enables HTTP authentication via
  NTLM and Kerberos, and enables the Apache web server to perform access
  checks.
- Integrated and Basic Auth with Access Controls: Enables HTTP basic
  authentication and integrated authentication, and enables access checks via
  the Apache web server.
The Isilon cluster supports FTP access; however, the FTP service is disabled
by default. Any node in the cluster can respond to FTP requests, and any
standard user account can be used. To enable and configure FTP access on the
cluster, navigate to the FTP Protocol page at PROTOCOLS > FTP Settings.

Select one of the following Service settings:

- Server-to-server transfers: Enables the transfer of files between two FTP
  servers. This setting is disabled by default.
- Anonymous access: Enables users with "anonymous" or "ftp" as the user name
  to access files and directories. With this setting enabled, authentication
  is not required. This setting is disabled by default.
- Local access: Enables local users to access files and directories with
  their local user name and password. Enabling this setting allows local
  users to upload files directly through the file system. This setting is
  enabled by default.

You can enable the anonymous FTP service on the root by creating a local user
named ftp. The FTP root can be changed for any user by changing the user's
home directory. Local access enables authentication of FTP users using any of
the authentication methods enabled on the cluster.
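From the CLI, the service itself can be toggled; vsftpd is how the FTP daemon
is commonly listed on OneFS 7.x nodes, but treat the exact command form as an
assumption for your release:

  # Enable the FTP service on the cluster:
  isi services vsftpd enable
  # Verify that the service is now enabled:
  isi services -a | grep vsftpd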
Lesson 6: OSX Support

OSX can use NFS or SMB to save files to an Isilon cluster. When an OSX
computer saves a file to another OSX computer, it appears that only one file
is saved, but OSX files are composed of two sets of data, called forks: a
data fork and a resource fork. The data fork is the file's raw data, whether
it is application code, raw text, or image data. The resource fork contains
metadata, which is not visible to OSX users on an OSX HFS+ volume; only the
file content is visible. But when an OSX client uses NFS or SMB to save files
to an Isilon cluster, the user does see two files.

The storage administrator can avoid this problem by ensuring that OSX clients
all reach the same files on an Isilon cluster through the same protocol.
Either NFS or SMB can work, so the choice of protocol depends on factors such
as established infrastructure, performance measurements, and so on.