Administrator Guide
Contents
Working with the StorageGRID Webscale system .................................... 8
Web browser requirements .......................................................................................... 8
Signing in to the Grid Manager ................................................................................... 9
Signing out of the Grid Manager ............................................................................... 10
Changing your password ........................................................................................... 10
Changing the browser session timeout period ........................................................... 11
Viewing StorageGRID Webscale license information .............................................. 12
Updating StorageGRID Webscale license information ............................................. 12
Understanding the Grid Management API ................................................................ 12
Grid Management API versioning ................................................................ 15
Protecting against Cross-Site Request Forgery (CSRF) ................................ 16
Monitoring the StorageGRID Webscale system ...................................... 18
Viewing the Dashboard ............................................................................................. 18
Viewing the Nodes page ............................................................................................ 20
Viewing common tabs for all node types ...................................................... 21
Viewing the Objects and ILM tabs for Storage Nodes .................................. 26
Viewing the Grid Topology tree ................................................................................ 28
Understanding node icons ......................................................................................... 28
Information that you should monitor regularly ......................................................... 29
Key attributes to monitor ............................................................................... 30
Monitoring storage capacity .......................................................................... 30
Monitoring the recovery point objective through ILM ................................. 37
Monitoring object verification operations ..................................................... 38
Monitoring archival capacity ......................................................................... 40
Monitoring servers and grid nodes ................................................................ 40
Monitoring the Total Events alarm ................................................................ 54
About alarms and email notifications ........................................................................ 55
Alarm notification types ................................................................................ 55
Notification status and queues ....................................................................... 55
Configuring notifications .............................................................................. 56
Suppressing email notifications for a mailing list ......................................... 61
Suppressing email notifications system wide ................................................ 62
Selecting a preferred sender .......................................................................... 63
Managing alarms ....................................................................................................... 63
Alarm class types .......................................................................................... 64
Alarm triggering logic ................................................................................... 66
Creating custom service or component alarms ............................................. 70
Creating Global Custom alarms .................................................................... 72
Disabling alarms ........................................................................................................ 74
Alarms and tables .......................................................................................... 74
Disabling Default alarms for services ........................................................... 74
4 | StorageGRID Webscale 11.1 Administrator Guide
Related information
Grid primer
• You must be using the default security certificate created during installation, or you must
have installed a custom certificate.
• Alarm acknowledgments made through one Admin Node are not copied to other Admin Nodes.
Therefore, the Grid Topology tree might not look the same for each Admin Node.
• Some maintenance procedures can only be performed from the primary Admin Node.
Steps
1. Launch a supported web browser.
2. In the browser's address bar, enter the IP address or fully qualified domain name of the Admin
Node.
The StorageGRID Webscale system's Sign In page appears.
3. If you are prompted with a security alert, view and install the certificate using the browser’s
installation wizard.
The alert will not appear the next time you access this URL.
4. Enter your case-sensitive username and password, and click Sign In.
The home page of the Grid Manager appears, which includes the Dashboard.
Related concepts
Configuring certificates on page 227
Related references
Web browser requirements on page 8
Steps
1. From the Grid Manager header, select your name > Change password.
5. Click Save.
Steps
5. Sign in again.
• You must have a new license file to apply to your StorageGRID Webscale system.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. Enter the provisioning passphrase for your StorageGRID Webscale system in the Provisioning
Passphrase text box.
3. Click Browse.
4. In the Open dialog box, locate and select the new license file (.txt), and click Open.
The new license file is validated and displayed.
5. Click Save.
2. Select Help > API Docs from the web application header.
API operations
The Grid Management API organizes the available API operations into the following sections:
• accounts – Operations to manage storage tenant accounts, including creating new accounts and
retrieving storage usage for a given account.
• alarms – Operations to list current alarms, and return information about the health of the grid.
• compliance – Operations to manage global compliance settings for the StorageGRID Webscale
system.
• config – Operations related to the product release and versions of the Grid Management API. You
can list the product release version and the major versions of the Grid Management API
supported by that release, and you can disable deprecated versions of the API.
• groups – Operations to manage local Grid Administrator Groups and to retrieve federated Grid
Administrator Groups from an external LDAP server.
Note: The Grid Management API uses the Prometheus systems monitoring tool as the backend
data source. For information about constructing Prometheus queries, see the Prometheus web
site.
• ntp-servers – Operations to list or update external Network Time Protocol (NTP) servers.
Operation details
When you expand each API operation, you can see its HTTP action, endpoint URL, a list of any
required or optional parameters, an example of the request body (when required), and the possible
responses.
Steps
1. Click the HTTP action to see the request details.
2. Determine if the request requires additional parameters, such as a group or user ID. Then, obtain
these values. You might need to issue a different API request first to get the information you need.
3. Determine if you need to modify the example request body. If so, you can click Model to learn
the requirements for each field.
6. Click Execute.
Top-level resources
The Grid Management API provides the following top-level resources:
• /org: Access is restricted to users who belong to a local or federated LDAP group for a tenant
account. For details, see the information about using tenant accounts.
• /private: Access is restricted to Grid Administrator users. These APIs are intended for internal
use only and are not publicly documented. These APIs are also subject to change without notice.
Related concepts
Protecting against Cross-Site Request Forgery (CSRF) on page 16
Related information
Using tenant accounts
Prometheus: Query basics
Changes in the Grid Management API that are backward incompatible bump the major version of the
API. For example, an incompatible API change bumps the version from 1.1 to 2.0. Changes that are
backward compatible, such as the addition of new endpoints or new properties, bump the minor
version instead. For example, a compatible API change bumps the version from 1.0 to 1.1.
When you install StorageGRID Webscale software for the first time, only the most recent version of
the Grid Management API is enabled. However, when you upgrade to a new major version of
StorageGRID Webscale, you continue to have access to the older API version for at least one major
StorageGRID Webscale release.
Note: You can use the Grid Management API to configure the supported versions. See the “config”
section of the Swagger API documentation for more information. You should deactivate support
for the older version after updating all Grid Management API clients to use the newer version.
GET https://{{IP-Address}}/api/versions
{
"responseTime": "2016-10-03T14:49:16.587Z",
"status": "success",
"apiVersion": "2.0",
"data": [
1,
2
]
}
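A client can use this response to choose the newest major version that both it and the server
support. The following Python sketch parses the sample response shown above (the helper name and
the client's supported-version list are illustrative, not part of the product):

```python
import json

def newest_supported_version(response_text, client_supported=(1, 2)):
    """Return the highest Grid Management API major version that both
    the server (the "data" array in the response) and the client support."""
    body = json.loads(response_text)
    common = set(body["data"]) & set(client_supported)
    if not common:
        raise RuntimeError("no common Grid Management API version")
    return max(common)

# The sample response body from above, abbreviated:
sample = '{"status": "success", "apiVersion": "2.0", "data": [1, 2]}'
print(newest_supported_version(sample))  # prints 2
```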
When CSRF protection is enabled, a GridCsrfToken cookie is set with a random value for sign-ins
to the Grid Manager, and an AccountCsrfToken cookie is set with a random value for sign-ins to
the Tenant Manager.
If the cookie is present, all requests that can modify the state of the system (POST, PUT, PATCH,
DELETE) must include one of the following:
• The X-Csrf-Token header, with the value of the header set to the value of the CSRF token
cookie.
• For endpoints that accept a form-encoded body: A csrfToken form-encoded request body
parameter.
See the online API documentation for additional examples and details.
Note: Requests that have a CSRF token cookie set will also enforce the "Content-Type:
application/json" header for any request that expects a JSON request body as an additional
protection against CSRF attacks.
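To illustrate the rule above, a client that receives the GridCsrfToken cookie at sign-in must
echo its value in the X-Csrf-Token header on any state-changing request. A minimal Python sketch,
assuming the cookie arrives in a standard Set-Cookie header (this is an illustration, not a
documented client library):

```python
from http.cookies import SimpleCookie

def csrf_headers(set_cookie_header):
    """Build the headers that a state-changing request (POST, PUT,
    PATCH, DELETE) needs when CSRF protection is enabled: the value of
    the GridCsrfToken cookie echoed in the X-Csrf-Token header, plus
    the Content-Type header that such requests must carry."""
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    token = cookie["GridCsrfToken"].value
    return {
        "X-Csrf-Token": token,
        "Content-Type": "application/json",
    }

# Example with a placeholder token value:
print(csrf_headers("GridCsrfToken=abc123; Path=/; Secure")["X-Csrf-Token"])  # prints abc123
```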
• Dashboard
• Nodes page
Related concepts
Viewing the Dashboard on page 18
Viewing the Nodes page on page 20
• Health: Provides an indication of the system's health by showing the number of disconnected grid
nodes and the number of current alarms. Shows license status if there are license-related alerts.
• Information Lifecycle Management (ILM): Displays current ILM operations and ILM queues
for your StorageGRID Webscale system.
• Protocol Operations: Displays the number of protocol specific operations performed by your
StorageGRID Webscale system. You can use this information to monitor your system's workloads
and efficiencies.
• Available Storage: Shows the available and used storage capacity on the grid, not including
archival media. With this information, you can compare the used storage with the available
storage and, in a multi-site grid, determine which site is consuming more storage.
To view more detailed information about each panel in the Grid Manager, click the help icon for
that panel. Detailed descriptions are also listed in the following table.
Available Storage
Displays the available and used storage capacity in the entire grid, not including archival
media. The Overall chart presents grid-wide totals. If this is a multi-site grid, additional
charts appear for each data center site. With this information, you can compare the used storage
with the available storage and, in multi-site grids, determine which site is consuming more.
• To view the capacity, place your cursor over the chart's available and used capacity
sections.
• To change the date range or select other data, click the chart icon in the upper right of the
panel. The default date range in data charts on this page is one month.
• To see available storage details, click Grid. Then, view the details for the entire grid, an
entire site, or a single Storage Node.
For details about reviewing Storage Node information, see the section about managing disk
storage.

Information Lifecycle Management (ILM)
Displays current ILM operations and ILM queues for your system. With this information, you can
monitor your system's workload.
• To see the existing ILM rules, click ILM > Rules.
• To see the existing ILM policies, click ILM > Policies.
For details about managing ILM rules and policies, see the section about managing objects
through information lifecycle management.
Related concepts
Managing alarms on page 63
Viewing the Nodes page on page 20
Managing disk storage on page 178
Managing objects through information lifecycle management on page 106
Creating and managing tenant accounts on page 92
Related information
Grid primer
Using tenant accounts
When you first select Nodes, the Nodes home page provides an overview of your StorageGRID
Webscale grid deployment. You can select the following tabs to view information for the entire grid:
To view information for a particular node, click the appropriate link on the left. The informational
tabs for each node vary by node type.
The graphs on the Nodes page use the Grafana visualization tool and the Prometheus systems
monitoring tool. Grafana displays time-series data in graph and chart formats, while Prometheus
serves as the backend data source.
To display a different time interval, select one of the controls above the chart or graph. You can select
to display the information available for intervals of 1 hour, 1 day, 1 week, 1 month, or 1 year. Or you
can set a custom interval, which allows you to specify date and time ranges.
• Admin Nodes, Archive Nodes, and Gateway Nodes each contain a list of disk devices and
volumes on the node.
• Storage Nodes contain graphs showing data storage and metadata storage used over time, as well
as a list of disk devices and volumes on the node.
Related concepts
Monitoring servers and grid nodes on page 40
Monitoring storage capacity on page 30
Related information
Grid primer
Troubleshooting StorageGRID Webscale
In the ILM tab, you can view information about ILM operations.
Related concepts
Managing objects through information lifecycle management on page 106
Related information
Grid primer
To expand or collapse the Grid Topology tree, click the expand (+) or collapse (-) icon at the
site, node, or service level. To expand or collapse all items in the entire site or in each
node, hold down the <Ctrl> key and click.
Task                                                                Frequency
Monitor system status. Note what has changed from the previous day. Daily
Monitor the rate at which Storage Node capacity is being consumed.  Weekly
Check the capacity of the external archival storage system.         Weekly
When monitoring capacity, look at the absolute value and at the rate at which capacity is being
consumed over time. Consumption rates can help you estimate when additional capacity might be
required.
Metadata storage capacity: Select Nodes > Storage Node. Hover over the Storage Used - Object
Metadata graph to see the percentage of allowed space consumed by object metadata. This value is
the Metadata Used Space (Percent) (CDLP) attribute.
Note: The attributes for storage capacity and metadata storage capacity do not include the capacity
used for archived content.
Steps
1. Monitoring storage capacity for the entire grid on page 30
2. Monitoring storage capacity per Storage Node on page 33
3. Monitoring object metadata capacity per Storage Node on page 35
You can monitor the total storage capacity for your entire StorageGRID Webscale system. In a
multi-site grid, you can compare storage usage between sites (data centers).
Steps
1. Select Dashboard.
2. In the Available Storage panel, note the overall summary of free and used storage capacity. For
multi-site grids, review the chart for each data center.
Note: The summary does not include archival media.
3. Place your cursor over the chart's Free or Used capacity sections to see exactly how much space is
free or used.
4. Click Chart (Reports) to view a graph showing capacity usage over time for Overall storage or
for individual data centers.
A graph showing Percentage Storage Capacity Used (%) vs. Time appears.
Example
In the following example, usable storage space is being consumed at a rate of approximately 4%
per month, which means that there are eight months left before this data center runs out of space.
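The estimate in this example is simple linear extrapolation: the remaining free percentage
divided by the monthly consumption rate. A sketch using the example's numbers (real capacity
planning should also consider non-linear growth):

```python
def months_until_full(percent_used, percent_per_month):
    """Estimate months until storage is exhausted, assuming the
    consumption rate stays constant (linear growth)."""
    if percent_per_month <= 0:
        raise ValueError("consumption rate must be positive")
    return (100.0 - percent_used) / percent_per_month

# A data center that is 68% full and consuming about 4% per month:
print(months_until_full(68, 4))  # prints 8.0
```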
• Go to Nodes > storage node > Storage and view the graphs and tables.
• Technical support might ask you to use this path: Select Support > Grid Topology. Then
select StorageGRID Webscale Deployment > Overview > Main.
6. To maintain normal system operations, add Storage Nodes, add storage volumes, or archive object
data before usable space is consumed.
Related information
Expanding a StorageGRID Webscale grid
Note: If the value of STAS for a Storage Node falls below the value of the Storage Volume Hard
Read-Only Watermark, the Storage Node becomes read-only and can no longer store new objects.
You should add storage volumes or Storage Nodes before this happens.
Steps
2. Hover your cursor over the Storage Used - Object Data graph.
A pop-up displays Used (%), Used, and Total capacities.
3. Review the values in the tables below the graphs. To view graphs of the values, click Chart in
the Available columns in the Volumes and Object Stores tables.
4. Monitor these values over time to estimate the rate at which usable storage space is being
consumed.
Usable space is the actual amount of storage space available to store objects.
5. To maintain normal system operations, add Storage Nodes, add storage volumes, or archive object
data before usable space is consumed.
Related concepts
What watermarks are on page 183
Related information
Expanding a StorageGRID Webscale grid
• CDLP = 70% (minor alarm): You should add new Storage Nodes as soon as possible.
• CDLP = 90% (major alarm): You should add new Storage Nodes immediately.
• CDLP = 100% (critical alarm): You must add new Storage Nodes immediately and you must stop
the ingest of new objects.
Note that when the Metadata Used Space (Percent) attribute reaches 70% (that is, when the Metadata
Allowed Space becomes 70% full), the CDLP alarm is triggered as a minor alarm. You should add
new Storage Nodes in an expansion procedure as soon as possible. When you add the new nodes, the
system automatically rebalances object metadata across all Storage Nodes, and the alarms clear.
Attention: When the Metadata Used Space (Percent) attribute reaches 90%, the CDLP alarm is
triggered as a major alarm, and a warning appears on the Dashboard. If this warning appears, you
must add new Storage Nodes immediately. You must never allow object metadata to use more than
100% of the allowed space.
Attention: If the first storage volume is smaller than the Metadata Reserved Space, the CDLP
calculation might be inaccurate.
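The CDLP thresholds described above can be summarized as a simple mapping from the Metadata Used
Space (Percent) value to an alarm severity. A sketch (the function name is illustrative; the
thresholds are the documented 70, 90, and 100 percent levels):

```python
def cdlp_severity(metadata_used_percent):
    """Map the Metadata Used Space (Percent) value to the CDLP alarm
    severity documented above."""
    if metadata_used_percent >= 100:
        return "critical"  # add Storage Nodes now and stop ingest of new objects
    if metadata_used_percent >= 90:
        return "major"     # add Storage Nodes immediately
    if metadata_used_percent >= 70:
        return "minor"     # add Storage Nodes as soon as possible
    return None            # no CDLP alarm

print(cdlp_severity(72))  # prints minor
```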
Steps
3. If the Used % (CDLP Metadata Used Space (Percent)) value is 70% or higher, expand the
StorageGRID Webscale system by adding Storage Nodes.
4. To view alarm details, select the Alarms tab from the Data Store Overview page, or select
Alarms from the top menu bar.
Example
CDLP (Metadata Used Space (Percent)) alarms of all severities (minor, major, and critical) are
displayed on this page. In the example, all the alarms listed are critical.
5. If a major or critical CDLP alarm appears on the Current Alarms page, add Storage Nodes
immediately.
When you add the new nodes, the system automatically rebalances object metadata across all
Storage Nodes, and the alarms clear.
Related concepts
What watermarks are on page 183
Related information
Expanding a StorageGRID Webscale grid
• Awaiting - Background: excludes repair of replicated copies with only one copy remaining.
• Awaiting - Client: includes new ingests and metadata updates; excludes deletions.
• Deletions.
Ingest or other activity can exceed the rate at which the system can process ILM. When this scenario
occurs, the system will begin to queue objects whose ILM can no longer be fulfilled in near real time.
In the example shown, the chart of the Awaiting - Client attribute indicates that the number of
objects awaiting ILM evaluation temporarily increased in an unsustainable manner and then
eventually decreased. This trend indicates that ILM was temporarily not fulfilled in near real
time.
Steps
3. In the ILM Activity section, review the key attributes for ILM evaluations:
Awaiting - All
The total number of objects awaiting ILM evaluation.
Awaiting - Client
The total number of objects awaiting ILM evaluation from client operations (for example,
ingest).
Scan Rate
The rate at which objects in the grid are scanned and queued for ILM.
Scan Period - Estimated
The estimated time to complete a full ILM scan of all objects.
Note: A full scan does not guarantee that ILM has been applied to all objects.
Related information
Grid primer
Steps
• To check replicated object data verification, select Storage Node > LDR > Verification >
Overview > Main.
• To check erasure coded fragment verification, select Storage Node > LDR > Erasure
Coding > Overview > Main.
Steps
3. Check the Store State and Store Status attributes to confirm that the Store component is Online
with No Errors.
An offline Store component, or one with errors, might indicate that the targeted archival
storage system can no longer accept object data because it has reached capacity.
• Services
• NTP synchronization
Services
The Services component tracks the services and support modules running on a grid node. It reports
the service’s current version, status, the number of threads (CPU tasks) running, the current CPU
load, and the amount of RAM being used.
The services are listed as well as the support modules (such as time synchronization). Also listed is
the operating system and the StorageGRID Webscale software version installed on the grid node.
The status of a service is either Running or Not Running. A service is listed with a status of Not
Running when its state is Administratively Down.
Related concepts
Alarm notification types on page 55
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. Make sure that any event with a count greater than 0 has been resolved.
4. Select the Reset check boxes for the specific counters you want to reset.
Resources
The SSM service uses the standard set of resources attributes that report on the service health and all
computational, disk device, and network resources. In addition, the Resources attributes report on
memory, storage hardware, network resources, network interfaces, network addresses, and receive
and transmit information. The Resources component of the SSM service provides the ability to reset
network error counters.
If the Storage Node is a StorageGRID Webscale appliance, appliance information appears in the
Resources section. For details, see the installation and maintenance instructions for your appliance.
Related information
SG6000 appliance installation and maintenance
Timing
The SSM service uses the set of timing attributes that report on the state of the grid node’s time and
the time recorded by neighboring grid nodes. In addition, the SSM Timing attributes report on NTP
Synchronization.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
1. Select Nodes.
3. Select Overview.
The Node Information table on the Overview tab displays the name of the node, the node type,
the software version installed, and the IP addresses associated with the node. The Interface
column contains the name of the interface. An eth interface is the Grid Network, Admin Network,
or Client Network; hic is a bonded port; and mtc (not shown in the following screen shot) is
used for management IP addresses on the appliance.
a. View the CPU Utilization and Memory graphs to determine the percentages of CPU and
memory usage over time. To display a different time interval, select one of the controls above
the chart or graph. You can select to display the information available for intervals of 1 hour, 1
day, 1 week, 1 month, or 1 year. Or you can set a custom interval, which allows you to specify
date and time ranges.
b. Scroll down to view the table of components for the appliance. This table contains information
such as the model name of the appliance; controller names, serial numbers, and IP addresses;
and the status of each component.
Note: Some fields, such as Compute Controller BMC IP and Compute Hardware, appear
only for appliances with that feature.
Appliance Model: The model number for this StorageGRID Webscale appliance shown in SANtricity
software.

Storage Controller Name: The name for this StorageGRID Webscale appliance shown in SANtricity
software.

Storage Controller A Management IP: IP address for management port 1 on storage controller A.
You use this IP to access SANtricity software to troubleshoot storage issues.

Storage Controller B Management IP: IP address for management port 1 on storage controller B.
You use this IP to access SANtricity software to troubleshoot storage issues.

Storage Controller WWID: The worldwide identifier of the storage controller shown in SANtricity
software.

Storage Appliance Chassis Serial Number: The chassis serial number of the appliance.

Storage Hardware: The overall status of the storage controller hardware. If the Storage Node is
a StorageGRID Webscale appliance and it needs attention, then both the StorageGRID Webscale and
SANtricity systems indicate that the storage hardware needs attention. If the status is "needs
attention," first check the storage controller using SANtricity software. Then, ensure that no
other alarms exist that apply to the compute controller.

Storage Controller Failed Drive Count: The number of drives that are not optimal.

Storage Controller A: The status of storage controller A.

Storage Controller B: The status of storage controller B. Some appliance models do not have a
storage controller B.

Storage Controller Power Supply A: The status of power supply A for the storage controller.

Storage Controller Power Supply B: The status of power supply B for the storage controller.

Storage Multipath Connectivity: The multipath connectivity state. For details about resolving
performance or fault tolerance issues, refer to the E-Series documents.

Overall Power Supply: The status of all power supplies for the appliance.

Compute Controller BMC IP: The IP address of the baseboard management controller (BMC) port in
the compute controller. You use this IP to connect to the BMC interface to monitor and diagnose
the appliance hardware. This field is not displayed for appliance models that do not contain a
BMC.

Compute Controller Serial Number: The serial number of the compute controller.

Compute Hardware: The status of the compute controller hardware. This field is not displayed for
appliance models that do not have separate compute hardware and storage hardware.

Compute Controller CPU Temperature: The temperature status of the compute controller's CPU.

Compute Controller Chassis Temperature: The temperature status of the compute controller.

Compute Controller Power Supply A: The status of power supply A for the compute controller.

Compute Controller Power Supply B: The status of power supply B for the compute controller.
For details about alarms in StorageGRID Webscale, see the information about
troubleshooting.
5. Use the following table with the values in the Speed column in the Network Interfaces table
to determine whether the 10/25-GbE network ports on the appliance were configured to use
active/backup mode or LACP mode.
Note: The values shown in the table assume all four links are used.
See the installation and maintenance instructions for your appliance for more information
about configuring the 10/25-GbE ports.
7. Select Storage to view graphs that show the percentages of storage used over time for object data
and object metadata, as well as information about disk devices, volumes, and object stores.
a. Scroll down to view the amounts of available storage for each volume and object store.
The Worldwide Name for each disk matches the volume world-wide identifier (WWID) that
appears when you view standard volume properties in SANtricity software (the management
software connected to the appliance's storage controller).
To help you interpret disk read and write statistics related to volume mount points, the first
portion of the name shown in the Name column of the Disk Devices table (that is, sdc, sdd,
sde, and so on) matches the value shown in the Device column of the Volumes table.
Related information
SG6000 appliance installation and maintenance
SG5700 appliance installation and maintenance
SG5600 appliance installation and maintenance
Troubleshooting StorageGRID Webscale
NetApp Documentation: SANtricity Storage Manager
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
For more information, see the troubleshooting information and the installation and maintenance
instructions for your appliance.
Steps
4. In the System Events section, note the count for Storage Hardware Events.
5. If a hardware event is noted here, identify the cause by completing the following steps:
Related information
Troubleshooting StorageGRID Webscale
SG6000 appliance installation and maintenance
SG5700 appliance installation and maintenance
SG5600 appliance installation and maintenance
Related information
Troubleshooting StorageGRID Webscale
Notifications are processed through the email notifications queue and are sent to the mail server one
after another in the order they are triggered. If there is a problem (for example, a network connection
error) and the mail server is unavailable when the attempt is made to send the notification, a best
effort attempt to resend the notification to the mail server continues for a period of 60 seconds. If the
notification is not sent to the mail server after 60 seconds, the notification is dropped from the
notifications queue and an attempt to send the next notification in the queue is made. Because
notifications can be dropped from the notifications queue without being sent, it is possible that an
alarm can be triggered without a notification being sent. In the event that a notification is dropped
from the queue without being sent, the MINS (E-mail Notification Status) Minor alarm is triggered.
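The queueing behavior described above can be modeled with a small sketch (hypothetical Python, not StorageGRID code; the retry interval is an assumption, since the passage only specifies the 60-second budget):

```python
import time

def deliver_with_best_effort(send, message, timeout=60.0, retry_interval=5.0,
                             clock=time.monotonic, sleep=time.sleep):
    """Model of the queue behavior: keep retrying a failed send for up
    to `timeout` seconds, then drop the message. Returns True if the
    message was sent, False if it was dropped (the point at which the
    MINS (E-mail Notification Status) alarm would be raised)."""
    deadline = clock() + timeout
    while True:
        try:
            send(message)
            return True
        except ConnectionError:
            if clock() >= deadline:
                return False  # dropped; move on to the next queued notification
            sleep(retry_interval)
```

The `clock` and `sleep` parameters exist only so the sketch can be exercised without real delays; the actual NMS service retry cadence is not documented here.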
For a StorageGRID Webscale system configured with multiple Admin Nodes (and thus multiple
NMS services), if the “standby” sender detects a Server Connection Error with the preferred sender,
it will begin sending notifications to the mail server. The standby sender will continue to send
notifications until it detects that the preferred sender is no longer in an error state and is again
successfully sending notifications to the mail server. Notifications in the preferred sender’s queue are
not copied to the standby sender. Note that in a situation where the preferred sender and the standby
sender are islanded from each other, duplicate messages can be sent.
Related tasks
Selecting a preferred sender on page 63
Configuring notifications
By default, notifications are not sent. You must configure the StorageGRID Webscale system to send
notifications when alarms are raised.
Steps
1. Configuring email server settings on page 57
2. Creating email templates on page 58
3. Creating mailing lists on page 59
4. Configuring global email notifications on page 60
Steps
Item Description
Mail Server: IP address of the SMTP mail server. You can enter a host name rather than an IP address if you have previously configured DNS settings on the Admin Node.
Port: Port number used to access the SMTP mail server.
Authentication: Enables authentication of the SMTP mail server. By default, authentication is Off.
Authentication Credentials: Username and password for the SMTP mail server. If Authentication is set to On, you must provide a username and password to access the SMTP mail server.
58 | StorageGRID Webscale 11.1 Administrator Guide
4. Under From Address, enter a valid email address that the SMTP server will recognize as the
sending email address. This is the official email address from which the alarm notification or
AutoSupport message is sent.
5. Optionally, send a test email to confirm that your SMTP mail server settings are correct.
a. In the Test E-mail > To box, add one or more addresses that you can access.
You can enter a single email address or a comma-separated list of email addresses. Because
the NMS service does not confirm success or failure when a test email is sent, you must be
able to check the test recipient's inbox.
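If you want to verify the SMTP settings independently of the NMS service (which does not confirm success or failure), a hypothetical check using Python's standard library is sketched below. All host names and addresses are placeholders for your own environment:

```python
import smtplib
from email.message import EmailMessage

def build_test_message(from_addr, to_addrs):
    """Build a minimal test message; to_addrs is a list of recipients."""
    msg = EmailMessage()
    msg["Subject"] = "StorageGRID SMTP settings test"
    msg["From"] = from_addr
    msg["To"] = ", ".join(to_addrs)
    msg.set_content("If this arrives, the SMTP settings are reachable.")
    return msg

def send_test_message(host, port, msg, username=None, password=None):
    """Deliver the message directly to the SMTP server configured for
    StorageGRID. Supplying credentials mirrors Authentication = On."""
    with smtplib.SMTP(host, port, timeout=10) as server:
        if username and password:
            server.login(username, password)
        server.send_message(msg)
```

For example, `send_test_message("mail.example.com", 25, build_test_message("grid-alerts@example.com", ["ops@example.com"]))` exercises the same path a notification would take.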
Related tasks
Creating mailing lists on page 59
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
Item Description
Template Name: Unique name used to identify the template. Template names cannot be duplicated.
Subject Prefix: Optional. Prefix that will appear at the beginning of an email's subject line. Prefixes can be used to easily configure email filters and organize notifications.
Header: Optional. Header text that appears at the beginning of the email message body. Header text can be used to preface the content of the email message with information such as company name and address.
Footer: Optional. Footer text that appears at the end of the email message body. Footer text can be used to close the email message with reminder information such as a contact phone number or a link to a web site.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
3. Click Edit (or Insert if this is not the first mailing list).
Item Description
Group Name: Unique name used to identify the mailing list. Mailing list names cannot be duplicated.
Note: If you change the name of a mailing list, the change is not propagated to the other locations that use the mailing list name. You must manually update all configured notifications to use the new mailing list name.
Related tasks
Creating email templates on page 58
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
Related tasks
Creating mailing lists on page 59
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. Click Edit next to the mailing list for which you want to suppress notifications.
3. Under Suppress, select the check box next to the mailing list you want to suppress, or select
Suppress at the top of the column to suppress all mailing lists.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
Related concepts
Troubleshooting AutoSupport messages on page 82
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
3. Select the Admin Node you want to set as the preferred sender from the drop-down list.
Managing alarms
Customizing alarms lets you customize your StorageGRID Webscale system based on your unique
monitoring requirements.
You can configure customized alarms either globally (Global Custom alarms) or for individual
services (Custom alarms). You can create customized alarms with alarm levels that override Default
alarms, and you can create alarms for attributes that do not have a Default alarm. Alarm
customization is restricted to accounts with the Grid Topology Page Configuration and Other Grid
Configuration permissions.
Attention: Using the Default alarm settings is recommended. Be very careful if you change alarm
settings. For example, if you increase the threshold value for an alarm, you might not detect an
underlying problem until it prevents a critical operation from completing. If you do need to change
an alarm setting, you should discuss your proposed changes with technical support.
Related concepts
Controlling system access with administration user accounts and groups on page 249
Related information
Troubleshooting StorageGRID Webscale
• Default: Standard alarm configurations. Default alarms are set during installation.
• Global Custom: Custom alarms that are set at a global level and that apply to all services of a
given type in the StorageGRID Webscale system. Global Custom alarms are configured after
installation to override default settings.
• Custom: Custom alarms that are set on individual services or components. Custom alarms are
configured after installation to override default settings.
Default alarms
Default alarms are configured on a global basis and cannot be modified. However, Default alarms can
be disabled or overridden by Custom alarms and Global Custom alarms.
Default alarms can be disabled both globally and at the services level. If a Default alarm is disabled
globally, the Enabled check box appears with an adjacent asterisk at the services level on the
Configuration page. The asterisk indicates that the Default alarm has been disabled through the
Configuration > Global Alarms page even though the Enabled check box is selected.
You can view the default alarms for a particular service or component. Select Support > Grid
Topology. Then select service or component > Configuration > Alarms.
Related tasks
Disabling Default alarms for services on page 74
Disabling a Default alarm system wide on page 75
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
Related tasks
Creating Global Custom alarms on page 72
Disabling Global Custom alarms for services on page 76
Disabling Global Custom alarms system wide on page 77
Custom alarms
Custom alarms can be created to override a Default alarm or Global Custom alarm at the service or
component level. You can also create new Custom alarms based on the service’s unique
requirements.
You can configure Custom alarms by going to each service's Configuration > Alarms page in the
Grid Topology tree.
Related tasks
Creating custom service or component alarms on page 70
2. Global Custom alarms with alarm severities from Critical down to Notice.
After an enabled alarm for an attribute is found in the higher alarm class, the NMS service only
evaluates within that class. The NMS service will not evaluate against the other lower priority
classes. That is, if there is an enabled Custom alarm for an attribute, the NMS service only evaluates
the attribute value against Custom alarms. Global Custom alarms and Default alarms are not
evaluated. Thus, an enabled Global Custom alarm for an attribute can meet the criteria needed to
trigger an alarm, but it will not be triggered because a Custom alarm (that does not meet the specified
criteria) for the same attribute is enabled. No alarm is triggered and no notification is sent.
Related concepts
What an Admin Node is on page 218
Threshold Values

Severity   Global Custom alarm (enabled)   Default alarm (enabled)
Notice     >= 1,500                        >= 1,000
Minor      >= 15,000                       >= 10,000
Major      >= 150,000                      >= 250,000
If the attribute is evaluated when its value is 1000, no alarm is triggered and no notification is sent.
The Global Custom alarm takes precedence over the Default alarm. A value of 1000 does not reach
the threshold value of any severity level for the Global Custom alarm. As a result, the alarm level is
evaluated to be Normal.
After the above scenario, if the Global Custom alarm is disabled, nothing changes. The attribute
value must be evaluated again before a new alarm level is triggered.
With the Global Custom alarm disabled, when the attribute value is evaluated again, it is
evaluated against the threshold values for the Default alarm. A Notice level alarm is triggered
and an email notification is sent to the designated personnel.
Note, however, that if there are Custom alarms for an attribute, these alarms are still evaluated as
Custom alarms have a higher priority than Global Custom alarms.
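The precedence rules above can be sketched as a small model (hypothetical Python, not StorageGRID code): the NMS service evaluates only the highest-priority class that has an enabled alarm for the attribute, and an enabled class "captures" the attribute even when none of its thresholds are met.

```python
SEVERITY_ORDER = ["Major", "Minor", "Notice"]  # highest severity first

def evaluate(value, alarm_classes):
    """alarm_classes: list of (name, enabled, thresholds) tuples in
    priority order (Custom > Global Custom > Default); thresholds maps
    a severity to its ">=" threshold. Returns (class_name, severity),
    where severity is "Normal" if no threshold in the class is met."""
    for name, enabled, thresholds in alarm_classes:
        if not enabled:
            continue  # disabled class: fall through to the next class
        for severity in SEVERITY_ORDER:
            limit = thresholds.get(severity)
            if limit is not None and value >= limit:
                return name, severity
        # an enabled class stops evaluation even at Normal:
        # lower-priority classes are never consulted
        return name, "Normal"
    return None, "Normal"
```

Running this against the threshold tables in the examples reproduces the outcomes described: a value of 1000 with an enabled Global Custom alarm evaluates to Normal, and the Default alarm's Notice threshold is reached only once the Global Custom alarm is disabled.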
Example 2
For the following example an attribute has a Custom alarm, a Global Custom alarm, and a Default
alarm defined and enabled as shown in the following table.
Threshold Values

Severity   Custom alarm (enabled)   Global Custom alarm (enabled)   Default alarm (enabled)
Notice     >= 500                   >= 1,500                        >= 1,000
Minor      >= 750                   >= 15,000                       >= 10,000
Major      >= 1,000                 >= 150,000                      >= 250,000
If the attribute is evaluated when its value is 1000, a Major alarm is triggered and an email
notification is sent to the designated personnel. The Custom alarm takes precedence over both the
Global Custom alarm and Default alarm. A value of 1000 reaches the threshold value of the Major
severity level for the Custom alarm. As a result, the attribute value triggers a Major level alarm.
Within the same scenario, if the Custom alarm is then disabled and the attribute value is evaluated
again at 1000, the alarm level changes to Normal. The attribute value is evaluated against the
threshold values of the Global Custom alarm, the next alarm class that is defined and enabled. A
value of 1000 does not reach any threshold level for this alarm class. As a result, the attribute value
is evaluated to be Normal and no notification is sent. The Major level alarm from the previous
evaluation is cleared.
Example 3
For the following example, an attribute has a Custom alarm, a Global Custom alarm, and a Default
alarm defined, and enabled or disabled, as shown in the following table.

Threshold Values

Severity   Custom alarm (disabled)   Global Custom alarm (enabled)   Default alarm (enabled)
Notice     >= 500                    >= 1,500                        >= 1,000
Minor      >= 750                    >= 15,000                       >= 10,000
Major      >= 1,000                  >= 150,000                      >= 250,000
If the attribute is evaluated when its value is 10,000, a Notice alarm is triggered and an email
notification is sent to the designated personnel.
The Custom alarm is defined, but disabled; therefore, the attribute value is evaluated against the next
alarm class. The Global Custom alarm is defined, enabled, and it takes precedence over the Default
alarm. The attribute value is evaluated against the threshold values set for the Global Custom alarm
class. A value of 10,000 reaches the Notice severity level for this alarm class. As a result, the attribute
value triggers a Notice level alarm.
If the Global Custom alarm is then disabled and the attribute value is evaluated again at 10,000, a
Minor level alarm is triggered. The attribute value is evaluated against the threshold values for the
Default alarm class, the only alarm class that is both defined and enabled.
A value of 10,000 reaches the threshold value for a Minor level alarm. As a result, the Notice level
alarm from the previous evaluation is cleared and the alarm level changes to Minor. An email
notification is sent to the designated personnel.
If the order is reversed, when UMEM drops to 100 MB, the first alarm (<= 100000000) is triggered,
but not the one below it (<= 50000000).
Severity changes
If an alarm’s severity changes, the severity is propagated up the network hierarchy as needed. If there
is a notification configured, a notification is sent. The notification is sent only at the time the alarm
enters or leaves the new severity level.
Notifications
A notification reports the occurrence of an alarm or the change of state for a service. It is an email
communication to designated personnel that the system requires attention.
To avoid multiple alarms and notifications being sent when an alarm threshold value is reached, the
alarm severity is checked against the current alarm severity for the attribute. If there is no change,
then no further action is taken. This means that as the NMS service continues to monitor the system,
it will only raise an alarm and send notifications the first time it notices an alarm condition for an
attribute. If a new value threshold for the attribute is reached and detected, the alarm severity changes
and a new notification is sent. Alarms are cleared when conditions return to the Normal level.
The trigger value shown in the notification of an alarm state is rounded to three decimal places.
Therefore, an attribute value of 1.9999 triggers an alarm whose threshold is less than (<) 2.0,
although the alarm notification shows the trigger value as 2.0.
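Both behaviors above, notifying only when the severity changes and rounding the displayed trigger value to three decimal places, can be illustrated with a short hypothetical model:

```python
def display_trigger(value):
    """Trigger values shown in notifications are rounded to three
    decimal places, so 1.9999 is displayed as 2.0 even though it
    satisfied a "< 2.0" threshold."""
    return round(value, 3)

class AttributeMonitor:
    """Illustrative model: a notification is produced only when the
    evaluated severity differs from the current severity recorded for
    the attribute; repeated evaluations at the same severity are
    silent."""
    def __init__(self):
        self.current = "Normal"

    def evaluate(self, severity):
        if severity == self.current:
            return None  # no change: no new notification
        self.current = severity
        return f"severity changed to {severity}"
```

The class and method names are invented for the sketch; the NMS service's internal interfaces are not documented here.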
New services
As new services are added through the addition of new grid nodes or sites, they inherit Default alarms
and Global Custom alarms.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
• Click Edit (if this is the first entry) or Insert to add a new alarm.
• Copy an alarm from the Default alarms or Global Custom alarms tables. Click Copy next
to the alarm you want to modify.
Heading Description
Enabled: Select or clear to enable or disable the alarm.
Attribute: Select the name and code of the attribute being monitored from the list of all attributes applicable to the selected service or component. To display information about the attribute, click Info next to the attribute's name.
Severity: The icon and text indicating the level of the alarm.
Message: The reason for the alarm (connection lost, storage space below 10%, and so on).
Operator: Operators for testing the current attribute value against the Value threshold:
• = equal to
Value: The alarm's threshold value used to test against the attribute's actual value using the operator. The entry can be a single number, a range of numbers specified with a colon (1:3), or a comma-separated list of numbers and ranges.
Additional Recipients: A supplementary list of email addresses to be notified when the alarm is triggered, in addition to the mailing lists configured on the Configuration > Notifications page. Lists are comma-separated.
Note: Mailing lists require SMTP server setup in order to operate. Before adding mailing lists, confirm that SMTP is configured.
Notifications for Custom alarms can override notifications from Global Custom or Default alarms.
Actions: Control buttons to:
Edit a row
Insert a row
Delete a row
Drag-and-drop a row up or down
Copy a row
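As a rough illustration of the Value syntax described above, here is a hypothetical helper (not StorageGRID code) that checks whether an attribute value falls within a threshold entry:

```python
def value_matches(spec, value):
    """Parse an alarm Value entry -- a single number, a colon range
    such as "1:3", or a comma-separated list of numbers and ranges --
    and report whether `value` falls within it."""
    for part in spec.split(","):
        part = part.strip()
        if ":" in part:
            low, high = (float(p) for p in part.split(":"))
            if low <= value <= high:
                return True
        elif float(part) == value:
            return True
    return False
```

So `value_matches("1:3,5", 2)` is true (inside the range), while `value_matches("1:3,5", 4)` is false (between the range and the listed number).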
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
• To add a new alarm, click Edit (if this is the first entry) or Insert.
d. In the list of results, click Copy next to the alarm you want to modify.
The Default alarm is copied to the Global Custom alarms table.
Heading Description
Enabled: Select or clear to enable or disable the alarm.
Attribute: Select the name and code of the attribute being monitored from the list of all attributes applicable to the selected service or component. To display information about the attribute, click Info next to the attribute's name.
Severity: The icon and text indicating the level of the alarm.
Message: The reason for the alarm (connection lost, storage space below 10%, and so on).
Operator: Operators for testing the current attribute value against the Value threshold:
• = equal to
Value: The alarm's threshold value used to test against the attribute's actual value using the operator. The entry can be a single number, a range of numbers specified with a colon (1:3), or a comma-separated list of numbers and ranges.
Additional Recipients: A supplementary list of email addresses to be notified when the alarm is triggered. This is in addition to the mailing lists configured on the Configuration > Notifications > Main page. Lists are comma-separated.
Note: Mailing lists require SMTP server setup in order to operate. Before adding mailing lists, confirm that SMTP is configured.
Notifications for Custom alarms can override notifications from Global Custom or Default alarms.
Actions: Control buttons to:
Edit a row
Insert a row
Delete a row
Drag-and-drop a row up or down
Copy a row
Disabling alarms
Alarms are enabled by default, but you can disable alarms that are not required.
Disabling an alarm for an attribute that currently has an alarm triggered does not clear the current
alarm. The alarm will be disabled the next time the attribute crosses the alarm threshold, or you can
clear the triggered alarm.
Attention: There are consequences to disabling alarms and extreme care should be taken.
Disabling an alarm can result in no alarm being triggered. Because alarms are evaluated by alarm
class and then severity level within the class, disabling an alarm at a higher class does not
necessarily result in a lower class alarm being evaluated. All alarms for a specific attribute must be
disabled before a lower alarm class will be evaluated.
Related tasks
Clearing triggered alarms on page 78
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
4. In the Default Alarms table, click Edit next to the alarm you want to disable.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
a. In the Default Alarms section, select Filter by > Attribute Code or Attribute Name.
Note: Selecting Disabled Defaults displays a list of all currently disabled Default alarms.
3. In the Default Alarms table, click the Edit icon next to the alarm you want to disable.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
you want to ensure that all these Custom alarms have the same configuration, you can create a Global
Custom alarm, disable it, and then enable it for selected services as a Custom alarm.
If you want to create a Global Custom alarm and disable it for selected services, you must create a
local Custom alarm for that service that will never be triggered. A local Custom alarm that is never
triggered overrides all Global Custom alarms for that service.
Note: Alarms cannot be disabled for individual rows in a table.
Steps
2. In the Global Custom alarm table, click Copy next to the alarm you want to disable.
The alarm is copied to the Custom Alarms table.
Related tasks
Creating custom service or component alarms on page 70
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. In the Global Custom Alarms table, click Edit next to the alarm you want to disable.
Steps
Related concepts
Disabling alarms on page 74
What AutoSupport is
AutoSupport enables technical support to proactively monitor the health of your StorageGRID
Webscale system.
You can use any combination of the following choices to send AutoSupport messages to technical
support:
• Weekly: Automatically send AutoSupport messages once per week (default setting: Enabled)
By analyzing AutoSupport information, technical support can help you determine the health and
status of your StorageGRID Webscale system and troubleshoot any problems that might occur.
Technical support can also monitor the storage needs of the system, such as the need to expand.
Related information
NetApp Support
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
3. Click Send.
• Events information as listed on the Nodes > Grid Node > Events page
• NMS entities
Related tasks
Configuring email server settings on page 57
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. Attempts to resend the AutoSupport message 15 times every four minutes for one hour.
3. After one hour of send failures, updates the Most Recent Result attribute to Failed.
5. Maintains the regular AutoSupport schedule if the message fails because the NMS service is
unavailable, and if a message is sent before seven days pass.
6. When the NMS service is available again, sends an AutoSupport message immediately if a
message has not been sent for seven days or more.
1. Displays an error message if the error is known. For example, an email server configuration error
could appear.
Check that the StorageGRID Webscale system’s email server is correctly configured and that your
email server is running.
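The retry schedule described in the steps above can be modeled as a short sketch (hypothetical helper names; the constants come directly from the steps, and 15 attempts at four-minute intervals account for the one-hour window):

```python
from datetime import datetime, timedelta

RETRY_ATTEMPTS = 15
RETRY_INTERVAL = timedelta(minutes=4)
WEEKLY_GAP = timedelta(days=7)

def next_autosupport_due(last_sent, now):
    """Sketch of the schedule: send immediately if no AutoSupport
    message has gone out for seven days or more; otherwise wait for
    the regular weekly slot."""
    if now - last_sent >= WEEKLY_GAP:
        return now
    return last_sent + WEEKLY_GAP
```

This is only a model of the documented behavior, not the service's actual scheduling code.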
Related tasks
Configuring email server settings on page 57
Suppressing email notifications system wide on page 62
Using reports
You can use reports to monitor the state of the StorageGRID Webscale system and troubleshoot
problems. The types of reports available in the Grid Manager include pie charts (on the Dashboard
only), graphs, and text reports.
Types of charts
In addition to the summary pie charts shown on the Dashboard, you can access more detailed graphs
that present the data with the attribute value (vertical axis) over a specified time span (horizontal
axis).
The Dashboard provides access to pie charts summarizing available storage as well as graphs for
ILM and protocol operations.
In addition, graphs are available from the Nodes page and the Grid Topology tree. There are three
types of graphs:
• Line graph: Used to plot the values of an attribute that has a unit value (such as NTP Frequency
Offset, in ppm). The changes in the value are plotted in regular data intervals (bins) over time.
• Area graph: Used to plot volumetric quantities, such as object counts or service load values. Area
graphs are similar to line graphs, but include a light brown shading below the line. The changes in
the value are plotted in regular data intervals (bins) over time.
• State graph: State graphs are used to plot values that represent distinct states such as a service
state that can be online, standby, or offline. State graphs are similar to line graphs, but the
transition is discontinuous, that is, the value jumps from one state value to another.
Chart legend
The lines and colors used to draw charts have specific meaning.
Sample Meaning
Reported attribute values are plotted using dark green lines.
Light green shading around dark green lines indicates that the
actual values in that time range vary and have been “binned” for
faster plotting. The dark line represents the weighted average. The
range in light green indicates the maximum and minimum values
within the bin. Light brown shading is used for area graphs to
indicate volumetric data.
Blank areas (no data plotted) indicate that the attribute values were
unavailable. The background can be blue, gray, or a mixture of
gray and blue, depending on the state of the service reporting the
attribute.
Sample Meaning
Light blue shading indicates that some or all of the attribute values
at that time were indeterminate; the attribute was not reporting
values because the service was in an unknown state.
Displaying charts
The Nodes page contains the charts you should access regularly to monitor attributes such as storage
capacity and throughput. In some cases, especially when working with technical support, you might
use the Grid Topology tree to access additional charts.
Steps
2. Select the appropriate tab, or select Chart to the right of an attribute to display a chart.
For some attributes, the chart appears when you select the tab. For other attributes, you select the
Chart icon to display the chart.
3. To display additional attributes and charts, select Support > Grid Topology.
4. Select grid node > component or service > Overview > Main.
Generating charts
Charts display a graphical representation of attribute data values. You can report on a data center site,
grid node, component, or service.
• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.
Steps
2. Select grid node > component or service > Reports > Charts.
4. To force the Y-axis to start at zero, deselect the Vertical Scaling check box.
5. To show values at full precision, select the Raw Data check box, or to round values to a maximum
of three decimal places (for example, for attributes reported as percentages), deselect the Raw
Data check box.
6. Select the time period to report on from the Quick Query drop-down list.
7. If you selected Custom Query, customize the time period for the chart by entering the Start Date
and End Date.
Use the format YYYY/MM/DD HH:MM:SS in local time. Leading zeros are required to match the
format. For example, 2017/4/6 7:30:00 fails validation. The correct format is: 2017/04/06
07:30:00.
8. Click Update.
A chart is generated after a few moments. Allow several minutes for tabulation of long time
ranges.
9. If you want to print the chart, right-click and select Print, and modify any necessary printer
settings and click Print.
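The timestamp rule in step 7 can be checked programmatically. This hypothetical validator enforces the zero-padding requirement explicitly (Python's strptime alone would accept unpadded values such as "2017/4/6") before parsing:

```python
import re
from datetime import datetime

TIME_FORMAT = re.compile(r"^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}$")

def parse_query_time(text):
    """Validate a Custom Query timestamp in the required
    YYYY/MM/DD HH:MM:SS local-time format, rejecting entries without
    leading zeros, then parse it into a datetime."""
    if not TIME_FORMAT.match(text):
        raise ValueError(f"not in YYYY/MM/DD HH:MM:SS format: {text!r}")
    return datetime.strptime(text, "%Y/%m/%d %H:%M:%S")
```

As in the example from the text, "2017/4/6 7:30:00" is rejected while "2017/04/06 07:30:00" parses.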
• Time Received: Local date and time that a sample value of an attribute’s data was processed by
the NMS service.
• Sample Time: Local date and time that an attribute value was sampled or changed at the source.
• Aggregate Time: Last local date and time that the NMS service aggregated (collected) a set of
changed attribute values.
• Average Value: The average of the attribute’s value over the aggregated time period.
• Minimum Value: The minimum value over the aggregated time period.
• Maximum Value: The maximum value over the aggregated time period.
• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.
Steps
2. Select grid node > component or service > Reports > Text.
4. Select the number of results per page from the Results per Page drop-down list.
5. To round values to a maximum of three decimal places (for example, for attributes reported as
percentages), deselect the Raw Data check box.
6. Select the time period to report on from the Quick Query drop-down list.
Select the Custom Query option to select a specific time range.
The report appears after a few moments. Allow several minutes for tabulation of long time ranges.
7. If you selected Custom Query, you need to customize the time period to report on by entering the
Start Date and End Date.
Use the format YYYY/MM/DD HH:MM:SS in local time. Leading zeros are required to match the
format. For example, 2017/4/6 7:30:00 fails validation. The correct format is: 2017/04/06
07:30:00.
8. Click Update.
A text report is generated after a few moments. Allow several minutes for tabulation of long time
ranges. Depending on the length of time set for the query, either a raw text report or aggregate
text report is displayed.
9. If you want to print the report, right-click and select Print, and modify any necessary printer
settings and click Print.
Steps
3. Click Export .
4. Select and copy the contents of the Export Text Report window.
This data can now be pasted into a third-party document such as a spreadsheet.
• Enterprise use case: If you are administering a StorageGRID Webscale system in an enterprise
application, you might want to segregate the grid's object storage by the different departments in
your organization. In this case, you could create tenant accounts for the Marketing department,
the Customer Support department, the Human Resources department, and so on.
Note: If you use the S3 client protocol, you can simply use S3 buckets and bucket policies to
segregate objects between the departments in an enterprise. You do not need to use tenant
accounts. See the instructions for implementing S3 client applications for more information.
• Service provider use case: If you are administering a StorageGRID Webscale system as a
service provider, you can segregate the grid's object storage by the different entities that will lease
the storage on your grid. In this case, you would create tenant accounts for Company A, Company
B, Company C, and so on.
• Display name for the tenant account (the tenant's account ID is assigned automatically and cannot
be changed)
• Which client protocol will be used by the tenant account (S3 or Swift)
• Whether the tenant account will use its own identity source or share the grid's identity source
• For S3 tenant accounts: Whether the tenant account has permission to use platform services with
S3 buckets. If you permit tenant accounts to use platform services, you must ensure that the grid
is configured to support their use. See “Managing platform services” for more information.
• Optionally, a storage quota for the tenant account—the maximum number of gigabytes, terabytes,
or petabytes available for the tenant's objects. A tenant's storage quota represents a logical amount
(object size), not a physical amount (size on disk).
Configuring S3 tenants
After an S3 tenant account is created, tenant users can access the Tenant Manager to perform tasks
such as the following:
Creating and managing tenant accounts | 93
• Setting up identity federation (unless the identity source is shared with the grid), or creating local
groups and users
Attention: S3 tenant users can create and manage S3 buckets with the Tenant Manager, but they
must have S3 access keys and use the S3 REST API to ingest and manage objects.
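In practice, this means each tenant user who ingests objects needs an S3 access key pair, and the S3 client signs every request with it. As an illustration of what an S3 client library does with those keys under the hood, the following sketch derives an AWS Signature Version 4 signing key — this is the standard SigV4 derivation chain, not anything specific to StorageGRID Webscale, and the secret shown in any usage is a placeholder:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date_yyyymmdd: str, region: str,
                      service: str = "s3") -> bytes:
    """Derive the AWS Signature Version 4 signing key from a tenant's
    S3 secret access key (standard SigV4 derivation chain)."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_yyyymmdd)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")
```

The resulting key is then used to HMAC the string-to-sign for each request; client libraries handle all of this automatically once the access key pair is configured.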
Configuring Swift tenants
After a Swift tenant account is created, tenant users can access the Tenant Manager to perform tasks
such as the following:
• Setting up identity federation (unless the identity source is shared with the grid), or creating local
groups and users
Attention: The tenant's root user can sign in to the Tenant Manager, but does not have permission
to use the Swift REST API. To authenticate to the Swift REST API and create containers and ingest
objects, a user must belong to a group with the Administrator permission. Conversely, administrator
users cannot sign in to the Tenant Manager.
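As a sketch of the first step such a Swift user performs — exchanging credentials for an auth token — the following builds a v1-style authentication request. The gateway URL is a placeholder, and the exact `account:user` credential format is an assumption to confirm against your deployment; on success, the response carries `X-Auth-Token` and `X-Storage-Url` headers used for subsequent container and object requests:

```python
import urllib.request

def swift_auth_request(gateway_url: str, user: str, key: str) -> urllib.request.Request:
    """Build a Swift v1 authentication request (GET /auth/v1.0).

    The credentials are sent as headers; the account:user format shown
    in the example below is an assumption for illustration.
    """
    req = urllib.request.Request(gateway_url.rstrip("/") + "/auth/v1.0")
    req.add_header("X-Auth-User", user)  # e.g. "accountid:swiftadmin"
    req.add_header("X-Auth-Key", key)
    return req
```

The request would then be sent with `urllib.request.urlopen(req)` (omitted here, since it requires a live endpoint).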
Steps
1. Creating a tenant account on page 93
2. Changing the password for a tenant account's root user on page 96
3. Editing a tenant account on page 97
4. Deleting tenant accounts on page 99
5. Managing platform services on page 100
Related tasks
Managing platform services on page 100
Related information
Implementing S3 client applications
Implementing Swift client applications
Using tenant accounts
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
94 | StorageGRID Webscale 11.1 Administrator Guide
Steps
1. Select Tenants.
The Tenant Accounts page appears.
2. Click Create.
Step 1 - Create Tenant Account appears.
3. In the Display Name text box, enter the display name for this tenant account.
When the tenant account is created, it receives a unique, numeric Account ID; for this reason,
display names are not required to be unique.
4. Select the client protocol that will be used by this tenant account, either S3 or Swift.
5. Uncheck the Uses Own Identity Source checkbox if this tenant account will use the identity
source that was configured for the Grid Manager. See “Configuring identity federation” for more
information.
If this checkbox is selected (default), you must configure a unique identity source for this tenant if
you want to use identity federation for tenant groups and users. See the instructions for using
tenant accounts.
6. Uncheck the Allow Platform Services checkbox if you do not want this tenant to use platform
services for S3 buckets.
If platform services are enabled, a tenant can use features, such as CloudMirror replication, that
access external services. You might want to disable the use of these features to limit the amount
of network bandwidth or other resources a tenant consumes. See “Managing platform services”
for more information.
7. Optionally, enter the maximum number of gigabytes, terabytes, or petabytes that you want to
make available for this tenant's objects in the Storage Quota text box. Then, select the units from
the drop-down list.
Leave this field blank if you want this tenant to have an unlimited quota.
Note: A tenant's storage quota represents a logical amount (object size), not a physical amount
(size on disk). ILM copies and erasure coding do not contribute to the amount of quota used. If
the quota is exceeded, the tenant account cannot create new objects.
Note: You can monitor tenant storage usage from the Dashboard in the Tenant Manager or with
the Tenant Management API. Note that a tenant's storage usage values might become out of
date if nodes are isolated from other nodes in the grid. The totals will be updated when network
connectivity is restored.
8. In the Tenant Root User Password section, enter a password for the tenant account's root user.
9. Click Save.
The tenant account is created, and Step 2 - Configure Tenant Account appears.
10. Decide whether to configure the tenant account now or later, as follows:
• If you are ready to configure the new tenant account, go to step 11.
• If you or someone else will configure the tenant account later, go to step 13.
12. Select one or more of the following links to configure the tenant account in the Tenant Manager.
a. If you created an S3 tenant and you want to create and manage the S3 buckets for this
account, select Buckets.
The Buckets page for the Tenant Manager opens on a new tab. To complete this page, see the
instructions for using tenant accounts.
b. If you want to set up an identity source for the tenant, select Identity Federation.
Note: This link appears only if you left the Uses Own Identity Source checkbox selected
in step 5.
The Identity Federation page for the Tenant Manager opens on a new tab. To complete this
page, see the instructions for using tenant accounts.
c. If you want to configure the groups who can access the tenant, select Groups.
The Groups page for the Tenant Manager opens on a new tab. To complete this page, see the
instructions for using tenant accounts.
d. If you want to configure local users who can access the tenant, select Users.
If you are using federated groups, you do not need to configure users.
The Users page for the Tenant Manager opens on a new tab. To complete this page, see the
instructions for using tenant accounts.
Related concepts
Controlling system access with administration user accounts and groups on page 249
Related tasks
Configuring identity federation on page 249
Managing platform services on page 100
Related information
Using tenant accounts
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
1. Select Tenants.
The Tenant Accounts page appears.
5. Click Save.
Related concepts
Controlling system access with administration user accounts and groups on page 249
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
1. Select Tenants.
The Tenant Accounts page appears.
b. Change the setting of the Uses Own Identity Source checkbox to determine whether the
tenant account will use its own identity source or the identity source that was configured for
the Grid Manager.
If the tenant has already enabled its own identity source, you cannot unselect the Uses Own
Identity Source checkbox. A tenant must disable its identity source before it can use the
identity source that was configured for the Grid Manager.
c. Change the setting of the Allow Platform Services checkbox to determine whether the tenant
account can use platform services for their S3 buckets.
Attention: If you disable platform services for a tenant who is already using them, the
services that they have configured for their S3 buckets will stop working. No error message
is sent to the tenant. For example, if the tenant has configured CloudMirror replication for
an S3 bucket, they can still store objects in the bucket, but copies of those objects will no
longer be made in the external S3 bucket that they have configured as an endpoint.
d. For Storage Quota, change the maximum number of gigabytes, terabytes, or petabytes
available for this tenant's objects, or leave the field blank if you want this tenant to have an
unlimited quota.
A tenant's storage quota represents a logical amount (object size), not a physical amount (size
on disk).
Note: You can monitor tenant storage usage from the Dashboard in the Tenant Manager or
with the Tenant Management API. Note that a tenant's storage usage values might become
out of date if nodes are isolated from other nodes in the grid. The totals will be updated
when network connectivity is restored.
5. Click Save.
Related concepts
Controlling system access with administration user accounts and groups on page 249
Related tasks
Managing platform services on page 100
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• You must have removed all buckets (S3), containers (Swift), and objects associated with the
tenant account.
Steps
1. Select Tenants.
3. Click Remove.
4. Click OK.
Related concepts
Controlling system access with administration user accounts and groups on page 249
• Notifications: Per-bucket event notifications send messages about specific actions performed on
objects to a specified external Simple Notification Service (SNS) endpoint.
• Search integration service: The search integration service sends S3 object metadata to a
specified Elasticsearch index, where the metadata can be searched or analyzed using the external
service.
Platform services give tenants the ability to use external storage resources, notification services, and
search or analysis services with their data. Because the target location for platform services is
typically external to your StorageGRID Webscale deployment, you must decide if you want to permit
tenants to use these services. If you do, you must enable the use of platform services when you create
or edit tenant accounts. You must also configure your network so that the platform services
messages that tenants generate can reach their destinations.
• NetApp recommends that you use no more than 100 active tenants with S3 requests requiring
CloudMirror replication, notifications, and search integration. Having more than 100 active
tenants can result in slower S3 client performance.
Related information
Using tenant accounts
• A local application that supports receiving Simple Notification Service (SNS) messages
To ensure that platform services messages can be delivered, you must configure the network or
networks containing the ADC Storage Nodes. You must ensure that the following ports can be used
to send platform services messages to the destination endpoints.
By default, platform services messages are sent on the following ports:
Tenants can specify a different port when they create or edit an endpoint.
Note: If a StorageGRID Webscale deployment is used as the destination for CloudMirror
replication, replication messages are received by an API Gateway Node on port 8082. Ensure that
this port is accessible through your enterprise network.
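Before enabling platform services for tenants, you can confirm that a destination endpoint's port — for example, port 8082 on an API Gateway Node used as a CloudMirror target — is reachable from your network. A minimal sketch using a plain TCP connection test; host names are placeholders:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Check whether a destination endpoint accepts TCP connections on
    the given port, e.g. port 8082 on an API Gateway Node that serves
    as a CloudMirror replication target."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_reachable("gateway.example.com", 8082)` returns `True` only when the port accepts connections; this does not verify TLS or application-level behavior, only basic network reachability.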
If you use a non-transparent proxy server, you must also configure platform services settings to allow
messages to be sent to external endpoints, such as an endpoint on the internet.
Related information
Using tenant accounts
• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.
Steps
5. Click Save.
After the proxy is saved, tenants can configure and test their endpoints.
Note: Proxy changes can take up to 10 minutes to take effect.
If the client subsequently performs an S3 API Delete operation on that same object from Data Center
Site 2, the notification about the delete action is triggered and sent from Data Center Site 2.
Make sure that the networking at each site is configured such that platform services messages can be
delivered to their destinations.
If StorageGRID Webscale encounters a recoverable error, the platform request will be retried until it
succeeds.
Other errors are unrecoverable. For example, an unrecoverable error occurs if the endpoint is deleted.
If StorageGRID Webscale encounters an unrecoverable endpoint error, the Total Events (SMTT)
alarm is triggered in the Grid Manager. To view the Total Events alarm:
1. Select Nodes.
4. Follow the guidance provided in the SMTT alarm contents to correct the issue.
6. Notify the tenant of the objects whose platform services messages have not been delivered.
7. Instruct the tenant to trigger the failed replication or notification by updating the object's metadata
or tags.
The tenant can resubmit the existing values to avoid making unwanted changes.
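One common way to "update the object's metadata" without changing the object is an S3 CopyObject of the object onto itself with the metadata directive set to REPLACE, resubmitting the existing metadata values. The sketch below builds the parameters in the shape accepted by boto3's `copy_object`; the bucket and key names in any usage are placeholders, and the tenant should confirm this approach is appropriate for their client:

```python
def retrigger_copy_request(bucket: str, key: str, existing_metadata: dict) -> dict:
    """Parameters for an S3 CopyObject that copies an object onto itself.

    Resubmitting the object's existing metadata leaves the object
    unchanged while the update re-triggers pending CloudMirror
    replication or event notifications. Shaped for boto3's
    copy_object; adjust for other S3 clients.
    """
    return {
        "Bucket": bucket,
        "Key": key,
        "CopySource": {"Bucket": bucket, "Key": key},
        "Metadata": dict(existing_metadata),  # resubmit the existing values
        "MetadataDirective": "REPLACE",       # required for an in-place rewrite
    }
```

A tenant would pass the result to `s3_client.copy_object(**params)` for each affected object.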
Throttling only occurs when there is a backlog of requests waiting to be sent to the destination
endpoint.
The only visible effect is that the incoming S3 requests will take longer to execute. If you start to
detect significantly slower performance, you should reduce the ingest rate or use an endpoint with
higher capacity. If the backlog of requests continues to grow, client S3 operations (such as PUT
requests) will eventually fail.
CloudMirror requests are more likely to be affected by the performance of the destination endpoint
because these requests typically involve more data transfer than search integration or event
notification requests.
1. When an object is ingested, one copy is placed on disk (Storage Node) at Data Center 1 (DC1),
one copy is placed on disk at Data Center 2 (DC2), and one copy is placed on archival media
(Archive Node) at DC2.
Related tasks
Managing S3 buckets and objects for compliance on page 162
Related information
Implementing S3 client applications
Implementing Swift client applications
• Where an object’s data is stored and the type of storage used (storage grades and storage pools)
• How the object’s data is managed over time, where it is stored, and how it is protected from loss
(placement instructions)
Object metadata is not managed by ILM rules. Instead, object metadata is stored in a Cassandra
database in what is known as a metadata store. Three copies of object metadata are automatically
maintained at each site to protect the data from loss. The copies are load balanced across all Storage
Nodes.
Managing objects through information lifecycle management | 109
What replication is
Replication is one of two mechanisms used by StorageGRID Webscale to store object data. When
StorageGRID Webscale matches objects to an ILM rule that is configured to create replicated copies,
the system creates exact copies of object data and stores the copies on Storage Nodes or Archive
Nodes.
When you configure an ILM rule to create replicated copies, you specify how many copies should be
created, where those copies should be placed, and how long the copies should be stored at each
location.
In the following example, the ILM rule specifies that two replicated copies of each object be placed
in a storage pool that contains three Storage Nodes.
When StorageGRID Webscale matches objects to this rule, it creates two copies of the object, placing
each copy on a different Storage Node in the storage pool. The two copies might be placed on any
two of the three available Storage Nodes. In this case, the rule placed object copies on Storage Nodes
2 and 3. Because there are two copies, the object can be retrieved if any of the nodes in the storage
pool fails.
Related concepts
Using multiple storage pools for cross-site replication on page 116
In the following example of a 6+3 erasure coding scheme, an object is split into six data fragments, and
three parity fragments are computed from the object data. Each of the nine fragments is stored on a
different node across multiple sites to provide data protection for node failures or site loss.
In the example, the object can be retrieved using any six of the nine fragments. Up to three fragments
can be lost without loss of the object data. If an entire data center site is lost, the object can still be
retrieved or repaired, as long as all of the other fragments remain accessible.
Related concepts
What erasure coding schemes are on page 110
Related tasks
Configuring an Erasure Coding profile on page 119
The following example shows the 6+3 erasure coding scheme, which splits each object into six data
fragments and adds three parity fragments. This erasure coding scheme requires a minimum of nine
Storage Nodes, with three Storage Nodes at each of three different sites. An object can be retrieved as
long as any six (k) of the nine fragments (data or parity) remain available.
If more than three (m) Storage Nodes are lost, the object is not retrievable.
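The retrievability rule above reduces to a one-line check: an object encoded with k data fragments and m parity fragments survives as long as at least k of the k+m fragments remain. A minimal sketch:

```python
def ec_object_retrievable(k: int, m: int, lost_fragments: int) -> bool:
    """With a k+m erasure coding scheme (k data + m parity fragments),
    an object can be retrieved as long as at least k of the k+m
    fragments remain available."""
    return (k + m) - lost_fragments >= k

# 6+3 scheme: up to three fragments can be lost, but not four
assert ec_object_retrievable(6, 3, 3)
assert not ec_object_retrievable(6, 3, 4)
```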
• Reliability: Reliability is gauged in terms of fault tolerance—that is, the number of simultaneous
failures that can be sustained without loss of data. With replication, multiple identical copies are
stored on different nodes and across sites. With erasure coding, an object is encoded into data and
parity fragments and distributed across many nodes and sites. This dispersal provides both site
and node failure protection. When compared to replication, erasure coding provides improved
reliability at comparable storage costs.
• Availability: Availability can be defined as the ability to retrieve objects if Storage Nodes fail or
become inaccessible. When compared to replication, erasure coding provides increased
availability at comparable storage costs.
• Storage efficiency: For similar levels of availability and reliability, objects protected through
erasure coding consume less disk space than the same objects would if protected through
replication. For example, a 10 MB object that is replicated to two sites consumes 20 MB of disk
space (two copies), while an object that is erasure coded across three sites with a 6+3 erasure
coding scheme only consumes 15 MB of disk space.
Note: Disk space for erasure coded objects is calculated as the object size plus the storage
overhead. The storage overhead percentage is the number of parity fragments divided by the
number of data fragments.
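The storage-efficiency comparison can be verified with the overhead formula from the note. A small sketch; the 10 MB figures reproduce the example in this section:

```python
def ec_disk_space(object_mb: float, k: int, m: int) -> float:
    """Disk space for an erasure-coded object: object size plus storage
    overhead, where overhead = parity fragments / data fragments."""
    return object_mb * (1 + m / k)

def replicated_disk_space(object_mb: float, copies: int) -> float:
    """Disk space for a replicated object: one full copy per replica."""
    return object_mb * copies
```

For a 10 MB object, `replicated_disk_space(10, 2)` gives 20 MB (two copies), while `ec_disk_space(10, 6, 3)` gives 15 MB (50% overhead with the 6+3 scheme).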
• Increased number of Storage Nodes and sites required. For example, if you use an erasure coding
scheme of 6+3, you must have at least three Storage Nodes at three different sites. In contrast, if
you simply replicate object data, you require only one Storage Node for each copy.
• Increased retrieval latencies when you use erasure coding across geographically distributed sites.
The object fragments for an object that is erasure coded and distributed across remote sites take
longer to retrieve over WAN connections than an object that is replicated and available locally
(the same site to which the client connects).
• When you use erasure coding across geographically distributed sites, higher WAN network traffic
usage for retrievals and repairs, especially for frequently retrieved objects or object repairs over
WAN network connections.
• Storage efficiency.
• Single-site deployments that require efficient data protection with only a single erasure-coded
copy rather than multiple replicated copies.
• Consider what types of object copies you want to make (replicated or erasure coded) and the
number of copies of each object that are required.
• Determine what types of object metadata are used in the applications that connect to the
StorageGRID Webscale system. ILM rules filter objects based on their metadata.
Steps
1. Creating and assigning storage grades on page 113
2. Configuring storage pools on page 115
3. Configuring an Erasure Coding profile on page 119
4. Configuring regions (optional and S3 only) on page 122
5. Creating an ILM rule on page 123
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Assign each storage grade to two or more nodes. A storage grade assigned to only one node can
cause ILM backlogs if that node becomes unavailable.
Note: You cannot configure storage grades for Archive Nodes.
Steps
a. For each storage grade you need to define, click Insert to add a row and enter a label for
the storage grade.
The Default storage grade cannot be modified. It is reserved for new LDR services added
during a StorageGRID Webscale system expansion.
b. To edit an existing storage grade, click Edit and modify the label as required.
Note: You cannot delete storage grades.
a. For each Storage Node's LDR service, click Edit and select a storage grade from the list.
Attention: Assign a storage grade to a given Storage Node only once. A Storage Node
recovered from failure maintains the previously assigned storage grade. Do not change this
assignment once the ILM policy is activated. If the assignment is changed, data is stored
based on the new storage grade.
• Storage grade: For Storage Nodes, the relative performance of backing storage.
Storage pools are used in ILM rules to determine where object data is stored. When you configure
ILM rules for replication, you select one or more storage pools that include either Storage Nodes or
Archive Nodes. When you create Erasure Coding profiles, you select a storage pool that includes
Storage Nodes.
Steps
1. Guidelines for creating storage pools on page 115
2. Using multiple storage pools for cross-site replication on page 116
3. Configuring storage pools on page 118
• Create storage pools with as many nodes as possible. Each storage pool should contain two or
more nodes. A storage pool with insufficient nodes can cause ILM backlogs if that node becomes
unavailable.
• Avoid creating or using storage pools that overlap (contain one or more of the same nodes). If
storage pools overlap, more than one copy of object data might be saved on the same node.
• When using a storage pool that includes Archive Nodes, you should also maintain at least one
replicated or erasure-coded copy on a storage pool that includes Storage Nodes.
• If the global Compliance setting is enabled and you are creating a compliant ILM rule, you
cannot use a storage pool that includes Archive Nodes. See “Managing S3 buckets and objects for
compliance.”
• If an Archive Node's Target Type is Cloud Tiering - Simple Storage Service (S3), the Archive
Node must be in its own storage pool. See “Configuring connection settings for S3 API” for
information.
• Note that the number of Storage Nodes and sites contained in the storage pool determine which
erasure coding schemes are available.
• If possible, a storage pool should include more than the minimum number of Storage Nodes
required for the erasure coding scheme you select. For example, if you use a 6+3 erasure coding
scheme, you must have at least nine Storage Nodes. However, having one additional Storage
Node per site is recommended.
• Distribute Storage Nodes across sites as evenly as possible. For example, to support a 4+2 erasure
coding scheme, configure a storage pool that includes at least three Storage Nodes at each of three
sites.
Related concepts
What replication is on page 109
What erasure coding is on page 109
What erasure coding schemes are on page 110
Using multiple storage pools for cross-site replication on page 116
Related tasks
Managing S3 buckets and objects for compliance on page 162
Configuring connection settings for S3 API on page 205
disk usage among the storage pools balanced, while ensuring that the two copies are stored at
different sites.
The following example illustrates what can happen if an ILM rule places replicated object copies in a
single storage pool containing Storage Nodes from two sites. Because the system uses any available
nodes in the storage pool when it places the replicated copies, it might place all copies of some
objects within only one of the sites. In this example, the system stored two copies of object AAA on
Storage Nodes at Site 1, and two copies of object CCC on Storage Nodes at Site 2. Only object BBB
is protected if one of the sites fails or becomes inaccessible.
In contrast, this example illustrates how objects are stored when you use multiple storage pools. In
the example, the ILM rule specifies that two replicated copies of each object be created, and that the
copies be distributed to two storage pools. Each storage pool contains all Storage Nodes at one site.
Because a copy of each object is stored at each site, object data is protected from site failure or
inaccessibility.
When using multiple storage pools, keep the following rules in mind:
• If you are creating n copies, you must add n or more storage pools. For example, if a rule is
configured to make three copies, you must specify three or more storage pools.
• If the number of copies equals the number of storage pools, one copy of the object is stored in
each storage pool.
• If the number of copies is less than the number of storage pools, the system distributes the copies
to keep disk usage among the pools balanced and to ensure that two or more copies are not stored
in the same storage pool.
• If the storage pools overlap (contain the same Storage Nodes), all copies of the object might be
saved at only one site. You must ensure that the selected storage pools do not contain the same
Storage Nodes.
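The rules above lend themselves to a quick sanity check before you save an ILM rule. The following sketch validates a proposed pool layout against the copy count; the pool and node names in any usage are hypothetical:

```python
def check_replication_pools(copies: int, pools: dict) -> list:
    """Validate the storage-pool rules for replicated copies: at least as
    many pools as copies, and no two pools sharing a Storage Node.
    `pools` maps pool name -> set of Storage Node names.
    Returns a list of problem descriptions (empty means OK)."""
    problems = []
    if len(pools) < copies:
        problems.append(f"{copies} copies require at least {copies} storage pools")
    names = list(pools)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if pools[a] & pools[b]:  # overlapping pools risk co-located copies
                problems.append(f"pools {a!r} and {b!r} overlap")
    return problems
```

For example, two copies across `{"site1-pool": {"SN1", "SN2"}, "site2-pool": {"SN3", "SN4"}}` passes, while reusing SN1 in both pools is flagged.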
Steps
a. Click Insert at the end of the row for the last storage pool.
c. From the Storage Grade drop-down list, set the type of storage to which object data will be
copied if an ILM rule uses this storage pool.
The values All Disks and Archive Nodes are system-generated.
d. From the Site drop-down list, set the site to which object data will be copied if an ILM rule
uses this storage pool.
The value All Sites is system-generated.
When you select a Site, the number of grid nodes and storage capacity information (Installed,
Used, and Available) are automatically updated. Make sure that storage pools have sufficient
storage and Storage Nodes to support planned ILM rules and the types of copies that will be
made.
e. To add another storage grade/site combination to the storage pool, click Insert next to
Site.
You cannot include LDR and ARC services in the same storage pool; a storage pool contains
either disks or archive media, but not both.
3. To delete a storage pool, click Delete next to the storage pool name. You cannot delete a
storage pool that is used in a saved ILM rule.
Related concepts
Guidelines for creating storage pools on page 115
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• You must have created a storage pool that includes exactly one site or a storage pool that includes
three or more sites.
After you save the Erasure Coding profile, you can change its name, but you cannot select a different
storage pool or erasure coding scheme. You also cannot delete the profile.
Steps
2. Click Create.
The Create EC Profile dialog box appears. By default, the Storage Pool field shows All Storage
Nodes and lists any available erasure coding schemes, based on the total number of Storage
Nodes and sites available in your StorageGRID Webscale system.
• Erasure Code: The name of the erasure coding scheme in the following format: data
fragments + parity fragments.
• Storage Overhead (%): The additional storage required for parity fragments relative to the
object's data size. Storage Overhead = Total number of parity fragments / Total number of data
fragments.
• Storage Node Redundancy: The number of Storage Nodes that can be lost while still
maintaining the ability to retrieve object data.
• Site Redundancy: Whether the selected erasure code allows the object data to be retrieved if
a site is lost.
To support site redundancy, the selected storage pool must include multiple sites, each with
enough Storage Nodes to allow any site to be lost. For example, to support site redundancy
using a 6+3 erasure coding scheme, the selected storage pool must include at least three sites
with at least three Storage Nodes at each site.
• The storage pool you selected does not provide site redundancy. The following message is
expected when the selected storage pool includes only one site. You can use this Erasure
Coding profile in ILM rules to protect against node failures.
• The storage pool you selected does not satisfy the requirements for any erasure coding
scheme. For example, the following message is expected when the selected storage pool
includes only two sites. If you want to use erasure coding to protect object data, you must
select a storage pool with exactly one site or a storage pool with three or more sites.
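The site-redundancy requirement can be expressed with the same fragment arithmetic used for retrievability: after removing any one site's fragments, at least k must remain. A sketch, with site names as placeholders:

```python
def site_redundant(k: int, fragments_per_site: dict) -> bool:
    """An erasure-coded object keeps site redundancy if, after losing
    any single site, at least k fragments remain at the surviving
    sites. `fragments_per_site` maps site name -> fragment count."""
    total = sum(fragments_per_site.values())
    return all(total - n >= k for n in fragments_per_site.values())
```

For a 6+3 scheme spread evenly across three sites, `site_redundant(6, {"A": 3, "B": 3, "C": 3})` is `True` (losing any site leaves exactly six fragments), while a single-site pool such as `{"A": 9}` is not site redundant, matching the warning message described above.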
5. If more than one erasure coding scheme is listed, select the one you want to use.
When deciding which erasure coding scheme to use, balance fault tolerance (achieved by having
more parity fragments) against the network traffic required for repairs (more fragments means
more network traffic). For example, when deciding between a 4+2 scheme and a 6+3 scheme,
select the 6+3 scheme if you require additional parity and fault tolerance. Select the 4+2 scheme
if network resources are constrained and you want to reduce network usage during node repairs.
6. Click Save.
Related concepts
What erasure coding schemes are on page 110
Configuring storage pools on page 115
What erasure coding is on page 109
Related tasks
Creating an ILM rule on page 123
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• You must create the regions using the Grid Manager before you can specify a non-default region
when creating buckets using the Tenant Manager or Tenant Management API or with the
LocationConstraint request element for S3 PUT Bucket API requests. An error occurs if a PUT
Bucket request uses a region that has not been defined in StorageGRID Webscale.
• You must use the exact region name when you create the S3 bucket. Region names are case
sensitive and must contain between 2 and 32 characters. Valid characters are numbers, letters, and
hyphens.
Note: EU is not considered to be an alias for eu-west-1. If you want to use the EU or eu-west-1
region, you must use the exact name.
• You cannot delete or modify a region if it is currently used within the active ILM policy or the
proposed ILM policy.
• If the region used as the advanced filter in an ILM rule is invalid, it is still possible to add that rule
to the proposed policy. However, an error occurs if you attempt to save or activate the proposed
policy. (An invalid region can result if you use a region as an advanced filter in an ILM rule but
you later delete that region, or if you use the Grid Management API to create a rule and specify a
region that you have not defined.)
• If you delete a region after using it to create an S3 bucket, you will need to re-add the region if
you ever want to use the Location Constraint advanced filter to find objects in that bucket.
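The naming rules above are easy to validate before you attempt a PUT Bucket request. A sketch of the check; the regular expression encodes only the rules stated here (2–32 characters; letters, numbers, and hyphens; case sensitive):

```python
import re

# 2-32 characters; letters, numbers, and hyphens only (case-sensitive)
_REGION_RE = re.compile(r"[A-Za-z0-9-]{2,32}")

def valid_region_name(name: str) -> bool:
    """Check a region name against the stated rules. Matching is
    case-sensitive: 'EU' and 'eu-west-1' are distinct regions."""
    return bool(_REGION_RE.fullmatch(name))
```

For example, `valid_region_name("eu-west-1")` is `True`, while a one-character name or a name containing underscores is rejected.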
Steps
The Regions page appears, with the currently defined regions listed. Region 1 shows the default
region, us-east-1, which cannot be modified or removed.
2. To add a region:
Related concepts
Using advanced filters in ILM rules on page 128
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• If you plan to use last access time metadata, Last Access Time updates must be enabled by bucket
for S3 or by container for Swift.
• If you are creating erasure-coded copies, you have configured an Erasure Coding profile.
Steps
Note: If the global Compliance setting has been enabled for the StorageGRID Webscale
system, the ILM Rules page indicates which ILM rules are compliant. The summary table
includes a Compliant column, and the details for the selected rule include a Compliance
Compatible field. See “Managing S3 buckets and objects for compliance” for more
information.
2. Click Create.
Step 1 of the Create ILM Rule wizard appears.
b. Optionally, enter a short description for the rule in the Description field.
You should describe the rule's purpose or function so you can recognize the rule later.
c. From the Tenant Account drop-down list, optionally select the S3 or Swift tenant account to
which this rule applies. If this rule applies to all tenants, select Ignore (default).
d. Use the Bucket Name field to specify the S3 buckets or Swift containers to which this rule
applies.
If matches all is selected (default), the rule applies to all S3 buckets or Swift containers.
4. Click Next.
Step 2 of the wizard appears.
5. For Reference Time, select the time used to calculate the start time for a placement instruction.
Option Description
Ingest Time The time when the object was ingested.
Last Access Time The time when the object was last retrieved (read or viewed).
Note: To use this option, updates to Last Access Time must be
enabled for the S3 bucket or Swift container.
Noncurrent Time The time an object version became noncurrent because a new version was
ingested and replaced it as the current version.
Note: The Noncurrent Time applies only to S3 objects in versioning-
enabled buckets.
You can use this option to reduce the storage impact of versioned objects
by filtering for noncurrent object versions that have been replaced by a
new current version or a delete marker.
Note: If you want to create a compliant rule, you must select Ingest Time. See “Managing S3
buckets and objects for compliance.”
6. In the Placements section, select a starting time and a duration for the first time period.
For example, you might want to specify where to store objects for the first year (“day 0 for 365
days”). At least one instruction must start at day 0.
c. Optionally, click Add Pool and select one or more storage pools.
If you are specifying more than one storage pool, keep these rules in mind:
• If you are specifying more than one storage pool and creating n copies, you must add n or
more pools. For example, if a rule is configured to make three copies, you must specify
three or more storage pools.
• If the number of copies equals the number of storage pools, one copy of the object is
stored in each storage pool.
• If the number of copies is less than the number of storage pools, the system distributes the
copies to keep disk usage among the pools balanced, while ensuring that one copy goes
only to one site.
• If the storage pools overlap (contain the same Storage Nodes), all copies of the object
might be saved at only one site. For this reason, do not specify the default storage pool
(All Storage Nodes) and another storage pool.
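The copies-versus-pools rules above can be expressed as a quick validation sketch. The function name and error text are hypothetical, not product behavior:

```python
def validate_placement(copies: int, pools: int) -> None:
    # Sketch of the guidance above: when a placement uses more than one
    # storage pool, the pool count must be at least the copy count.
    if pools > 1 and copies > pools:
        raise ValueError(
            f"making {copies} copies requires at least {copies} storage pools"
        )

validate_placement(3, 3)  # OK: one copy stored in each pool
validate_placement(2, 5)  # OK: copies distributed to balance disk usage
```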
d. If you are using only a single storage pool, specify a temporary storage pool.
Specifying a temporary storage pool is optional, but recommended. If the preferred storage
pool is unavailable, a copy is made in the temporary storage pool. As soon as the preferred
storage pool becomes available, a copy is made in the preferred storage pool, and the copy in
the temporary storage pool is deleted.
Attention: Failing to specify a temporary storage pool puts object data at risk if the
preferred pool is unavailable.
9. Optionally, click the plus icons to add placement instructions that specify different time periods or
create additional copies at different locations.
10. Click Refresh to update the Retention Diagram and to confirm your placement instructions.
Each line in the diagram represents a placement instruction and shows where and when object
copies will be placed. An icon on each line indicates the type of copy (replicated or erasure coded).
In this example, replicated copies will be saved to three storage pools (DC1, DC2, and DC3) for
one year. Then, an erasure-coded copy will be saved forever, using the Erasure Coding profile
associated with the All Storage Nodes storage pool.
Attention: When adding a rule that makes an erasure-coded copy to the ILM policy, you must
ensure that the policy has at least one rule that filters by Object Size. Because of the overhead of
managing the fragments associated with an erasure-coded copy, do not erasure code objects
smaller than 200 KB.
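The storage overhead behind this guidance can be illustrated with simple arithmetic comparing replication with a k+m erasure-coding scheme. This is a simplified model, not product accounting; it ignores the per-fragment management cost that makes erasure coding unattractive for small objects:

```python
def replication_overhead(copies: int) -> float:
    # Each replicated copy stores the full object data.
    return float(copies)

def erasure_coding_overhead(data_fragments: int, parity_fragments: int) -> float:
    # A k+m scheme stores (k+m)/k times the original object data,
    # plus per-fragment management overhead not modeled here.
    return (data_fragments + parity_fragments) / data_fragments

print(replication_overhead(3))        # 3.0
print(erasure_coding_overhead(4, 2))  # 1.5
```

For large objects the 4+2 scheme halves the raw storage cost of three replicated copies, but for objects under roughly 200 KB the fragment-management overhead outweighs that saving.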
Related concepts
Using advanced filters in ILM rules on page 128
Configuring storage pools on page 115
Creating, simulating, and activating an ILM policy on page 133
Related tasks
Using Ingest Time or Last Access Time in ILM rules on page 132
Configuring an Erasure Coding profile on page 119
Managing S3 buckets and objects for compliance on page 162
Last Access Time (microseconds)

Description: Time and date the object was last retrieved (read or viewed), in microseconds since
Unix Epoch. See "Using Ingest Time or Last Access Time in ILM rules" for more information on
how to calculate this value.

S3: Yes. Swift: Yes.

Supported operators: equals, does not equal, less than, less than or equals, greater than, greater
than or equals, exists, does not exist.

Note: If you plan to use last access time as an advanced filter, Last Access Time updates must
be enabled for the S3 bucket or Swift container.
• The second metadata value specifies objects less than or equal to 100 MB
Using multiple entries allows you to have precise control over which objects are matched. In the
following example, the rule applies to objects that have a Brand A or Brand B as the value of the
camera_type user metadata. However, the rule only applies to those Brand B objects that are smaller
than 10 MB.
Related tasks
Using Ingest Time or Last Access Time in ILM rules on page 132
Configuring regions (optional and S3 only) on page 122
Related information
Implementing S3 client applications
Implementing Swift client applications
The table summarizes the behavior applied to all objects in the bucket when last access time is
disabled or enabled.
• Request to retrieve an object, its access control list, or its metadata:
  ◦ Last access time disabled (default): last access time is not updated; the object is not added to
    the ILM evaluation queue.
  ◦ Last access time enabled: last access time is updated; the object is added to the ILM
    evaluation queue.
• Request to update an object's metadata:
  ◦ Disabled and enabled: last access time is updated; the object is added to the ILM evaluation
    queue.
• Request to copy an object from one bucket to another:
  ◦ Last access time disabled (default): the source copy is not updated or queued; the destination
    copy is updated and added to the queue.
  ◦ Last access time enabled: both the source copy and the destination copy are updated and
    added to the queue.
• Request to complete a multipart upload:
  ◦ Disabled and enabled: last access time is updated and the assembled object is added to the
    queue.
Steps
1. If you are using Ingest Time or Last Access Time as advanced filters, determine the UTC date and
time you want to use in the filter.
You might need to convert from your local time zone to UTC.
2. Convert the UTC date and time to microseconds since Unix Epoch.
3. If you are using Last Access Time as an advanced filter or as a reference time, enable last access
time updates on each S3 bucket specified in that rule.
You can use the Tenant Manager or the Tenant API to enable updates to last access time for S3
buckets. See the instructions for using tenant accounts.
Attention: Be aware that enabling last access time updates can reduce performance, especially
in systems with small objects. The performance impact occurs because StorageGRID Webscale
must perform these additional steps every time objects are retrieved:
• Update the objects with a new last access time
• Add the objects to the ILM queue, so they can be reevaluated against current ILM rules and
policy
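Step 2 of the procedure above, converting a UTC date and time to microseconds since the Unix Epoch, might look like this in Python. The helper name is illustrative:

```python
from datetime import datetime, timezone

def to_epoch_microseconds(dt: datetime) -> int:
    """Convert a datetime (assumed UTC if naive) to microseconds
    since the Unix Epoch. Illustrative helper, not product code."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1_000_000)

# Example: 2022-06-01 00:00:00 UTC
print(to_epoch_microseconds(datetime(2022, 6, 1, tzinfo=timezone.utc)))
# 1654041600000000
```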
Related information
Implementing S3 client applications
Using tenant accounts
• Consider all of the different types of objects that might be ingested into your grid. Make sure the
policy includes rules to match and place these objects as required. If no rules match an object, the
policy's default rule controls where that object is placed and for how long it is retained.
• Make sure that the rules in the policy are in the correct order. After an object has been matched by
a rule, none of the following rules in the policy are applied to that object.
• Keep the ILM policy as simple as possible. This avoids potentially dangerous situations where
object data is not protected as intended when changes are made to the StorageGRID Webscale
system over time.
Caution: An ILM policy that has been incorrectly configured can result in unrecoverable data loss.
Before activating an ILM policy, carefully review the ILM policy and its ILM rules, and then
simulate the ILM policy. Always confirm that the ILM policy will work as intended.
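The first-match ordering described above can be modeled with a short sketch. The rule names follow the simulation examples in this section, but the code itself is illustrative, not product logic:

```python
# First-match evaluation: rules are tried in policy order, the first
# matching rule wins, and unmatched objects use the default rule.
def evaluate_policy(obj: dict, rules: list, default_rule: str) -> str:
    for name, predicate in rules:
        if predicate(obj):
            return name
    return default_rule

rules = [
    ("X-men", lambda o: o.get("metadata", {}).get("series") == "x-men"),
    ("PNGs",  lambda o: o.get("key", "").endswith(".png")),
]
print(evaluate_policy({"key": "Fullsteam.png"}, rules, "Make 2 Copies"))  # PNGs
print(evaluate_policy({"key": "notes.txt"}, rules, "Make 2 Copies"))      # Make 2 Copies
```

Note how moving a broad rule (such as PNGs) ahead of a narrower one (X-men) changes which rule matches, which is exactly what the simulation examples later in this section demonstrate.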
Steps
1. Creating a proposed ILM policy on page 134
2. Simulating an ILM policy on page 137
3. Activating the ILM policy on page 145
4. Verifying an ILM policy with object metadata lookup on page 146
Related concepts
What an information lifecycle management policy is on page 106
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• You have created the rules you want to add to the proposed policy. Note that you can save a
proposed policy, create additional rules, and then edit the policy to add the new rules.
• New storage retention requirements were defined. For example, there was a change in regulatory
requirements.
The proposed ILM policy must include at least one ILM rule.
Steps
Note: If the global Compliance setting is enabled, the ILM Policies page indicates which ILM
rules are compliant. See “Managing S3 buckets and objects for compliance.”
Option: Create a new proposed policy that has no rules already selected

a. If a proposed ILM policy currently exists, select that policy, and click Remove.

You cannot create a new proposed policy if a proposed policy already exists.
c. Click Clone.
If you are cloning the active policy, the Name field shows the name of the active policy, appended
by a version number (“v2” in the example). The rules used in the active policy are selected and
shown in their current order.
3. Enter a unique name for the proposed policy in the Name field.
You must enter between 1 and 64 characters. If you are cloning the active policy, you can use the
current name with the appended version number or you can enter a new name.
4. Enter the reason you are creating a new proposed policy in the Reason for change field.
You must enter between 1 and 128 characters.
The Select Rules for Policy dialog box appears, with all defined rules listed. If you are cloning the
active policy, any rules that are currently in use are selected.
6. Select and unselect the check boxes to choose the rules you want to add to the proposed policy.
You can click the rule name or the more details icon to view the settings for each rule. Click
Close when you are done viewing rule details.
7. When you are done selecting rules for the proposed policy, click Apply.
9. Select the radio button to specify which rule you want to be the default rule for this policy.
Every ILM policy must contain one default ILM rule. The placement instructions for the default
rule are applied to any objects that are not matched by the other rules in the policy.
Note: If the global Compliance setting is enabled, the default rule must be a compliant rule.
See “Managing S3 buckets and objects for compliance.”
10. As required, click the delete icon to delete any rules that you do not want in the policy, or click
Select Rules to add more rules.
Related concepts
What an information lifecycle management policy is on page 106
Related tasks
Managing S3 buckets and objects for compliance on page 162
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• You must know the S3 bucket/object-key or the Swift container/object-name for each object you
want to test, and you must have already ingested those objects.
to test the policy thoroughly. If the policy includes a default rule to place all other objects, you must
test at least one object from another bucket.
When simulating a policy, the following considerations apply:
• After you make changes to a policy, save the proposed policy. Then, simulate the behavior of the
saved proposed policy.
• When you simulate a policy, the ILM rules in the policy filter the test objects, so you can see
which rule was applied to each object. However, no object copies are made and no objects are
placed. Running a simulation does not modify your data, rules, or the policy in any way.
• The Simulation page retains the objects you tested until you close, navigate away from, or refresh
the ILM Policies page.
• Simulation returns the name of the matched rule. To determine which storage pool or Erasure
Coding profile is in effect, you can view the Retention Diagram by clicking the rule name or the
more details icon .
• If S3 Versioning is enabled, the policy is only simulated against the current version of the object.
Steps
1. Select and arrange the rules, and save the proposed policy.
The Demo policy in this example has three rules:
• The first rule, X-men, applies only to objects in a specific tenant account and uses an
advanced filter to match series=x-men user metadata.
• The second rule, PNGs, filters for keys ending with .png.
• The last rule is the stock rule, Make 2 Copies, and it is selected as the default. This rule
applies to any objects that do not match the other rules.
2. Click Simulate.
The Simulate ILM Policy dialog box appears.
3. In the Object field, enter the S3 bucket/object-key or the Swift container/object-name for a test
object, and click Simulate.
Note: A message appears if you specify an object that has not been ingested.
4. Under Simulation Results, confirm that each object was matched by the correct rule.
In the example, the Havok.png and Warpath.jpg objects were correctly matched by the X-men
rule. The Fullsteam.png object, which does not include series=x-men user metadata, was
not matched by the X-men rule but was correctly matched by the PNGs rule. None of the test
objects was matched by the Make 2 Copies rule.
Choices
• Example 1: Verifying rules when simulating a proposed ILM policy on page 139
• Example 2: Reordering rules when simulating a proposed ILM policy on page 140
• Example 3: Correcting a rule when simulating a proposed ILM policy on page 142
• The first rule, X-men, applies only to objects in a specific tenant account and filters for
series=x-men user metadata. This rule is also marked as the default rule for the policy.
• The second rule, PNGs, filters for keys ending with .png.
• The last rule, JPGs, filters for keys ending with .jpg.
Steps
1. After adding the rules and saving the policy, click Simulate.
The Simulate ILM Policy dialog box appears.
2. In the Object field, enter the S3 bucket/object-key or the Swift container/object-name for a test
object, and click Simulate.
The Simulation Results appear, showing which rule in the policy matched each object you tested.
a. The Fullsteam.png object did not match the X-men rule, but did match the PNGs rule.
b. The Havok.png and the Warpath.jpg objects both matched the X-men rule, which was
evaluated first.
Note that even if these two files had not matched the X-men rule, they would have matched
one of the subsequent rules: either PNGs or JPGs.
• The first rule, PNGs, filters for key names that end in .png.
• The second rule, X-men, applies only to objects in a specific tenant account and filters for
series=x-men user metadata.
• The last rule is the default rule, Make 2 Copies, which will match any objects that do not match
the first two rules.
Steps
1. After adding the rules and saving the policy, click Simulate.
2. In the Object field, enter the S3 bucket/object-key or the Swift container/object-name for a test
object, and click Simulate.
The Simulation Results appear, showing that the Havok.png object was matched by the PNGs
rule.
However, the rule that the Havok.png object was meant to test was the X-men rule.
d. Click Save.
4. Click Simulate.
The objects you previously tested are re-evaluated against the updated policy, and the new
simulation results are shown. In the example, the Rule Matched column shows that the
Havok.png object now matches the X-men metadata rule, as expected. The Previous Match
column shows that the PNGs rule matched the object in the previous simulation.
Note: If you stay on the Configure Policies page, you can re-simulate a policy after making
changes without needing to re-enter the names of the test objects.
When a test object is not matched by the expected rule in the policy, you must examine each rule in
the policy and correct any errors.
Steps
1. For each rule in the policy, view the rule settings by clicking the rule name or the more details
icon on any dialog box where the rule is displayed.
2. Review the rule's tenant account, reference time, and filtering criteria.
In this example, the metadata for the X-men rule includes an error. The metadata value was
entered as “x-men1” instead of “x-men.”
• If the rule is part of the proposed policy, you can either clone the rule or remove the rule from
the policy and then edit it.
• If the rule is part of the active policy, you must clone the rule. You cannot edit or remove a
rule from the active policy.
Option: Cloning the rule

a. Select ILM > Rules.
g. Select the check box for the new rule, unselect the check box for the
original rule, and click Apply.
h. Click Save.

Option: Removing the rule from the policy

b. Click the delete icon to remove the incorrect rule, and click Save.
In this example, the corrected X-men rule now matches the Beast.jpg object based on the
series=x-men user metadata, as expected.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• If you make policy changes that increase data redundancy or durability, those changes are
implemented immediately. For example, if you activate a new policy that uses a Make 3 Copies
rule instead of a Make 2 Copies rule, that policy will be implemented right away because it
increases data redundancy.
• If you make policy changes that could decrease data redundancy or durability, those changes will
not be implemented until all grid nodes are available. For example, if you activate a new policy
that uses a Make 2 Copies rule instead of a Make 3 Copies rule, the new policy will be marked as
“Active,” but it will not take effect until all nodes are online and available.
Steps
1. When you are ready to activate a proposed policy, select the policy on the ILM Policies page and
click Activate.
A warning message is displayed, prompting you to confirm that you want to activate the proposed
policy.
2. Click OK.
Result
When a new ILM policy has been activated:
• The policy is shown with a Policy State of Active in the table on the ILM Policies page. The Start
Date entry indicates the date and time the policy was activated.
• The previously active policy is shown with a Policy State of Historical. The Start Date and End
Date entries indicate when the policy became active and when it was no longer in effect.
• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.
◦ UUID: The object's Universally Unique Identifier. Enter the UUID in all uppercase.
◦ CBID: The object's unique identifier within StorageGRID Webscale. You can obtain an
object's CBID from the audit log. Enter the CBID in all uppercase.
◦ S3 bucket and object key: When an object is ingested through the S3 interface, the client
application uses a bucket and object key combination to store and identify the object.
◦ Swift container and object name: When an object is ingested through the Swift interface, the
client application uses a container and object name combination to store and identify the
object.
Steps
• System metadata, including the object ID (UUID), the object name, the name of the container,
the tenant account name or ID, the logical size of the object, the date and time the object was
first created, and the date and time the object was last modified.
• Any custom user metadata key-value pairs associated with the object.
• For S3 objects, any object tag key-value pairs associated with the object.
• For replicated object copies, the current storage location of each copy, including the name of
the grid node and the full path to the disk location of the object.
• For erasure-coded object copies, the current storage location of each fragment, including the
name of the grid node and the type of fragment (data or parity).
• For segmented objects, a list of object segments including segment identifiers and data sizes.
For objects with more than 100 segments, only the first 100 segments are shown.
5. Confirm that the object is stored in the correct location or locations and that it is the correct type
of copy.
Note: If the Audit option is enabled, you can also monitor the audit log for the ORLM (Object
Rules Met) message. The ORLM audit message can provide you with more information about
the status of the ILM evaluation process, but it cannot give you information about the
correctness of the object data’s placement or the completeness of the ILM policy. You must
evaluate this yourself. For details, see the information about understanding audit messages.
Related concepts
Configuring audit client access on page 234
Related information
Understanding audit messages
Implementing S3 client applications
Choices
• Deleting an ILM rule on page 149
• Editing an ILM rule on page 149
• Cloning an ILM rule on page 150
• Viewing the ILM policy activity queue on page 151
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. Select the ILM rule you want to delete, and click Remove.
Related concepts
Creating, simulating, and activating an ILM policy on page 133
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
The ILM Rules page appears. This page shows all available rules and indicates which rules are
being used in the active policy or the proposed policy.
3. Complete the pages of the Edit ILM Rule wizard, following the steps for creating an ILM rule
and using advanced filters, as necessary.
When editing an ILM rule, you cannot change its name.
4. Click Save.
Related concepts
Using advanced filters in ILM rules on page 128
Related tasks
Creating an ILM rule on page 123
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. Select the ILM rule you want to clone, and click Clone.
The Create ILM Rule wizard opens.
3. Update the cloned rule by following the steps for editing an ILM rule and using advanced filters.
When cloning an ILM rule, you must enter a new name.
4. Click Save.
The new ILM rule is created.
Related concepts
Using advanced filters in ILM rules on page 128
Related tasks
Editing an ILM rule on page 149
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
1. Select Dashboard.
ILM rule 1 for example 1: Copy object data to two data centers
This example ILM rule copies object data to storage pools in two data centers.
ILM rule 2 for example 1: Erasure Coding profile with bucket matching
This example ILM rule uses an Erasure Coding profile and an S3 bucket to determine where and how
long the object is stored.
• At ingest, use 4+2 Erasure Coding to store all objects belonging to the S3 Bucket FinanceReports
across three data centers.
• If an object does not match the first ILM rule, use the policy's default ILM rule, Two Copies Two
Data Centers, to store a copy of that object in two data centers, DC1 and DC2.
ILM rule 1 for example 2: Use EC for all objects larger than 200 KB
This example ILM rule erasure codes all objects larger than 200 KB (0.20 MB).
The placement instructions specify that one erasure coded copy be created in all Storage Nodes.
ILM policy for example 2: Use EC for objects larger than 200 KB
In this example policy, objects larger than 200 KB are erasure coded, and any other objects that are
smaller than 200 KB are replicated using the default catch-all Make 2 Copies rule.
This example ILM policy includes the following ILM rules:
• If an object does not match the first ILM rule, use the default ILM rule to create two replicated
copies of that object. Because objects larger than 200 KB have been filtered out by rule 1, rule 2
only applies to objects that are 200 KB or smaller.
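The size-based first-match behavior of this example policy can be sketched as follows. The rule names and the function are illustrative only:

```python
def select_rule(object_size_kb: float) -> str:
    # Illustrative first-match selection for this example policy:
    # rule 1 matches objects larger than 200 KB; everything else
    # falls through to the default Make 2 Copies rule.
    if object_size_kb > 200:
        return "Rule 1: erasure code"
    return "Rule 2: Make 2 Copies"

print(select_rule(500))  # Rule 1: erasure code
print(select_rule(50))   # Rule 2: Make 2 Copies
```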
Example 3: ILM rules and policy for better protection for image files
You can use the following example rules and policy to ensure that images larger than 200 KB are
erasure coded and that three copies are made of smaller images.
Caution: The following ILM rules and policy are only examples. There are many ways to
configure ILM rules. Carefully analyze your ILM rules before adding them to an ILM policy to
confirm that they will work as intended to protect content from loss.
ILM rule 1 for example 3: Use EC for image files larger than 200 KB
This example ILM rule uses advanced filtering to erasure code all image files larger than 200 KB.
Because this rule is configured as the first rule in the policy, the erasure coding placement
instructions will only apply to images that are larger than 200 KB.
ILM rule 2 for example 3: Replicate 3 copies for all remaining image files
This example ILM rule uses advanced filtering to specify that image files be replicated.
Because the first rule in the policy has already matched image files larger than 200 KB, these
placement instructions only apply to image files 200 KB or smaller.
• Create three copies of any remaining image files (that is, images that are 200 KB or smaller).
• Apply the stock rule, Make 2 Copies, as the default to any remaining objects (that is, all non-
image files).
• Internal public key infrastructure and node certificates are used to authenticate and encrypt
internode communication. Internode communication is secured by TLS.
• Rules for firewalls and iptables are automatically configured to control incoming and outgoing
network traffic and to close unused ports.
• The base operating system of StorageGRID Webscale appliances and virtual nodes is hardened;
unrelated software packages are removed.
Managing S3 buckets and objects for compliance | 163
• Root login over SSH is disabled on all grid nodes. SSH access between nodes uses certificate
authentication.
• Separate networks are available for Client, Admin, and internal Grid traffic.
Compliance workflow
The workflow diagram shows the high-level steps for enabling the global Compliance setting, for
creating and managing compliant ILM rules and ILM policies, and for creating and managing S3
buckets that are compliant. As a grid administrator, you must coordinate closely with the tenant
administrator to ensure that the objects in compliant buckets are protected in a manner that satisfies
regulatory requirements.
As the workflow diagram shows, a grid administrator must enable the global Compliance setting for
the entire StorageGRID Webscale system before an S3 tenant can create compliant buckets. The grid
administrator must also ensure that the default rule in the grid's active ILM policy satisfies the data-
protection requirements of objects in compliant buckets.
Then, once the global Compliance setting has been enabled, tenants can create and manage compliant
buckets using the Tenant Manager, the Tenant Management API, or the S3 REST API.
Related information
Using tenant accounts
Implementing S3 client applications
• After you enable the global Compliance setting, you cannot disable this setting.
• Enabling the global Compliance setting allows all S3 tenant accounts to use the Tenant Manager,
the Tenant Management API, or the S3 REST API to create and manage compliant buckets. Users
with the appropriate permissions can create compliant buckets, set and increase the retention
period for objects in the bucket, specify how objects can be deleted at the end of their retention
period, and optionally place all objects in the bucket under a legal hold or lift a legal hold.
For example, this tenant user is creating a compliant bucket named bank-records in the default
us-east-1 region. Objects in this bucket will be retained for 6 years and then deleted
automatically. This bucket is not currently under a legal hold.
• When the global Compliance setting is enabled, you cannot create a new proposed ILM policy or
activate an existing proposed ILM policy unless the default rule in the policy satisfies the
requirements of S3 compliant buckets. The ILM Rules and ILM Policies pages indicate which
ILM rules are compliant.
In the following example, the ILM Rules page lists two rules that are compatible with compliant
buckets.
• The retention period for the bucket specifies the minimum amount of time each object in that
bucket must be preserved (stored) within StorageGRID Webscale.
• Tenant users can edit bucket settings to increase the retention period, but they can never decrease
this value.
• If a tenant account is notified of a pending legal action or regulatory investigation, users can
preserve relevant information by placing a legal hold on the bucket. When a bucket is under a
legal hold, no object in that bucket can be deleted even if its retention period has ended. As soon
as the legal hold is lifted, objects in the bucket can be deleted when their retention periods end.
• Objects can be added to a compliant bucket at any time, regardless of the bucket's compliance
settings.
• Objects can be retrieved from a compliant bucket at any time, regardless of the bucket's
compliance settings.
1. Object ingest
• When an object is ingested, the system generates metadata for the object that includes a
unique object identifier (UUID) and the ingest date and time. The object inherits the
compliance settings from the bucket.
• After an object is ingested into a compliant bucket, its data, S3 user-defined metadata, or S3
object tags cannot be modified, even after the retention period expires.
• StorageGRID Webscale maintains three copies of all object metadata at each site to provide
redundancy and protect object metadata from loss. Metadata is stored independently of object
data.
2. Retention period
• The retention period for an object starts when the object is ingested into the bucket.
• Each time the object is accessed, the bucket's compliance settings are also retrieved. The
system uses the object's ingest date and time and the bucket's retention period setting to
calculate when the object's retention period will expire.
• During an object's retention period, multiple copies of the object are stored by StorageGRID
Webscale. The exact number and type of copies and the storage locations are determined by
the compliant rules in the active ILM policy.
Note: You might need to add new ILM rules to manage the objects in a particular bucket.
• During an object's retention period, or when legal hold is enabled for the bucket, the object
cannot be deleted.
3. Object deletion
• When an object's retention period ends, all copies of the object can be deleted, unless legal
hold is enabled for the bucket.
• When an object’s retention period ends, a bucket-level compliance setting allows tenant users
to control how objects are deleted: by users when required or automatically by the system.
• If the bucket setting is to delete objects automatically, all copies of the object are removed by
the scanning ILM process in StorageGRID Webscale. When an object’s retention period ends,
the object is scheduled for deletion. You can look for the IDEL (ILM Initiated Delete)
message in the audit log to determine when ILM has started the process of auto-deleting an
object because its retention period has expired (assuming the auto-delete setting is enabled
and legal hold is off).
Note: The actual amount of time needed to delete all object copies can vary, depending on
the number of objects in the grid and how busy the grid processes are.
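A minimal sketch of checking an audit log for the IDEL message might look like the following; the audit-line layout shown is hypothetical, and only the IDEL (ILM Initiated Delete) code itself comes from this guide:

```python
def idel_entries(audit_lines):
    # Collect lines containing the IDEL (ILM Initiated Delete) code,
    # which marks the start of auto-deletion after retention expiry.
    return [line for line in audit_lines if "IDEL" in line]

# Hypothetical audit-line layout, for illustration only.
log = [
    "2018-04-01T00:00:01 IDEL object=9a1b...",
    "2018-04-01T00:00:02 SGET object=9a1b...",
]
assert len(idel_entries(log)) == 1
```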
• The rule must create at least two replicated object copies or one erasure-coded copy.
• These copies must exist on Storage Nodes for the entire duration of each line in the placement
instructions.
• At least one line of the placement instructions must start at day 0, using Ingest Time as the
reference time.
• At least one line of the placement instructions must be “forever.” The actual meaning of “forever”
is determined by the compliance settings for each bucket.
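The requirements above could be checked with a sketch like the following; the dictionary keys are illustrative, not the product's actual rule schema:

```python
def satisfies_compliance(placements):
    # placements: list of dicts describing each line of the rule's
    # placement instructions. A minimal check of the requirements
    # listed above, not the product's validation logic.
    return (
        any(p["start_day"] == 0 and p["reference"] == "Ingest Time"
            for p in placements)
        and any(p["end"] == "forever" for p in placements)
        and all(p["replicated_copies"] >= 2 or p["ec_copies"] >= 1
                for p in placements)
        and all(p["location"] == "Storage Nodes" for p in placements)
    )

rule = [{"start_day": 0, "reference": "Ingest Time", "end": "forever",
         "replicated_copies": 3, "ec_copies": 0,
         "location": "Storage Nodes"}]
assert satisfies_compliance(rule)
```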
For example, this rule satisfies the requirements of compliant S3 buckets. It stores three replicated
object copies from Ingest Time (day 0) to “forever.” The objects will be stored on Storage Nodes at
three data centers.
Note: The Make 2 Copies stock rule is compliant. You can use it as the default rule in a compliant
policy.
When you configure the placement instructions for a compliant rule, you must consider where the
object copies will be stored. For example, if your deployment includes more than one site, you can
enable site-loss protection for compliant objects by creating a storage pool for each site and
specifying both storage pools in the rule's placement instructions. See “Using multiple storage pools
for cross-site replication.”
• The default rule in the active or any proposed ILM policy must be compliant.
As illustrated in “Example: Using compliant ILM rules in an ILM policy,” a compliant ILM policy
might include these three rules:
1. A compliant rule that creates erasure-coded copies of the objects in a specific compliant S3
bucket. The EC copies are stored on Storage Nodes from day 0 to forever.
2. A non-compliant rule that creates two replicated object copies on Storage Nodes for a year and
then moves one object copy to Archive Nodes and stores that copy forever. This rule only applies
to non-compliant buckets because it stores only one object copy forever and it uses Archive
Nodes.
3. A default, compliant rule that creates two replicated object copies on Storage Nodes from day 0 to
forever. This rule applies to any object in any compliant or non-compliant bucket that was not
filtered out by the first two rules.
Related concepts
Using multiple storage pools for cross-site replication on page 116
Example: Using compliant ILM rules in an ILM policy on page 169
Related information
Using tenant accounts
Implementing S3 client applications
Understanding audit messages
Enabling compliance
Enabling the global Compliance setting allows all S3 tenant accounts to create and manage compliant
buckets. If S3 tenant accounts need to comply with regulatory requirements when saving object data,
you can enable compliance for your entire StorageGRID Webscale system.
• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.
• You must have reviewed the compliance workflow, and you must understand the considerations
for compliance.
Steps
3. Click Apply.
A confirmation dialog box appears.
4. If you are sure you want to enable compliance for the grid, click OK.
When you click OK:
• If the default rule in the active ILM policy is compliant, compliance is now enabled for the
entire grid and cannot be disabled.
• If the default rule is not compliant, an error appears, indicating that you must create and
activate a new ILM policy that includes a compliant rule as its default rule. Click OK, and
create a new proposed policy, simulate it, and activate it.
Related concepts
Creating, simulating, and activating an ILM policy on page 133
Related tasks
Creating an ILM rule on page 123
ILM rule 1 for compliance example: Erasure Coding profile with bucket
matching
This example ILM rule applies only to the S3 tenant account named Bank of ABC. It matches any
object in the bank-records bucket and then places an erasure-coded copy of the object on Storage
Nodes at three data center sites using a 6+3 Erasure Coding profile. This rule satisfies the
requirements of compliant S3 buckets: erasure-coded copies are kept on Storage Nodes from day 0 to
forever, using Ingest Time as the reference time.
• On Day 365, keep one replicated copy on Archive Nodes forever; temporary
copies in Data Center 1
1. A compliant rule that creates erasure-coded copies of the objects in a specific compliant S3
bucket. The EC copies are stored on Storage Nodes from day 0 to forever.
2. A non-compliant rule that creates two replicated object copies on Storage Nodes for a year and
then moves one object copy to Archive Nodes and stores that copy forever. This rule only applies
to non-compliant buckets because it stores only one object copy forever and it uses Archive
Nodes.
3. A compliant rule that creates two replicated object copies on Storage Nodes from day 0 to
forever.
• A test object in the bucket bank-records for the Bank of ABC tenant would be matched by the
EC objects compliant rule.
• A test object in any non-compliant bucket for any tenant account would be matched by the non-
compliant rule.
• A test object in a compliant bucket named customer-records for Bank of ABC or any other
tenant would be matched by the default rule. This is because the bucket name does not match
bank-records and the non-compliant rule does not apply to objects in compliant buckets.
Related concepts
Creating, simulating, and activating an ILM policy on page 133
• Tenant Management API users and S3 API users receive a response code of 503 Service
Unavailable with similar message text.
Steps
1. Attempt to make all Storage Nodes or sites available again as soon as possible.
2. If you are unable to make enough of the Storage Nodes at each site available, contact technical
support, who can help you recover nodes and ensure that compliance changes are consistently
applied across the grid.
3. Once the underlying issue has been resolved, remind the tenant user to retry their compliance
configuration changes.
Related information
Using tenant accounts
Implementing S3 client applications
Recovery and maintenance
• Queries
• Object deleting
Queries
LDR queries include queries for object location during retrieve and archive operations. You can
identify the average time that it takes to run a query, the total number of successful queries, and the
total number of queries that failed because of a timeout issue.
You can review query information to monitor the health of the metadata store, which impacts the
system’s ingest and retrieval performance. For example, if the latency for an average query is slow
and the number of failed queries due to timeouts is high, the metadata store might be encountering a
higher load or performing another operation.
You can also view the total number of queries that failed because of consistency failures. Consistency
level failures result from an insufficient number of available metadata stores at the time a query is
performed through the specific LDR service.
ILM Activity
Information Lifecycle Management (ILM) metrics allow you to monitor the rate at which objects are
evaluated for ILM implementation. You can view some of these metrics on the Dashboard.
Object stores
The underlying data storage of an LDR service is divided into a fixed number of object stores (also
known as storage volumes). Each object store is a separate mount point.
The object stores in a Storage Node are identified by a hexadecimal number from 0000 to 000F,
which is known as the volume ID. By default, 3 TB of space is reserved in the first object store
(volume 0) for object metadata in a Cassandra database; any remaining space on that volume is used
for object data. All other object stores are used exclusively for object data, which includes replicated
copies and erasure coded fragments.
To ensure even space usage for replicated copies, object data for a given object is stored to one object
store based on available storage space. When one or more object stores fill to capacity, the remaining
object stores continue to store objects until there is no more room on the Storage Node.
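The placement behavior above might be sketched as follows; this illustrates choosing a store "based on available storage space" and is not the product's actual placement algorithm:

```python
def choose_object_store(free_space):
    # free_space: available space per hex volume ID ("0000" to "000F").
    # Pick the volume with the most free space for the next replicated
    # copy; illustrative only.
    candidates = {vid: free for vid, free in free_space.items() if free > 0}
    return max(candidates, key=candidates.get) if candidates else None

volumes = {"0000": 10, "0001": 250, "0002": 0}   # free GB per volume ID
assert choose_object_store(volumes) == "0001"
assert choose_object_store({"0000": 0}) is None  # Storage Node is full
```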
Object counts
The DDS service lists the total number of objects ingested into the StorageGRID Webscale system as
well as the total number of objects ingested through each of the system’s supported interfaces (S3 or
Swift).
Managing disk storage | 181
Because object metadata synchronization occurs over time, object count attributes (see DDS > Data
Store > Overview > Main) can differ between DDS services. Eventually, all metadata stores will
synchronize and counts should become the same.
Queries
You can identify the average time that it takes to run a query against the metadata store through the
specific DDS service, the total number of successful queries, and the total number of queries that
failed because of a timeout issue.
You might want to review query information to monitor the health of the metadata store, Cassandra,
which impacts the system’s ingest and retrieval performance. For example, if the latency for an
average query is slow and the number of failed queries due to timeouts is high, the metadata store
might be encountering a higher load or performing another operation.
You can also view the total number of queries that failed because of consistency failures. Consistency
level failures result from an insufficient number of available metadata stores at the time a query is
performed through the specific DDS service.
Metadata protection
Object metadata is information related to or a description of an object; for example, object
modification time, or storage location. StorageGRID Webscale stores object metadata in a Cassandra
database, which interfaces with the DDS service.
To ensure redundancy and thus protection against loss, three copies of object metadata are
maintained. The copies are load balanced across all Storage Nodes at each site. This replication is
non-configurable and performed automatically.
Related tasks
Configuring ILM rules on page 113
Related concepts
Managing full Storage Nodes on page 189
Auto-Start HTTP (HTAS): Enable the HTTP component when the LDR service is restarted. If not
selected, the HTTP interface remains Offline until explicitly enabled.
If Auto-Start HTTP is selected, the state of the system on restart depends on the state of the
LDR > Storage component. If the LDR > Storage component is Read-only on restart, the HTTP
interface is also Read-only. If the LDR > Storage component is Online, then HTTP is also
Online. Otherwise, the HTTP interface remains in the Offline state.
LDR > Data Store > Reset Lost Objects Count (RCOR): Reset to zero the counter for the number
of lost objects on this service.
Health Check Timeout (SHCT): The time limit in seconds within which a health check test must
complete in order for a storage volume to be considered healthy. Only change this value when
directed to do so by Support.
LDR > Verification > Reset Missing Objects Count (VCMI): Resets the count of Missing Objects
Detected (OMIS). Use only after foreground verification completes. Missing replicated object
data is restored automatically by the StorageGRID Webscale system.
Verify (FVOV): Select object stores on which to perform foreground verification.
Verification Rate (VPRI): Set the priority rate at which background verification takes place.
See information on configuring the background verification rate.
Reset Corrupt Objects Count (VCCR): Reset the counter for corrupt replicated object data found
during background verification. This option can be used to clear the Corrupt Objects Detected
(OCOR) alarm condition. For details, see the information about troubleshooting.
Related tasks
Configuring the background verification rate on page 196
Related information
Troubleshooting StorageGRID Webscale
Related information
Expanding a StorageGRID Webscale grid
Choices
• Configuring stored object encryption on page 189
• Configuring stored object hashing on page 190
• Configuring stored object compression on page 191
• Enabling Prevent Client Modify on page 192
• Enabling HTTP for client communications on page 193
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Note: If you change this setting, it will take about one minute for the new setting to be applied.
The configured value is cached for performance and scaling.
Steps
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• Applications that save objects to StorageGRID Webscale might compress objects before saving
them. If a client application has already compressed an object before saving it to StorageGRID
Webscale, enabling Stored Object Compression will not further reduce an object’s size.
• If the Stored Object Compression grid option is enabled, S3 and Swift client applications should
avoid performing GET Object operations that specify that a range of bytes be returned. These “range
read” operations are inefficient because StorageGRID Webscale must effectively uncompress the
objects to access the requested bytes. GET Object operations that request a small range of bytes
from a very large object are especially inefficient; for example, it is inefficient to read a 10 MB
range from a 50 GB compressed object.
If ranges are read from compressed objects, client requests can time out.
Note: If you need to compress objects and your client application must use range reads,
increase the read timeout for the application.
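The inefficiency of range reads against compressed objects can be demonstrated with a standard stream compressor such as zlib, which cannot seek to an arbitrary offset without decompressing from the start (the compressor StorageGRID Webscale uses internally may differ):

```python
import zlib

data = b"x" * 50_000                 # stand-in for a large stored object
compressed = zlib.compress(data)

# A "range read" of bytes 40000-40009 cannot seek into the compressed
# stream; the object must effectively be uncompressed up to the requested
# offset, which is why ranged GETs against compressed objects are slow.
requested = zlib.decompress(compressed)[40000:40010]
assert requested == b"x" * 10
```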
Steps
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• S3 REST API
◦ Delete Bucket requests
◦ Any requests to modify an existing object's data, user-defined metadata, or S3 object tagging
Note: This setting does not apply to buckets with versioning enabled. Versioning already
prevents modifications to object data, user-defined metadata, and object tagging.
Steps
communications between S3 and Swift clients and StorageGRID Webscale in addition to HTTPS
communications. For example, you might use HTTP when testing a non-production grid.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
Related information
Implementing S3 client applications
Implementing Swift client applications
If your StorageGRID Webscale system includes an Archive Node whose Target Type is Cloud
Tiering – Simple Storage Service and the targeted archival storage system is Amazon Web Services
(AWS), the Maximum Segment Size must be less than or equal to 4.5 GiB (4,831,838,208 bytes).
This upper limit ensures that the AWS PUT limit of 5 GB is not exceeded. Requests to AWS
that exceed this value fail.
On retrieval of a segment container, the LDR service assembles the original object from its segments
and returns the object to the client.
The container and segments are not necessarily stored on the same Storage Node; they can be
stored on any Storage Node.
Each segment is treated by the StorageGRID Webscale system independently and contributes to the
count of attributes such as Managed Objects and Stored Objects. For example, if an object stored to
the StorageGRID Webscale system is split into two segments, the value of Managed Objects
increases by three after the ingest is complete, as follows:
segment container + segment 1 + segment 2 = three stored objects
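The object-count arithmetic above can be expressed as a trivial sketch:

```python
def stored_object_count(segment_count):
    # A segmented object contributes its segment container plus each
    # segment to attributes such as Managed Objects and Stored Objects.
    return 1 + segment_count

# An object split into two segments adds three stored objects:
# segment container + segment 1 + segment 2.
assert stored_object_count(2) == 3
```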
You can improve performance when handling large objects by ensuring that:
• Each Gateway and Storage Node has sufficient network bandwidth for the throughput required.
For example, configure separate Grid and Client Networks on 10 Gbps Ethernet interfaces.
• Enough Gateway and Storage Nodes are deployed for the throughput required.
• Each Storage Node has sufficient disk IO performance for the throughput required.
Related concepts
What background verification is on page 196
What foreground verification is on page 198
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• Adaptive: Default setting. The task is designed to verify at a maximum of 4 MB/s or 10 objects/s
(whichever limit is reached first).
• High: Storage verification proceeds quickly, at a rate that can slow ordinary system activities.
Use the High verification rate only when you suspect that a hardware or software fault might have
corrupted object data. After the High priority background verification completes, the Verification
Rate automatically resets to Adaptive.
Steps
5. Under Background Verification, select Verification Rate > High or Verification Rate >
Adaptive.
Note: Setting the Verification Rate to High triggers a Notice level alarm for VPRI (Verification
Rate).
a. Go to LDR > Verification > Overview > Main and monitor the attribute Corrupt Objects
Detected (OCOR).
If background verification finds corrupt replicated object data, the attribute Corrupt Objects
Detected is incremented. The LDR service recovers by quarantining the corrupt object data
and sending a message to the DDS service to create a new copy of the object data. The new
copy can be made anywhere in the StorageGRID Webscale system that satisfies the active
ILM policy.
b. Go to LDR > Erasure Coding > Overview > Main and monitor the attribute Corrupt
Fragments Detected (ECCD).
If background verification finds corrupt fragments of erasure coded object data, the attribute
Corrupt Fragments Detected is incremented. The LDR service recovers by rebuilding the
corrupt fragment in place on the same Storage Node.
8. If corrupt replicated object data is found, contact technical support to clear the quarantined copies
from the StorageGRID Webscale system and determine the root cause of the corruption.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• You have ensured that the following grid tasks are not running:
If these grid tasks are running, wait for them to complete or release their lock, or abort them as
appropriate.
• You have ensured that the storage is online. (Select Support > Grid Topology. Then, select
Storage Node > LDR > Storage > Overview > Main. Ensure that Storage State - Current is
Online.)
• You have ensured that the following recovery procedures are not running on the same Storage
Node:
Foreground verification does not provide useful information while recovery procedures are in
progress.
• If foreground verification finds large amounts of missing object data, there is likely an issue with
the Storage Node's storage that needs to be investigated and addressed.
• If foreground verification finds a serious storage error associated with erasure coded data, it will
notify you. You must perform storage volume recovery to repair the error.
You can configure foreground verification to check all of a Storage Node's object stores or only
specific object stores.
If foreground verification finds missing object data, the StorageGRID Webscale system attempts to
replace it. If a replacement copy cannot be made, the LOST (Lost Objects) alarm might be triggered.
Foreground verification generates an LDR Foreground Verification grid task that, depending on the
number of objects stored on a Storage Node, can take days or weeks to complete. It is possible to
select multiple Storage Nodes at the same time; however, these grid tasks are not run simultaneously.
Instead, they are queued and run one after the other until completion. When foreground verification is
in progress on a Storage Node, you cannot start another foreground verification task on that same
Storage Node even though the option to verify additional volumes might appear to be available for
the Storage Node.
If a Storage Node other than the one where foreground verification is being run goes offline, the grid
task continues to run until the % Complete attribute reaches 99.99 percent. The % Complete
attribute then falls back to 50 percent and waits for the Storage Node to return to online status. When
the Storage Node's state returns to online, the LDR Foreground Verification grid task continues until
it completes.
Steps
3. Under Foreground Verification, select the check box for each storage volume ID you want to
verify.
b. On the Overview tab under Verification Results, note the value of Missing Objects
Detected.
If the count for the attribute Missing Objects Detected is large (if there are hundreds of
missing objects), there is likely an issue with the Storage Node's storage. In this case, cancel
foreground verification by aborting the Foreground Verification grid task, resolve the storage
issue, and then rerun foreground verification for the Storage Node.
d. On the Overview tab under Verification Results, note the value of Missing Fragments
Detected.
If the count for the attribute Missing Fragments Detected is large (if there are hundreds of
missing fragments), there is likely an issue with the Storage Node's storage. In this case,
cancel foreground verification by aborting the Foreground Verification grid task, resolve the
storage issue, and then rerun foreground verification for the Storage Node.
If foreground verification does not detect a significant number of missing replicated object copies
or a significant number of missing fragments, then the storage is operating normally.
a. Select Support > Grid Topology. Then select site > Admin Node > CMN > Grid Task >
Overview > Main.
b. Verify that the foreground verification grid task is progressing without errors.
Note: A notice-level alarm is triggered on grid task status (SCAS) if the foreground
verification grid task pauses.
c. If the grid task pauses with a critical storage error, recover the affected volume and
then run foreground verification on the remaining volumes to check for additional errors.
Attention: If the foreground verification grid task pauses with the message
Encountered a critical storage error in volume volID
you must perform the procedure for recovering a failed storage volume. See the recovery
and maintenance instructions.
Related information
Recovery and maintenance
service. The CLB service operates as a connection pipeline between the client application and an
LDR service.
Object data that cannot be deleted, but is not regularly accessed, can at any time be moved off a
Storage Node's spinning disks and onto external archival storage such as the cloud or tape. This
archiving of object data is accomplished through the configuration of a data center site's Archive
Node and then the configuration of ILM rules where this Archive Node is selected as the "target" for
content placement instructions. The Archive Node does not manage archived object data itself; this is
achieved by the external archive device.
Note: Object metadata is not archived, but remains on Storage Nodes.
that same sequential order. Requests are then queued for submission to the storage device. Depending
upon the archival device, multiple requests for objects on different volumes can be processed
simultaneously.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
◦ The bucket must be dedicated to a single Archive Node. It cannot be used by other Archive
Nodes or other applications.
◦ The bucket must have the appropriate region selected for your location.
• Object Segmentation must be enabled and the Maximum Segment Size must be less than or equal
to 4.5 GiB (4,831,838,208 bytes). S3 API requests that exceed this value will fail if Simple
Storage Service (S3) is used as the external archival storage system.
Steps
4. Select Cloud Tiering - Simple Storage Service (S3) from the Target Type drop-down list.
Note: Configuration settings are unavailable until you select a Target Type.
5. Configure the cloud tiering (S3) account through which the Archive Node will connect to the
target external S3 capable archival storage system.
Most of the fields on this page are self-explanatory. The following describes fields for which you
might need guidance.
• Region: Only available if Use AWS is selected. The region you select must match the bucket's
region.
• Endpoint and Use AWS: For Amazon Web Services (AWS), select Use AWS. Endpoint is then
automatically populated with an endpoint URL based on the Bucket Name and Region
attributes. For example, https://bucket.region.amazonaws.com
For a non-AWS target, enter the URL of the system hosting the bucket, including the port
number. For example, https://system.com:1080
• End Point Authentication: Enabled by default. Clear to disable endpoint SSL certificate and
host name verification for the targeted external archival storage system. Only clear the
checkbox if the network to the external archival storage system is trusted. If another instance
of a StorageGRID Webscale system is the target archival storage device and the system is
configured with publicly signed certificates, you do not need to clear the checkbox.
• Storage Class: Select Standard, the default value, for regular storage, or Reduced Redundancy,
which provides lower cost storage with less reliability for objects that can be easily recreated.
If the targeted archival storage system is another instance of the StorageGRID Webscale
system, Storage Class controls the target system's dual-commit behavior.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
4. Select Tivoli Storage Manager (TSM) from the Target Type drop-down list.
5. By default, the Tivoli Storage Manager State is set to Online, which means that the Archive
Node is able to retrieve object data from the TSM middleware server. Select Offline to prevent
retrievals from the TSM middleware server.
• Server IP or Hostname: Specify the IP address or Fully Qualified Domain Name (FQDN) of
the TSM middleware server used by the ARC service. The default IP address is 127.0.0.1.
• Server Port: Specify the port number on the TSM middleware server that the ARC service will
connect to. The default is 1500.
• Node Name: Specify the name of the Archive Node. You must enter the name (arc-user) that
you registered on the TSM middleware server.
• User Name: Specify the user name the ARC service uses to log in to the TSM server. Enter
the default user name (arc-user) or the administrative user you specified for the Archive Node.
• Password: Specify the password used by the ARC service to log in to the TSM server.
• Management Class: Specify the default management class to use if a management class is not
specified when the object is saved to the StorageGRID Webscale system, or if the specified
management class is not defined on the TSM middleware server.
If the specified management class does not exist on the TSM server, the object cannot be
saved to the TSM archive. The object remains in the queue on the StorageGRID Webscale
system and the CMS > Content > Overview > Objects with ILM Evaluation Pending
count is incremented.
• Number of Sessions: Specify the number of tape drives on the TSM middleware server that
are dedicated to the Archive Node. The Archive Node concurrently creates a maximum of one
session per mount point plus a small number of additional sessions (less than five).
You need to change this value to be the same as the value set for MAXNUMMP (maximum
number of mount points) when the Archive Node was registered or updated. (In the register
command, MAXNUMMP defaults to 1 if no value is set.)
You must also change the value of MAXSESSIONS for the TSM server to a number that is at
least as large as the Number of Sessions set for the ARC service. The default value of
MAXSESSIONS on the TSM server is 25.
• Maximum Retrieve Sessions: Specify the maximum number of sessions that the ARC service
can open to the TSM middleware server for retrieve operations. In most cases, the appropriate
value is Number of Sessions minus Maximum Store Sessions. If you need to share one tape
drive for storage and retrieval, specify a value equal to the Number of Sessions.
• Maximum Store Sessions: Specify the maximum number of concurrent sessions that the ARC
service can open to the TSM middleware server for archive operations.
This value should be set to one except when the targeted archival storage system is full and
only retrievals can be performed. Set this value to zero to use all sessions for retrievals.
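The session arithmetic described above can be sketched as follows (the function name is illustrative):

```python
def max_retrieve_sessions(number_of_sessions, max_store_sessions,
                          shared_drive=False):
    # In most cases: Number of Sessions minus Maximum Store Sessions.
    # If one tape drive is shared for storage and retrieval, specify the
    # full Number of Sessions instead.
    if shared_drive:
        return number_of_sessions
    return number_of_sessions - max_store_sessions

assert max_retrieve_sessions(4, 1) == 3
assert max_retrieve_sessions(1, 1, shared_drive=True) == 1
```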
Choices
• Optimizing Archive Node for TSM middleware sessions on page 210
• Managing an Archive Node when TSM server reaches capacity on page 211
• Configuring Archive Node replication on page 213
• Configuring retrieve settings on page 214
• Configuring the archive store on page 215
• Setting Custom alarms for the Archive Node on page 216
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
used concurrently for retrieval, and, at most, one of these drives can also be used for storage if
applicable.
Steps
4. Change Maximum Retrieve Sessions to be the same as the number of concurrent sessions listed in
Number of Sessions.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
• Reset Inbound Replication Failure Count: Select to reset the counter for inbound replication
failures. This can be used to clear the RIRF (Inbound Replications – Failed) alarm.
• Reset Outbound Replication Failure Count: Select to reset the counter for outbound
replication failures. This can be used to clear the RORF (Outbound Replications – Failed)
alarm.
• Disable Outbound Replication: Select the checkbox to disable outbound replication (including
content requests for HTTP retrievals) as part of a maintenance or testing procedure. Leave
unchecked during normal operation.
When outbound replication is disabled, object data can be copied to this ARC service to
satisfy ILM rules, but object data cannot be retrieved from the ARC service to be copied to
other locations in the StorageGRID Webscale system. The ARC service is write‐only.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
◦ Online: The grid node is available to retrieve object data from the archival media device.
• Reset Request Failures Count: Select the checkbox to reset the counter for request failures.
This can be used to clear the ARRF (Request Failures) alarm.
• Reset Verification Failure Count: Select the checkbox to reset the counter for verification
failures on retrieved object data. This can be used to clear the ARRV (Verification Failures)
alarm.
Related tasks
Managing an Archive Node when TSM server reaches capacity on page 211
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
◦ Online: The Archive Node is available to process object data for storage to the archival
storage system.
◦ Offline: The Archive Node is not available to process object data for storage to the archival
storage system.
• Archive Store Disabled on Startup: When selected, the Archive Store component remains in
the Read-only state when restarted. Used to persistently disable storage to the targeted
archival storage system. Useful when the targeted archival storage system is unable to
accept content.
• Reset Store Failure Count: Reset the counter for store failures. This can be used to clear the
ARVF (Store Failures) alarm.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
• ARQL: Average Queue Length. The average time, in microseconds, that object data is queued for
retrieval from the archival storage system.
• ARRL: Average Request Latency. The average time, in microseconds, needed by the Archive
Node to retrieve object data from the archival storage system.
The acceptable values for these attributes depend on how the archival storage system is configured
and used. (Go to ARC > Retrieve > Overview > Main.) The values set for request timeouts and the
number of sessions made available for retrieve requests are particularly influential.
After integration is complete, monitor the Archive Node's object data retrievals to establish values for
normal retrieval times and queue lengths. Then, create Custom alarms for ARQL and ARRL that will
trigger if an abnormal operating condition arises.
Related tasks
Creating custom service or component alarms on page 70
Related tasks
Verifying an ILM policy with object metadata lookup on page 146
What an Admin Node is
Related concepts
Alarm acknowledgments on page 219
Alarm acknowledgments
Alarm acknowledgments made from one Admin Node are not copied to any other Admin Node.
Because acknowledgments are not copied to other Admin Nodes, it is possible that the Grid
Topology tree will not look the same for each Admin Node.
This difference can be useful when connecting web clients, because web clients can be given
different views of the StorageGRID Webscale system based on administrator needs.
Note that notifications are sent from the Admin Node where the acknowledgment occurs.
• While the StorageGRID Webscale system is running in this “switch-over” scenario, where the
standby sender has assumed the task of sending notifications and AutoSupport messages, the
preferred sender might retain the ability to send them. If this occurs, duplicate notifications
and AutoSupport messages are sent: one from the preferred sender and one from the standby
sender. When the Admin Node configured as the standby sender no longer detects errors on
the preferred sender, it switches back to “standby” status and stops sending notifications and
AutoSupport messages, which are once again sent only by the preferred sender.
• If the standby sender cannot detect the preferred sender, the standby sender switches to online
and sends notifications and AutoSupport messages. In this scenario, the preferred and standby
senders are “islanded” from each other: each Admin Node can be operating and monitoring
the system normally, but because the standby sender cannot detect the Admin Node that is the
preferred sender, both senders send notifications and AutoSupport messages.
When sending a test email, all NMS services send a test email.
Related concepts
About alarms and email notifications on page 55
What AutoSupport is on page 79
Related tasks
Selecting a preferred sender on page 63
Managing networking
Because a StorageGRID Webscale system consists of a group of interconnected servers, you might
need to update the system's networking as the system changes and grows over time.
You can change the configuration of the Grid, Client, or Admin Networks, or you can add new Client
and Admin Networks. You can also update external NTP source IP addresses and DNS IP addresses
at any time.
Note: To use the Grid Network editor to modify or add a network for a grid node, see the recovery
and maintenance instructions. For more information about network topology, see Grid primer.
Grid Network
Required. The Grid Network is the communication link between grid nodes. All hosts on the Grid
Network must be able to talk to all other hosts. This network is used for all internal StorageGRID
Webscale system communications.
Admin Network
Optional. The Admin Network allows for restricted access to the StorageGRID Webscale system for
maintenance and administration.
Client Network
Optional. The Client Network can communicate with any subnet reachable through the local gateway.
Guidelines
• A StorageGRID Webscale grid node requires a dedicated network interface, IP address, subnet
mask, and gateway for each network it is assigned to.
• A grid node is not permitted to have more than one interface on a network.
• A single gateway, per network, per grid node is supported, and it must be on the same subnet as
the node. You can implement more complex routing in the gateway, if required.
• If the node is connected to a StorageGRID Webscale appliance, specific ports are used for each
network.
• The default route is generated automatically, per node. If eth2 is enabled, then 0.0.0.0/0 uses the
Client Network on eth2. If eth2 is not enabled, then 0.0.0.0/0 uses the Grid Network on eth0.
• The Client Network does not become operational until the grid node has joined the grid.
• The Admin Network can be configured during VM deployment to allow access to the installation
UI before the grid is fully installed.
Related information
Recovery and maintenance
Grid primer
SG6000 appliance installation and maintenance
SG5700 appliance installation and maintenance
SG5600 appliance installation and maintenance
Viewing IP addresses
You can view the IP address for each grid node that makes up your StorageGRID Webscale system.
You can then use this IP address to log into the grid node at the command line and perform various
maintenance procedures.
Steps
Example
For VM-based grid nodes, the IP address assigned to eth0 is the node's Grid Network IP address.
For StorageGRID Webscale appliance Storage Nodes, the IP address assigned to hic2 and hic4 is
the node's Grid Network IP address.
The Network Addresses table displays link-local IPv6 addresses beginning with fe80::, which
are automatically assigned by Linux.
Related information
Recovery and maintenance
Steps
3. To convert the randomly generated community strings to strings of your choice, edit
/etc/snmp/snmpd.conf and replace the randomly generated rocommunity and
rocommunity6 community strings.
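The relevant lines in /etc/snmp/snmpd.conf look something like the following; the community string shown is a placeholder you would substitute (rocommunity grants read-only access over IPv4, rocommunity6 over IPv6):

```text
# /etc/snmp/snmpd.conf (excerpt)
rocommunity  MyReadOnlyCommunity
rocommunity6 MyReadOnlyCommunity
```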
Detailed registry
The following OIDs are displayed on third-party monitoring servers.
This OID reports the overall system status of the StorageGRID Webscale system.
Element Values
OID 1.3.6.1.4.1.28669.1.0.1.1.1
Hierarchy iso.org.dod.internet.mgmt.private.enterprises.bycast.version1.common.nmsmi.system.status
Values One of the following values is displayed:
1 = unknown
11 = adminDown
21 = normal
31 = notice
41 = minor
51 = major
61 = critical
The MIB contains this enumeration mapping. If the monitor uses SNMP
GET, the textual value will appear instead of the numerical value.
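For scripts that poll this OID with a raw SNMP GET and receive the numeric value, the enumeration above can be expressed as a small helper; a minimal sketch:

```shell
# Map the numeric system-status value returned for OID
# 1.3.6.1.4.1.28669.1.0.1.1.1 to its textual label
status_label() {
  case "$1" in
    1)  echo "unknown" ;;
    11) echo "adminDown" ;;
    21) echo "normal" ;;
    31) echo "notice" ;;
    41) echo "minor" ;;
    51) echo "major" ;;
    61) echo "critical" ;;
    *)  echo "unrecognized value: $1" >&2; return 1 ;;
  esac
}

status_label 21   # prints "normal"
```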
Element Values
OID 1.3.6.1.4.1.28669.1.0.1.1.2
Hierarchy iso.org.dod.internet.mgmt.private.enterprises.bycast.version1.common.nmsmi.system.label
Values Text string of the system label.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. Select a site under Link Source and enter a cost value between 0 and 100 under Link
Destination.
You cannot change the link cost if the source is the same as the destination.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
Configuring certificates
You can customize the certificates used by the StorageGRID Webscale system.
The StorageGRID Webscale system uses security certificates for two distinct purposes:
• Management Interface Server Certificates: Used to secure access to the Grid Manager and the
Grid Management API.
• Storage API Server Certificates: Used to secure access to the Storage Nodes and API Gateway
Nodes, which API client applications use to upload and download object data.
You can use the default certificates created during installation, or you can replace either or both of
these default certificate types with your own custom certificates.
Steps
2. In the Management Interface Server Certificate section, click Install Custom Certificate.
• Server Certificate Private Key: The custom server certificate private key file (.key).
• CA Bundle: A single file containing the certificates from each intermediate issuing Certificate
Authority (CA). The file should contain each of the PEM-encoded CA certificate files,
concatenated in certificate chain order.
4. Click Save.
The custom server certificates are used for all subsequent new client connections.
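The CA Bundle in step 3 is a simple concatenation of PEM files in chain order. The following sketch uses placeholder files standing in for real intermediate CA certificates (the filenames are hypothetical):

```shell
# Hypothetical intermediate CA certificates (placeholders for real PEM files)
printf -- '-----BEGIN CERTIFICATE-----\n(intermediate 1)\n-----END CERTIFICATE-----\n' > intermediate1.pem
printf -- '-----BEGIN CERTIFICATE-----\n(intermediate 2)\n-----END CERTIFICATE-----\n' > intermediate2.pem

# The CA bundle is the PEM files concatenated in certificate chain order:
# the CA that issued the server certificate first, then its issuer, and so on
cat intermediate1.pem intermediate2.pem > ca-bundle.pem
```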
Steps
2. In the Management Interface Server Certificate section, click Use Default Certificates.
Steps
2. In the Object Storage API Service Endpoints Server Certificate section, click Install Custom
Certificate.
• Server Certificate Private Key: The custom server certificate private key file (.key).
• CA Bundle: A single file containing the certificates from each intermediate issuing Certificate
Authority (CA). The file should contain each of the PEM-encoded CA certificate files,
concatenated in certificate chain order.
4. Click Save.
The custom server certificates are used for all subsequent new API client connections.
Steps
2. In the Object Storage API Service Endpoints Server Certificate section, click Use Default
Certificates.
• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.
Steps
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
1. Obtain the fully qualified domain name (FQDN) of each API Gateway Node.
• For --domains, use wildcards to represent the fully qualified domain names of all API
Gateway Nodes. For example, *.sgws.foo.com uses the * wildcard to represent
gn1.sgws.foo.com and gn2.sgws.foo.com.
• Set --type to storage to configure the certificate used by S3 and Swift storage clients.
• By default, generated certificates are valid for one year (365 days) and must be recreated
before they expire. You can use the --days argument to override the default validity period.
Note: A certificate's validity period begins when make-certificate is run. You must
ensure the S3 client is synchronized to the same time source as StorageGRID Webscale;
otherwise, the client might reject the certificate.
Example
The resulting output contains the public certificate needed by your S3 client.
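Based on the arguments described above, the invocation takes roughly this shape, using the example wildcard domain from step 1; treat this as a sketch, since the exact command location and output format depend on your StorageGRID Webscale version:

```text
make-certificate --domains *.sgws.foo.com --type storage --days 730
```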
b. Select Configuration > Server Certificates > Object Storage API Service Endpoints
Server Certificate.
7. Configure your S3 client to use the public certificate you copied. Include the BEGIN and END
tags.
• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.
• If S3 clients are connecting to one or more Storage Nodes, you must include the domain name of
each Storage Node.
• If S3 clients are connecting through an external load balancer, you must include the domain name
of the load balancer.
Steps
2. Using the (+) icon to add additional fields, enter the list of S3 API endpoint domain names in the
Endpoint fields.
If this list is empty, support for S3 virtual hosted-style requests is disabled.
3. Click Save.
4. Obtain a custom server certificate that includes the endpoint domain name, a wildcard Subject
Alternative Name (SAN) for the endpoint domain name, and any other domain names that must
be supported.
This step is required to validate the SSL certificate and to verify the hostname when API client
applications connect to the endpoint.
Example
If the endpoint is s3.company.com, obtain a custom server certificate that includes the
s3.company.com endpoint and the endpoint's wildcard SAN: *.s3.company.com.
5. Select Configuration > Server Certificates. Then, install the custom certificate in the Object
Storage API Service Endpoints Server Certificate section.
6. Confirm that the DNS server also supports the endpoint and the wildcard SAN.
Now, when the endpoint bucket.s3.company.com is used, the DNS server resolves to the
correct endpoint and the certificate authenticates the endpoint as expected.
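You can confirm that a certificate carries the expected SAN entries with openssl. The sketch below generates a throwaway self-signed certificate purely for illustration (a production certificate comes from your CA), then inspects its SAN extension; it assumes OpenSSL 1.1.1 or later for the -addext option:

```shell
# Generate a throwaway self-signed certificate with the endpoint name and
# its wildcard SAN (illustration only -- not for production use)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=s3.company.com" \
  -addext "subjectAltName=DNS:s3.company.com,DNS:*.s3.company.com" \
  -keyout demo.key -out demo.pem 2>/dev/null

# Inspect the SAN extension; both the endpoint and the wildcard should appear
openssl x509 -in demo.pem -noout -text | grep -A1 "Subject Alternative Name"
```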
Related tasks
Configuring custom server certificates for storage API endpoints on page 228
Related information
Implementing S3 client applications
Related concepts
What an Admin Node is on page 218
Related information
Understanding audit messages
Upgrading StorageGRID Webscale
Related information
Upgrading StorageGRID Webscale
• You must have the Passwords.txt file with the root/admin account password (available in the
SAID package).
• You must have the Configuration.txt file (available in the SAID package).
Steps
---------------------------------------------------------------------
| Shares | Authentication | Config |
---------------------------------------------------------------------
| add-audit-share | set-authentication | validate-config |
| enable-disable-share | set-netbios-name | help |
| add-user-to-share | join-domain | exit |
| remove-user-from-share | add-password-server | |
| modify-group | remove-password-server | |
| | add-wins-server | |
| | remove-wins-server | |
---------------------------------------------------------------------
a. Enter: set-authentication
b. When prompted for Windows Workgroup or Active Directory installation, enter: workgroup
a. Enter: add-audit-share
Note: The share is automatically added as read-only.
Note: There is no need to enter a directory. The audit directory name is predefined.
7. If more than one user or group is permitted to access the audit share, add the additional users:
a. Enter: add-user-to-share
A numbered list of enabled shares is displayed.
d. When prompted, enter the name of the audit user or group: audit_user or audit_group
f. Repeat step 7 for each additional user or group that has access to the audit share.
b. Repeat steps 4 through 9 to configure the audit share for each additional Admin Node.
c. Close the remote secure shell login to the remote Admin Node: exit
Related information
Upgrading StorageGRID Webscale
• You must have the Passwords.txt file with the root/admin account password (available in the
SAID package).
• You must have the CIFS Active Directory username and password.
• You must have the Configuration.txt file (available in the SAID package).
Steps
---------------------------------------------------------------------
| Shares | Authentication | Config |
---------------------------------------------------------------------
| add-audit-share | set-authentication | validate-config |
| enable-disable-share | set-netbios-name | help |
| add-user-to-share | join-domain | exit |
| remove-user-from-share | add-password-server | |
| modify-group | remove-password-server | |
| | add-wins-server | |
| | remove-wins-server | |
---------------------------------------------------------------------
b. When prompted, enter the name of the AD domain (short domain name).
c. When prompted, enter the domain controller’s IP address or DNS host name.
c. You are prompted to test if the Admin Node is currently a valid member of the domain. If this
Admin Node has not previously joined the domain, enter: no
b. When prompted to test if the server is currently a valid member of the domain, enter: y
If you receive the message “Join is OK,” you have successfully joined the domain. If you do
not get this response, try setting authentication and joining the domain again.
b. When prompted to enter the audit user name, enter the audit user name.
9. If more than one user or group is permitted to access the audit share, add additional users:
add-user-to-share
c. When prompted for the audit group name, enter the name of the audit user group.
e. Repeat step 9 for each additional user or group that has access to the audit share.
b. Repeat steps 4 through 11 to configure the audit shares for each Admin Node.
c. Close the remote secure shell login to the Admin Node: exit
Related information
Upgrading StorageGRID Webscale
• You must have the Passwords.txt file with the root/admin account password (available in the
SAID package).
• You must have the Configuration.txt file (available in the SAID package).
Steps
2. Confirm that all services have a state of Running or Verified. Enter: storagegrid-status
If all services are not Running or Verified, resolve issues before continuing.
---------------------------------------------------------------------
| Shares | Authentication | Config |
---------------------------------------------------------------------
| add-audit-share | set-authentication | validate-config |
| enable-disable-share | set-netbios-name | help |
| add-user-to-share | join-domain | exit |
| remove-user-from-share | add-password-server | |
| modify-group | remove-password-server | |
| | add-wins-server | |
| | remove-wins-server | |
---------------------------------------------------------------------
6. When prompted, enter the number for the audit share (audit-export): audit_share_number
You are asked if you would like to give a user or a group access to this audit share.
8. When prompted for the user or group name for this AD audit share, enter the name.
The user or group is added as read-only for the audit share both in the server’s operating system
and in the CIFS service. The Samba configuration is reloaded to enable the user or group to
access the audit client share.
10. Repeat steps 5 to 8 for each user or group that has access to the audit share.
b. Repeat steps 4 through 12 to configure the audit shares for each Admin Node.
c. Close the remote secure shell login to the remote Admin Node: exit
• You must have the Passwords.txt file with the root account passwords (available in the SAID
package).
• You must have the Configuration.txt file (available in the SAID package).
Steps
---------------------------------------------------------------------
| Shares | Authentication | Config |
---------------------------------------------------------------------
| add-audit-share | set-authentication | validate-config |
| enable-disable-share | set-netbios-name | help |
| add-user-to-share | join-domain | exit |
| remove-user-from-share | add-password-server | |
| modify-group | remove-password-server | |
| | add-wins-server | |
| | remove-wins-server | |
---------------------------------------------------------------------
6. Enter the number corresponding to the user or group you want to remove: number
The audit share is updated, and the user or group is no longer permitted access to the audit share.
For example:
Enabled shares
1. audit-export
Select the share to change: 1
Remove user or group? [User/group]: User
Valid users for this share
1. audituser
2. newaudituser
Select the user to remove: 1
8. If the StorageGRID Webscale deployment includes Admin Nodes at other sites, disable the audit
share at each site as required.
Related information
Upgrading StorageGRID Webscale
Steps
1. Add a new user or group with the updated name to the audit share.
Related tasks
Adding a user or group to a CIFS audit share on page 240
Removing a user or group from a CIFS audit share on page 242
Related information
Upgrading StorageGRID Webscale
• You must have the Passwords.txt file with the root/admin password (available in the SAID
package).
• You must have the Configuration.txt file (available in the SAID package).
Steps
2. Confirm that all services have a state of Running or Verified. Enter: storagegrid-status
If any services are not listed as Running or Verified, resolve issues before continuing.
-----------------------------------------------------------------
| Shares | Clients | Config |
-----------------------------------------------------------------
| add-audit-share | add-ip-to-share | validate-config |
| enable-disable-share | remove-ip-from-share | refresh-config |
| | | help |
| | | exit |
-----------------------------------------------------------------
a. When prompted, enter the audit client’s IP address or IP address range for the audit share:
client_IP_address
6. If more than one audit client is permitted to access the audit share, add the IP address of the
additional user: add-ip-to-share
b. When prompted, enter the audit client’s IP address or IP address range for the audit share:
client_IP_address
d. Repeat step 6 for each additional audit client that has access to the audit share.
• If the StorageGRID Webscale deployment includes Admin Nodes at other sites, enable these
audit shares as required:
b. Repeat steps 4 through 7.c to configure the audit shares for each additional Admin Node.
c. Close the remote secure shell login to the remote Admin Node. Enter: exit
• You must have the Passwords.txt file with the root/admin account password (available in the
SAID package).
• You must have the Configuration.txt file (available in the SAID package).
Steps
-----------------------------------------------------------------
| Shares | Clients | Config |
-----------------------------------------------------------------
| add-audit-share | add-ip-to-share | validate-config |
| enable-disable-share | remove-ip-from-share | refresh-config |
| | | help |
| | | exit |
-----------------------------------------------------------------
3. Enter: add-ip-to-share
A list of NFS audit shares enabled on the Admin Node is displayed. The audit share is listed
as: /var/local/audit/export
5. When prompted, enter the audit client’s IP address or IP address range for the audit share:
client_IP_address
7. Repeat from step 3 for each audit client that should be added to the audit share.
b. Repeat steps 2 through 9 to configure the audit shares for each Admin Node.
c. Close the remote secure shell login to the remote Admin Node: exit
Steps
1. Verify connectivity using the ping command (or the equivalent for the client system) with the
client-side IP address of the Admin Node hosting the AMS service. Enter: ping IP_address
Verify that the server responds, indicating connectivity.
2. Mount the audit read-only share using a command appropriate to the client operating system. A
sample Linux command is (enter on one line):
mount -t nfs -o hard,intr Admin_Node_IP_address:/var/local/audit/export myAudit
Use the IP address of the Admin Node hosting the AMS service and the predefined share name
for the audit system. The mount point can be any name selected by the client (for example,
myAudit in the previous command).
3. Verify that the files are available from the audit share. Enter: ls myAudit/*
where myAudit is the mount point of the audit share. There should be at least one log file listed.
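To remount the audit share automatically at boot, an /etc/fstab entry along these lines can be added on the client; the IP address and the /mnt/myAudit mount point are placeholders:

```text
# /etc/fstab (excerpt) -- mount the audit share read-only at boot
Admin_Node_IP_address:/var/local/audit/export  /mnt/myAudit  nfs  ro,hard,intr  0  0
```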
• You must have the Passwords.txt file with the root/admin account password (available in the
SAID package).
• You must have the Configuration.txt file (available in the SAID package).
Steps
-----------------------------------------------------------------
| Shares | Clients | Config |
-----------------------------------------------------------------
| add-audit-share | add-ip-to-share | validate-config |
| enable-disable-share | remove-ip-from-share | refresh-config |
| | | help |
| | | exit |
-----------------------------------------------------------------
8. If your StorageGRID Webscale deployment is a multiple data center site deployment with
additional Admin Nodes at the other sites, disable these audit shares as required:
b. Repeat steps 2 through 7 to configure the audit shares for each additional Admin Node.
c. Close the remote secure shell login to the remote Admin Node: exit
Steps
Related tasks
Adding an NFS audit client to an audit share on page 245
Removing an NFS audit client from the audit share on page 247
• Configure a federated identity source (such as Active Directory or OpenLDAP) so you can import
administration groups and users.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• Administration (or “admin”) groups. The users in these groups can sign in to the Grid Manager
and perform tasks, based on the management permissions assigned to the group. See “About
administration user groups.”
• Tenant account groups, assuming that the tenant is not using its own identity source (that is,
assuming the Uses Own Identity Source checkbox is unchecked for the tenant account). Users in
tenant account groups can sign in to the Tenant Manager and perform tasks, based on the
permissions assigned to the group. See the information about using tenant accounts.
Note: When using identity federation, be aware that users who only belong to a primary group on
Active Directory are not allowed to sign in to the Grid Manager or the Tenant Manager. To allow
these users to sign in, grant them membership in a user-created group.
Steps
3. Select the type of LDAP service you want to configure from the LDAP Service Type drop-down
list.
You can select Active Directory, OpenLDAP, or Other.
Note: If you select OpenLDAP, you must configure the OpenLDAP server. See “Guidelines
for configuring an OpenLDAP server.”
4. If you selected Other, complete the fields in the LDAP Attributes section.
• Unique User Name: The name of the attribute that contains the unique identifier of an LDAP
user. This attribute is equivalent to sAMAccountName for Active Directory and uid for
OpenLDAP.
• User UUID: The name of the attribute that contains the permanent unique identifier of an
LDAP user. This attribute is equivalent to objectGUID for Active Directory and entryUUID
for OpenLDAP.
• Group Unique Name: The name of the attribute that contains the unique identifier of an
LDAP group. This attribute is equivalent to sAMAccountName for Active Directory and cn
for OpenLDAP.
• Group UUID: The name of the attribute that contains the permanent unique identifier of an
LDAP group. This attribute is equivalent to objectGUID for Active Directory and
entryUUID for OpenLDAP.
• Port: The port used to connect to the LDAP server. This is typically 389.
• Username: The username used to access the LDAP server, including the domain.
The specified user must have permission to list groups and users and to access the following
attributes:
◦ cn
◦ sAMAccountName or uid
◦ objectGUID or entryUUID
◦ memberOf
• Group Base DN: The fully qualified Distinguished Name (DN) of an LDAP subtree you want
to search for groups. In the example, all groups whose Distinguished Name is relative to the
base DN (DC=storagegrid,DC=example,DC=com) can be used as federated groups.
Note: The Unique Group Name values must be unique within the Group Base DN they
belong to.
• User Base DN: The fully qualified Distinguished Name (DN) of an LDAP subtree you want
to search for users.
Note: The Unique User Name values must be unique within the User Base DN they belong
to.
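One way to confirm that the service account can read groups under the Group Base DN is an ldapsearch query. The following sketch uses the Active Directory example base DN from above; the server host name and bind account are hypothetical:

```text
ldapsearch -H ldap://ldap.example.com:389 \
  -D "EXAMPLE\svc-storagegrid" -W \
  -b "DC=storagegrid,DC=example,DC=com" \
  "(objectClass=group)" cn sAMAccountName objectGUID
```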
6. Select a security setting from the Transport Layer Security (TLS) drop-down list to specify if
TLS is used to secure communications with the LDAP server.
• Use operating system CA certificate: Use the default CA certificate installed on the
operating system to secure connections.
• Do not use TLS: The network traffic between the StorageGRID Webscale system and the
LDAP server will not be secured.
Example
The following screenshot shows example configuration values for an LDAP server that uses
Active Directory.
7. Optionally, click Test Connection to validate your connection settings for the LDAP server.
8. Click Save.
Related concepts
Admin group permissions on page 253
Related tasks
Creating a tenant account on page 93
Related information
Using tenant accounts
Indexing
You must configure the following OpenLDAP attributes with the specified index keywords:
olcDbIndex: objectClass eq
olcDbIndex: uid eq,pres,sub
olcDbIndex: cn eq,pres,sub
olcDbIndex: entryUUID eq
In addition, ensure the fields mentioned in the help for Username are indexed for optimal
performance.
See the information about reverse group membership maintenance in the administrator’s guide for
OpenLDAP.
Related information
OpenLDAP documentation: Version 2.4 Administrator's Guide
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. Click Synchronize.
A confirmation message is displayed indicating that synchronization started successfully.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
• Federated users who are currently signed in retain access to the StorageGRID Webscale system until their session expires, but they cannot sign in again after that.
• Synchronization between the StorageGRID Webscale system and the identity source will not
occur, and alarms will not be raised for accounts that have not been synchronized.
Steps
3. Click Save.
• Monitor alarms
The table shows the permissions you can assign when creating or editing an admin group for
StorageGRID Webscale access. Any functionality not explicitly mentioned in the table requires the
Root Access permission.
Note: You can use the Grid Management API to completely deactivate certain features. When a
feature has been deactivated, the corresponding Management Permission no longer appears on the
Groups page.
Grid Topology Page Configuration Provides access to the Configuration tabs in Grid
Topology.
ILM Provides access to the following menu options:
◦ Decommission
◦ Recovery
◦ DNS Servers*
◦ NTP Servers*
◦ License*
◦ Recovery Package
◦ Software Upgrade
◦ Domain Names*
◦ Server Certificates*
◦ Audit*
◦ Grid Options
◦ Link Cost
◦ Storage Options
◦ Display Options
◦ Global Alarms
◦ Notifications
◦ Email Setup
◦ AutoSupport
◦ Events
• ILM:
◦ Storage Pools
◦ Storage Grades
Tenant Accounts Provides access to the Tenant Accounts page from the
Tenants option. Users who have this permission can
add, edit, or remove tenant accounts. Users with this
permission can also set the initial root password for the tenant and can sign in to the tenant until the root password is changed. Users who do not have this
permission do not see the Tenants option in the menu.
Note: Version 1 of the Grid Management API
(which has been deprecated) uses this permission to
manage tenant group policies, reset Swift admin
passwords, and manage root user S3 access keys.
Related tasks
Deactivating features from the Grid Management API on page 257
For details, see the instructions for implementing S3 or Swift client applications.
Steps
3. To deactivate a feature, such as Change Tenant Root Password, send a request body to the API that names the feature to deactivate.
When the request is complete, the Change Tenant Root Password feature is disabled. The Change
Tenant Root Password management permission no longer appears in the user interface, and any
API request that attempts to change the root password for a tenant will fail with “403 Forbidden.”
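The request body for step 3 is not shown in this excerpt. As a hedged sketch — assuming the API identifies each feature with a camelCase key under a "grid" object, so the key name changeTenantRootPassword is an assumption — it would look like this:

```
{ "grid": { "changeTenantRootPassword": true } }
```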
4. To reactivate all previously deactivated features, send this request body:
{ "grid": null }
When this request is complete, all features, including the Change Tenant Root Password feature, are reactivated. The Change Tenant Root Password management permission reappears in the user interface, and any API request that attempts to change the root password for a tenant will succeed, provided the user has the Root Access or Change Tenant Root Password management permission.
Note: The previous example causes all deactivated features to be reactivated. If other features
have been deactivated that should remain deactivated, you must explicitly specify them in the
PUT request. For example, to reactivate the Change Tenant Root Password feature and
continue to deactivate the Alarm Acknowledgment feature, send this PUT request:
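The PUT request referenced in the note is not shown in this excerpt. The logic it describes — resend every feature that should stay deactivated and omit the ones to reactivate — can be sketched in Python; the camelCase feature key names are assumptions for illustration:

```python
def reactivation_body(currently_deactivated, reactivate):
    """Build the PUT body that keeps the listed features deactivated.

    Features in `reactivate` are simply omitted from the body, which
    reactivates them. If nothing remains deactivated, the body becomes
    {"grid": None}, which reactivates every feature.
    """
    remaining = {name: True
                 for name in sorted(currently_deactivated)
                 if name not in reactivate}
    return {"grid": remaining or None}

# Reactivate Change Tenant Root Password but keep Alarm Acknowledgment
# deactivated (key names are hypothetical):
body = reactivation_body(
    {"changeTenantRootPassword", "alarmAcknowledgment"},
    {"changeTenantRootPassword"},
)
print(body)  # {'grid': {'alarmAcknowledgment': True}}
```

The helper name and feature keys are illustrative only; the actual key for each feature comes from the Grid Management API documentation for your release.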
Related concepts
Understanding the Grid Management API on page 12
• Local users: You can create admin user accounts that are local to the StorageGRID Webscale system and add these users to local StorageGRID Webscale admin groups.
• Federated users: You can use a federated identity source (such as Active Directory or
OpenLDAP) to import administration groups and users. The identity source manages the groups
to which users belong, so you cannot add federated users to local groups. Also, you cannot edit
federated user information; this information is synchronized with the external identity source.
Although you can add and delete local users, you cannot delete the root user. After creating groups,
you assign users to one or more local groups.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. Click Add.
4. For local groups, enter the name for the group as it will appear to users, for example, “Development US”.
5. Enter a unique name without spaces for the group, for example, “Dev_US”.
7. Click Save.
A new group is created and added to the list of group names available for user accounts. User
accounts can now be associated with the new group.
Related concepts
Admin group permissions on page 253
Related tasks
Creating an admin user account on page 260
Modifying a local user's account on page 261
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
3. Click Edit.
4. For local groups, enter the name for the group as it will appear to users, for example, “Development US”.
You cannot change the unique name, which is the internal group name.
6. Click Save.
Related concepts
Admin group permissions on page 253
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
3. Click Remove.
4. Click OK.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
2. Click Create.
The list of group names is generated from the Groups table.
4. Assign the user to one or more groups that govern the access permissions.
5. Click Save.
Related tasks
Creating admin groups on page 258
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
3. Click Edit.
5. Optionally, to prevent the user from accessing the system temporarily, check Deny Access.
6. Click Save.
The new settings are applied the next time the user signs out and then signs back in to the
StorageGRID Webscale system.
Steps
If your system includes more than 20 items, you can specify how many rows are shown on each
page at one time. You can then use your browser's find feature to search for a specific item in the
currently displayed rows.
3. Click Remove.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
Related concepts
Managing disk storage on page 178
Related tasks
Monitoring storage capacity for the entire grid on page 30
Caution: An ILM policy that has been incorrectly specified can cause unrecoverable data loss.
Carefully review all changes you make to an ILM policy before activating it to make sure the
policy will work as intended.
Related concepts
What an information lifecycle management policy is on page 106
Related tasks
Configuring ILM rules on page 113
Related concepts
Monitoring data migration on page 266
Monitor the following during data migration:
• Number of objects waiting for ILM evaluation:
1. Select Support > Grid Topology.
2. Select deployment > Overview > Main.
• Targeted archival system's storage capacity: If the ILM policy saves a copy of the migrated data to a targeted archival storage system (tape or the cloud), monitor the capacity of the targeted archival storage system to ensure that there is sufficient capacity for the migrated data.
• Archive Node > ARC > Store > Store Failures (ARVF): If an alarm for this attribute is triggered, the targeted archival storage system might have reached capacity. Check the targeted archival storage system and resolve any issues that triggered an alarm.
• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.
Steps
1. Create an email list that includes all administrators responsible for monitoring the data migration.
What data migration is | 267
Optionally, you can create a template to customize the subject line, header, and footer of data
migration notification emails.
2. Create a Global Custom alarm for each attribute you need to monitor during data migration.
b. Under Default Alarms, search for the Default alarm for the first attribute: under Filter by, select Attribute Code, and then type the four-letter code for the attribute (for example, ARVF).
c. Click Submit.
d. In the results list, click Copy next to the alarm you want to modify.
The alarm moves to the Global Custom Alarms table.
e. Under Global Custom Alarms, in the Mailing List column for the copied attribute, add the
mailing list.
Related tasks
Configuring email server settings on page 57
Creating mailing lists on page 59
• Because the Archive Node does not aggregate objects before saving them to the TSM server, the
TSM database must be sized to hold references to all objects that will be written to the Archive
Node.
• Archive Node software cannot tolerate the latency involved in writing objects directly to tape or
other removable media. Therefore, the TSM server must be configured with a disk storage pool
for the initial storage of data saved by the Archive Node whenever removable media are used.
• You must configure TSM retention policies to use event-based retention. The Archive Node does not support creation-based TSM retention policies. Use the recommended settings of retmin=0 and retver=0 in the retention policy, which indicate that retention begins when the Archive Node triggers a retention event and that content is retained for 0 days after that event. (These values for retmin and retver are optional.)
The disk pool must be configured to migrate data to the tape pool (that is, the tape pool must be the
NXTSTGPOOL of the disk pool). The tape pool must not be configured as a copy pool of the disk
pool with simultaneous write to both pools (that is, the tape pool cannot be a COPYSTGPOOL for
the disk pool). To create offline copies of the tapes containing Archive Node data, configure the TSM
server with a second tape pool that is a copy pool of the tape pool used for Archive Node data.
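Under the constraints above, the pool relationships might be set up with TSM commands like these — a sketch, not a complete configuration; the copy pool and device class names (SGWSCopyPool, tapeclass) are hypothetical:

```
update stgpool SGWSDiskPool nextstgpool=SGWSTapePool
define stgpool SGWSCopyPool tapeclass pooltype=copy
backup stgpool SGWSTapePool SGWSCopyPool
```

The backup stgpool command must be run (or scheduled) periodically to keep the offline copies current.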
Related concepts
Managing archival storage on page 203
• Defining a disk storage pool, and a tape storage pool (if required) on the TSM server
• Defining a domain policy that uses the TSM management class for the data saved from the
Archive Node, and registering a node to use this domain policy
These instructions are provided for your guidance only; they are not intended to replace TSM
documentation, or to provide complete and comprehensive instructions suitable for all configurations.
Deployment specific instructions should be provided by a TSM administrator who is familiar both
with your detailed requirements, and with the complete set of TSM Server documentation.
Steps
Where tapelibrary is an arbitrary name chosen for the tape library, and the value of libtype
can vary depending upon the type of tape library.
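The command itself is elided in this excerpt; for a SCSI-attached library it might look like this (libtype=scsi is an assumption about your hardware):

```
define library tapelibrary libtype=scsi
```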
You might want to configure an additional drive or drives, depending upon your hardware
configuration. (For example, if the TSM server is connected to a Fibre Channel switch that has
two inputs from a tape library, you might want to define a drive for each input.)
Repeat for each drive that you have defined for the tape library, using a separate drivename and
drive-dname for each drive.
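As a hedged sketch of the repeated definitions, with hypothetical server, drive, and device names:

```
define drive tapelibrary drive01
define path tsmserver drive01 srctype=server desttype=drive library=tapelibrary device=/dev/rmt0
```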
• SGWSTapePool is the name of the Archive Node’s tape storage pool. You can select any name
for the tape storage pool (as long as the name uses the syntax conventions expected by the
TSM server).
• DeviceClassName is the name of the device class for the tape library.
• description is a description of the storage pool that can be displayed on the TSM server
using the query stgpool command. For example: “Tape storage pool for the Archive
Node.”
• collocate=filespace specifies that the TSM server should write objects from the same
file space into a single tape.
◦ The number of empty tapes in the tape library (in the case that the Archive Node is the
only application using the library).
◦ The number of tapes allocated for use by the StorageGRID Webscale system (in instances
where the tape library is shared).
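Putting the parameters above together, the tape pool definition might look like this; the device class name and the maxscratch value of 20 are assumptions to be replaced with values for your environment:

```
define stgpool SGWSTapePool DeviceClassName description="Tape storage pool for the Archive Node" collocate=filespace maxscratch=20
```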
8. On a TSM server, create a disk storage pool. At the TSM server’s administrative console, enter
define stgpool SGWSDiskPool disk description=description
maxsize=maximum_file_size nextstgpool=SGWSTapePool highmig=percent_high
lowmig=percent_low
• SGWSDiskPool is the name of the Archive Node's disk pool. You can select any name for the disk storage pool (as long as the name uses the syntax conventions expected by the TSM server).
• description is a description of the storage pool that can be displayed on the TSM server
using the query stgpool command. For example, “Disk storage pool for the Archive
Node.”
• maximum_file_size forces objects larger than this size to be written directly to tape, rather
than being cached in the disk pool. It is recommended to set maximum_file_size to 10 GB.
• nextstgpool=SGWSTapePool refers the disk storage pool to the tape storage pool defined
for the Archive Node.
• percent_high sets the value at which the disk pool begins to migrate its contents to the tape pool. It is recommended to set percent_high to 0 so that data migration begins immediately.
• percent_low sets the value at which migration to the tape pool stops. It is recommended to
set percent_low to 0 to clear out the disk pool.
9. On a TSM server, create a disk volume (or volumes) and assign it to the disk pool.
define volume SGWSDiskPool volume_name formatsize=size
• volume_name is the full path to the location of the volume (for example, /var/local/arc/stage6.dsm) on the TSM server, where the TSM server writes the contents of the disk pool in preparation for transfer to tape.
For example, to create a single disk volume such that the contents of a disk pool fill a single tape,
set the value of size to 200000 when the tape volume has a capacity of 200 GB.
However, it might be desirable to create multiple disk volumes of a smaller size, as the TSM
server can write to each volume in the disk pool. For example, if the tape size is 250 GB, create
25 disk volumes with a size of 10 GB (10000) each.
The TSM server preallocates space in the directory for the disk volume. This can take some time
to complete (more than three hours for a 200 GB disk volume).
When registering a node on the TSM server for the use of the Archive Node (or updating an existing
node), you must specify the number of mount points that the node can use for write operations by
specifying the MAXNUMMP parameter to the REGISTER NODE command. The number of mount
points is typically equivalent to the number of tape drive heads allocated to the Archive Node. The
number specified for MAXNUMMP on the TSM server must be at least as large as the value set for
the ARC > Target > Configuration > Main > Maximum Store Sessions for the Archive Node,
which is set to a value of 0 or 1, as concurrent store sessions are not supported by the Archive Node.
The value of MAXSESSIONS set for the TSM server controls the maximum number of sessions that
can be opened to the TSM server by all client applications. The value of MAXSESSIONS specified on the TSM server must be at least as large as the value specified for ARC > Target > Configuration >
Main > Number of Sessions in the Grid Manager for the Archive Node. The Archive Node
concurrently creates at most one session per mount point plus a small number (< 5) of additional
sessions.
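MAXSESSIONS is a TSM server option. As a sketch, it could be raised in the server options file (dsmserv.opt); the value 25 is a placeholder to be sized from Number of Sessions plus the extra sessions noted above:

```
MAXSESSIONS 25
```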
The TSM node assigned to the Archive Node uses a custom domain policy, tsm-domain. The tsm-domain domain policy is a modified version of the “standard” domain policy, configured to write to tape and with the archive destination set to the StorageGRID Webscale system's storage pool (SGWSDiskPool).
Integrating Tivoli Storage Manager | 273
Note: You must log in to the TSM server with administrative privileges and use the dsmadmc tool
to create and activate the domain policy.
Steps
2. If you are not using an existing management class, enter the following commands:
define policyset tsm-domain standard
define mgmtclass tsm-domain standard default
3. Create a copy group that points to the appropriate storage pool. Enter (on one line):
define copygroup tsm-domain standard default type=archive
destination=SGWSDiskPool retinit=event retmin=0 retver=0
default is the default Management Class for the Archive Node. The values of retinit, retmin, and retver have been chosen to reflect the retention behavior currently used by the Archive Node.
Note: Do not set retinit=create. Doing so blocks the Archive Node from deleting content, because retention events are used to remove content from the TSM server.
Ignore the “no backup copy group” warning that appears when you enter the activate command.
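The activate command referenced here is not shown in this excerpt; assuming the tsm-domain and standard names used in step 2, it would be:

```
activate policyset tsm-domain standard
```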
6. Register a node to use the new policy set on the TSM server. On the TSM server, enter (on one
line):
register node arc-user arc-password passexp=0 domain=tsm-domain
MAXNUMMP=number-of-sessions
arc-user and arc-password are the same client node name and password that you defined on the Archive Node, and the value of MAXNUMMP is set to the number of tape drives reserved for Archive Node store sessions.
Note: By default, registering a node creates an administrative user ID with client owner
authority, with the password defined for the node.
Copyright information
Copyright © 2018 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means—
graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
Data contained herein pertains to a commercial item (as defined in FAR 2.101) and is proprietary to
NetApp, Inc. The U.S. Government has a non-exclusive, non-transferrable, non-sublicensable,
worldwide, limited irrevocable license to use the Data only in connection with and in support of the
U.S. Government contract under which the Data was delivered. Except as provided herein, the Data
may not be used, disclosed, reproduced, modified, performed, or displayed without the prior written
approval of NetApp, Inc. United States Government license rights for the Department of Defense are
limited to those rights identified in DFARS clause 252.227-7015(b).
Trademark information
NETAPP, the NETAPP logo, and the marks listed on the NetApp Trademarks page are trademarks of
NetApp, Inc. Other company and product names may be trademarks of their respective owners.
http://www.netapp.com/us/legal/netapptmlist.aspx
Index
A custom notification migration 266
customizing alarms 63
aborting grid tasks Default 64
procedure for 223 disabling 74
accounts, group disabling Default alarms 74, 75
deleting 260 disabling Global Custom alarms, service level 76
accounts, user disabling Global Custom alarms, system wide 77
adding 260 email notifications 55
creating 260 examples of how triggered 66
deleting 262 Global Custom 65
Acknowledge Alarms permission 253 icons for 28
Active Directory monitoring 21, 66
adding users or groups to audit share 240 new services 69
audit clients 237 notifications 55, 56
changing audit client share user or group name 243 of same severity 68
configuring audit clients 237 overriding higher priority alarm 69
removing users from audit share 242 severity changes 69
active ILM policy severity levels 28
defined 106 SMTT Total Events 54
ADC service 182 table, displayed in 74
add-audit-share command, in config_cifs.rb 234, 237 triggering evaluation order 66
add-ip-to-share command, in config_nfs.rb 245 triggering logic 66
add-user-to-share command 240 types 64
add-user-to-share command in config_cifs.rb 240 viewing Default 64
add-user-to-share command, in config_cifs.rb 234 All Storage Nodes storage pool 118
adding API
Storage Nodes 189 cross-site request forgery (CSRF) 16
storage volumes 189 Grid Management 12
admin groups versioning 15
managing 249 API Gateway Node
permissions 253 description of 201
Admin Node appliance
defined 218 viewing events 53
primary Admin Node, defined 218 viewing Storage Nodes 44
redundancy 219 ARC
admin users archive read-only on startup 215
managing 249 archive store state 215
Administrative Domain Controller configuration
See ADC Target component 208
Administratively Down description of service 203
operational state 28 optimizing for Tivoli Storage Manager 210
advanced filtering resetting store failure count 215, 216
in ILM rules 128 retrieve component 214
AES-128 189 Tivoli Storage Manager
AES-256 189 unavailable 211
aggregate text report 88 archive
alarms monitoring storage capacity 40
acknowledgments 219 read-only on startup 215
by code retrieve state 214
SAVP Total Usable Space (Percent) 183 store set 215, 216
SSTS Storage Status 183 Archive Node
class overrides 69 capacity, full 212
clearing triggered alarms 78 configurations described 268
colors 28 configure replication settings 213
configuring email notifications 57 configure target 204
creating Custom alarms 70 configuring cloud connections 205
creating email mailing lists 59 configuring S3 connections 205
creating Global Custom alarms 72 description 203
Custom 66 destination 204
Connected E
operational state 28
consistency level editing tenant accounts 96
compliance configuration for S3 bucket 177 Elasticsearch
content and the search integration service 100
storage capacity 30 email notifications
content block identifier configure global notification 60
See CBID configuring email server 57
content placement instructions 123 create global notification 60
content verification 196 create mailing lists 59
corrupt objects 186 create templates 58
CPU status 41 description 55
CPU usage 21 events 55
creating tenant accounts 93 for alarms 69
Critical islanded Admin Nodes 63
alarm 28 mail server settings 57
cross-site replication preferred sender 63, 220
using multiple storage pools 116 service state notifications 55
CSRF severity level in notifications 55
protecting against attacks for API clients 16 suppressing for entire system 62
Custom alarms suppressing for mailing lists 61
creating 70 template 58
described 64 test email 57
triggering logic 66 email server
configuring 57
email template 58
D encryption
Dashboard disabling 189
described 18 network transfer 226
data center topology, example ILM policy 155, 158, 161 endpoints
data migration proxy configuration for platform services 101
attributes, monitoring erasure coding
ARVF 266 advantages and disadvantages 112
Awaiting - All (XQUZ) 266 compared to replication 109
Awaiting - Client (XCQZ) 266 overview 109
creating Custom alarms 266 requirements 112
grid capacity, check 264 schemes 110
ILM policy 264 Erasure Coding profile
impact on grid operations 265 configuring 119
notifications 266 eth0 222
schedule time of day 265 eth1 222
DDS service eth2 222
described 180 events
object count 180 alarms 55
object metadata 182 events, hardware
queries 181 viewing 53
Default alarms example policies 155, 158, 161
described 64 examples
disabling 74, 75 alarm triggering 66
triggering logic 66 exporting text reports 90
deleting tenant accounts 99
disable inbound replication 186 F
disable outbound replication 186
disabling identity federation 253 FabricPool
documentation StorageGRID Webscale certificates 230
how to receive automatic notification of changes to feedback
276 how to send comments about documentation 276
how to send feedback about 276 foreground verification
domain policy missing data 198
activating for TSM 273
creating for TSM 273
dual commit
described 108
J M
join-domain command, in config_cifs.rb 237 mailing lists
suppressing email notifications from 61
Maintenance permission 253
K Major
key alarm 28
using as ILM rule filter 128 Make 2 Copies rule, ILM 118
management class, Tivoli Storage Manager 204
memory usage 21
L metadata
for advanced filtering in ILM rules 128
last access time
in ILM rules 106
ILM rules 132
Metadata Reserved Space (CAWM)
monitoring 35 O
Metrics Query permission 253
MIB object count
OID values 224 DDS service 180
SNMP 223 object data
SNMP monitoring 223 corrupt 196
microseconds missing 196, 198
how to calculate for ILM rules 132 verify integrity 196
Minor object metadata
alarm 28 DDS service 182
MINS E-mail Notification Status alarm 55 monitoring space used per Storage Node 35
monitoring object metadata lookup
CPU usage 21 using to verify ILM policy 146
memory usage 21 Object Metadata Lookup permission 253
nodes 20 object segmentation 195
object metadata capacity per Storage Node 35 object size
storage capacity per Storage Node 33 using as ILM rule filter 128
system capacity 30 object store
volume ID 179
object stores 179
N object tag
NetBIOS name using as ILM rule filter 128
file share configuration 237 objects
network connections compliant S3 buckets 164
viewing for appliance 44 OID
network transfer encryption SNMP 223
disable 226 values 224
enable 226 ONTAP
networking requirements StorageGRID Webscale certificates 230
for platform services 100 OpenLDAP
NFS audit share configuration guidelines for 252
removing clients 247 optimizing performance, middleware sessions
verifying integration 247 Tivoli Storage Manager 204
NFS share configuration optimizing storage 195
adding client to an audit share 245 Other Grid Configuration permission 253
changing client IP address 248
configure the audit client 244 P
removing client from the audit share 247
NMS service password for tenant account 93
defined 218 passwords
Interface Engine page 55 changing 10
Nodes page changing for others 262
described 20 changing for tenant account's root user 96
Hardware tab 21 Percentage Storage Capacity Used attribute (PSCU) 30
ILM tab 26 permissions
Network tab 21 setting for groups 258
Objects tab 26 platform services
Overview tab 21 allowing for an existing tenant 97
nodetool repair networking and ports for 100
Storage Nodes 182 non-transparent proxy 101
Normal overview 100
alarm 28 policies
Notice activating 133, 145
alarm 28 active 106
notifications configuring 133
configuring 56 historical 106
for tenant accounts 100 migrated data 264
suppressing for entire system 62 order of rules 106
suppressing for mailing lists 61 proposed 106, 134
simulating 137
verifying 133, 146
ports
configuration for platform services 100
configuration T
Resources component 43
reset event counters 42 template 58
services 41 tenant accounts
SSM service changing password for root user 96
events 54 creating 93
Total Events alarm (SMTT) 54 deleting 99
SSTS Storage Status alarm 183 delivery of platform services messages 102
STAS Total Usable Space 183 editing 97
STAS Total Usable Space attribute 30 overview 92
state graphs 83 permissions for 253
storage platform services and 100
background verification 196 platform services errors 103
storage capacity removing 99
archive media 40 Tenant Accounts permission 253
content 30 tenant root user password 93, 96
monitoring 30 text reports
monitoring per Storage Node 33 creating 89
monitoring system-wide 30 exporting 90
watermarks 183 printing 89
storage grade types of 88
assign to LDR 113 timeout period
configuring 113 changing 11
creating a list 113 Tivoli Storage Manager
storage grades ARC 204
defined 115 configuration best practices 268
Storage Node configure 208
background verification 38, 196 domain policy for 272
encryption 189 lifecycle and retention rules 204
foreground verification 38, 196, 198 management class 204
LDR 179 middleware 208
nodetool repair 182 register nodes for 272
object mapping 179 Tivoli Storage Manager ARC
viewing appliance information 44 optimizing Archive Node 210, 212
watermarks 183 optimizing performance 210
storage pools unavailable 211
configure 118 Total Events alarm (SMTT) 54
defined 115 Total Usable Space (Percent) SAVP
guidelines for creating 115 alarm 183
using with replicated object copies 109 Total Usable Space attribute (STAS) 30
Storage Status - Current SSCR 183 Total Usable Space STAS 183
Storage Status SSTS alarm 183 TSM tape storage pools
StorageGRID Webscale system defining 270
copying CA certificates 229 Twitter
stored object how to receive automatic notification of
configuring encryption 189 documentation changes 276
configuring hashing 190
enabling compression 191
suggestions
U
how to send feedback about documentation 276 Unknown
supplementary network IP addresses 221 operational state 28
Swift us-east-1
container/object name 146 default region 122
tenant account 92 user accounts
Swift clients adding 260
effect of HTTP option 193 changing password 10
effect of Prevent Client Modify option 192 creating 260
synchronization deleting 262
identity source 252 modifying 261
system events 18 permissions 10, 258
system status user metadata
SNMP 223 using as ILM rule filter 128
users
W
watermarks