
StorageGRID® Webscale 11.1

Administrator Guide

October 2018 | 215-12796_B0


doccomments@netapp.com

Contents
Working with the StorageGRID Webscale system .................................... 8
Web browser requirements .......................................................................................... 8
Signing in to the Grid Manager ................................................................................... 9
Signing out of the Grid Manager ............................................................................... 10
Changing your password ........................................................................................... 10
Changing the browser session timeout period ........................................................... 11
Viewing StorageGRID Webscale license information .............................................. 12
Updating StorageGRID Webscale license information ............................................. 12
Understanding the Grid Management API ................................................................ 12
Grid Management API versioning ................................................................ 15
Protecting against Cross-Site Request Forgery (CSRF) ................................ 16
Monitoring the StorageGRID Webscale system ...................................... 18
Viewing the Dashboard ............................................................................................. 18
Viewing the Nodes page ............................................................................................ 20
Viewing common tabs for all node types ...................................................... 21
Viewing the Objects and ILM tabs for Storage Nodes .................................. 26
Viewing the Grid Topology tree ................................................................................ 28
Understanding node icons ......................................................................................... 28
Information that you should monitor regularly ......................................................... 29
Key attributes to monitor ............................................................................... 30
Monitoring storage capacity .......................................................................... 30
Monitoring the recovery point objective through ILM ................................. 37
Monitoring object verification operations ..................................................... 38
Monitoring archival capacity ......................................................................... 40
Monitoring servers and grid nodes ................................................................ 40
Monitoring the Total Events alarm ................................................................ 54
About alarms and email notifications ........................................................................ 55
Alarm notification types ................................................................................ 55
Notification status and queues ....................................................................... 55
Configuring notifications .............................................................................. 56
Suppressing email notifications for a mailing list ......................................... 61
Suppressing email notifications system wide ................................................ 62
Selecting a preferred sender .......................................................................... 63
Managing alarms ....................................................................................................... 63
Alarm class types .......................................................................................... 64
Alarm triggering logic ................................................................................... 66
Creating custom service or component alarms ............................................. 70
Creating Global Custom alarms .................................................................... 72
Disabling alarms ........................................................................................................ 74
Alarms and tables .......................................................................................... 74
Disabling Default alarms for services ........................................................... 74
Disabling a Default alarm system wide ......................................... 75
Disabling Global Custom alarms for services ............................................... 76
Disabling Global Custom alarms system wide ............................................. 77
Clearing triggered alarms .............................................................................. 78
What AutoSupport is ................................................................................................. 79
Sending user-triggered AutoSupport messages ............................................. 79
Disabling weekly AutoSupport messages ..................................................... 80
Disabling event-triggered AutoSupport messages ........................................ 81
Troubleshooting AutoSupport messages ....................................................... 82
Using reports ............................................................................................................. 83
Types of charts ............................................................................................... 83
Chart legend .................................................................................................. 84
Displaying charts ........................................................................................... 85
Generating charts ........................................................................................... 87
Types of text reports ...................................................................................... 88
Generating text reports .................................................................................. 89
Exporting text reports .................................................................................... 90
Creating and managing tenant accounts .................................................. 92
Creating a tenant account .......................................................................................... 93
Changing the password for a tenant account's root user ........................................... 96
Editing a tenant account ............................................................................................ 97
Deleting tenant accounts ........................................................................................... 99
Managing platform services .................................................................................... 100
What platform services are .......................................................................... 100
Networking and ports for platform services ................................................ 100
Per-site delivery of platform services messages .......................................... 102
Troubleshooting platform services .............................................................. 103
Managing objects through information lifecycle management ............ 106
What an information lifecycle management policy is ............................................. 106
What dual commit is ................................................................................... 108
What an information lifecycle management rule is ................................................ 108
What replication and erasure coding are ..................................................... 109
Configuring ILM rules ............................................................................................ 113
Creating and assigning storage grades ........................................................ 113
Configuring storage pools ........................................................................... 115
Configuring an Erasure Coding profile ....................................................... 119
Configuring regions (optional and S3 only) ................................................ 122
Creating an ILM rule ................................................................................... 123
Creating, simulating, and activating an ILM policy ................................................ 133
Creating a proposed ILM policy ................................................................. 134
Simulating an ILM policy ........................................................................... 137
Activating the ILM policy ........................................................................... 145
Verifying an ILM policy with object metadata lookup ............................... 146
Working with ILM rules and ILM policies ............................................................. 149
Deleting an ILM rule ................................................................................... 149
Editing an ILM rule ..................................................................... 149
Cloning an ILM rule .................................................................................... 150
Viewing the ILM policy activity queue ....................................................... 151
Example ILM rules and policies ............................................................................. 152
Example 1: ILM rules and policy for object storage ................................... 152
Example 2: ILM rules and policy for EC object size filtering .................... 156
Example 3: ILM rules and policy for better protection for image files ...... 158
Managing S3 buckets and objects for compliance ................................. 162
How StorageGRID Webscale protects compliant data ............................................ 162
Compliance workflow ............................................................................................. 163
Considerations for compliance ................................................................................ 164
Enabling compliance ............................................................................................... 168
Example: Using compliant ILM rules in an ILM policy ......................................... 169
S3 bucket for compliance example: bank-records ....................................... 169
ILM rule 1 for compliance example: Erasure Coding profile with bucket matching ...... 170
ILM rule 2 for compliance example: Non-compliant rule .......................... 172
ILM rule 3 for compliance example: Default rule ...................................... 173
ILM policy for compliance example ........................................................... 174
Resolving consistency errors when updating a bucket's compliance configuration ...... 177
Managing disk storage ............................................................................. 178
What a Storage Node is ........................................................................................... 178
What the LDR service is ............................................................................. 178
What the DDS service is ............................................................................. 180
What the CMS service is ............................................................................. 182
What the ADC service is ............................................................................. 182
Managing Storage Nodes ........................................................................................ 183
What watermarks are ................................................................................... 183
Storage Node configuration settings ........................................................... 186
Managing full Storage Nodes ...................................................................... 189
Setting Grid Options ................................................................................................ 189
Configuring stored object encryption .......................................................... 189
Configuring stored object hashing .............................................................. 190
Configuring stored object compression ....................................................... 191
Enabling Prevent Client Modify .................................................................. 192
Enabling HTTP for client communications ................................................ 193
What object segmentation is ................................................................................... 195
Verifying object integrity ........................................................................................ 196
What background verification is ................................................................. 196
Configuring the background verification rate ............................................. 196
What foreground verification is ................................................................... 198
Running foreground verification ................................................................. 198
How load balancing works ...................................................................................... 201
Managing archival storage ...................................................................... 203
What an Archive Node is ........................................................................ 203
What the ARC service is ............................................................................. 203
About supported archive targets .................................................................. 204
Managing connections to archival storage .............................................................. 204
Configuring connection settings for S3 API ............................................... 205
Modifying connection settings for S3 API .................................................. 207
Configuring connections to Tivoli Storage Manager middleware ............... 208
Managing Archive Nodes ........................................................................................ 210
Optimizing Archive Node for TSM middleware sessions ........................... 210
Managing an Archive Node when TSM server reaches capacity ................ 211
Configuring Archive Node replication ........................................................ 213
Configuring retrieve settings ....................................................................... 214
Configuring the archive store ...................................................................... 215
Setting Custom alarms for the Archive Node ............................................. 216
What an Admin Node is ........................................................................... 218
Admin Node redundancy ........................................................................................ 219
Alarm acknowledgments ......................................................................................... 219
Email notifications and AutoSupport messages ...................................................... 220
Managing networking .............................................................................. 221
Viewing IP addresses ............................................................................................... 222
Configuring SNMP monitoring ............................................................................... 223
Enabling the SNMP agent ........................................................................... 223
Management Information Base file ............................................................. 223
Detailed registry .......................................................................................... 224
What link costs are .................................................................................................. 224
Updating link costs ...................................................................................... 225
Changing network transfer encryption .................................................................... 226
Configuring certificates ........................................................................................... 227
Configuring custom server certificates for the Grid Manager ..................... 227
Restoring the default server certificates for the Grid Manager ................... 228
Configuring custom server certificates for storage API endpoints ............. 228
Restoring the default server certificates for storage API endpoints ............ 229
Copying the StorageGRID Webscale system's CA certificate .................... 229
Configuring StorageGRID Webscale certificates for ONTAP clients using FabricPool .... 230
Configuring S3 API endpoint domain names ......................................................... 231
Configuring audit client access ................................................................ 234
Configuring audit clients for CIFS .......................................................................... 234
Configuring audit clients for Workgroup .................................................... 234
Configuring audit clients for Active Directory ........................................... 237
Adding a user or group to a CIFS audit share ............................................. 240
Removing a user or group from a CIFS audit share .................................... 242
Changing a CIFS audit share user or group name ....................................... 243
Verifying CIFS audit integration ................................................................. 243
Configuring the audit client for NFS ....................................................................... 244
Adding an NFS audit client to an audit share .............................................. 245
Verifying NFS audit integration .................................................................. 247
Removing an NFS audit client from the audit share ................................... 247
Changing the IP address of an NFS audit client .......................................... 248
Controlling system access with administration user accounts and groups ................. 249
Configuring identity federation ............................................................................... 249
Guidelines for configuring an OpenLDAP server ....................................... 252
Forcing synchronization with the identity source ................................................... 252
Disabling identity federation ................................................................................... 253
Admin group permissions ....................................................................................... 253
Deactivating features from the Grid Management API ............................... 257
Admin user accounts ............................................................................................... 258
Creating admin groups ............................................................................................ 258
Modifying an admin group ...................................................................................... 259
Deleting an admin group ......................................................................................... 260
Creating an admin user account .............................................................................. 260
Modifying a local user's account ............................................................................. 261
Deleting a local user's account ................................................................................ 262
Changing a local user's password ............................................................................ 262
What data migration is ............................................................................ 264
Confirming capacity of the StorageGRID Webscale system .................................. 264
Determining the ILM policy for migrated data ....................................................... 264
Impact of migration on operations .......................................................................... 265
Scheduling data migration ....................................................................................... 265
Monitoring data migration ...................................................................................... 266
Creating custom notifications for migration alarms ................................................ 266
Integrating Tivoli Storage Manager ....................................................... 268
Archive Node configuration and operation ............................................................. 268
Configuration best practices .................................................................................... 268
Completing the Archive Node setup ....................................................................... 269
Installing a new TSM server ....................................................................... 269
Configuring the TSM server ........................................................................ 269
Copyright information ............................................................................. 274
Trademark information ........................................................................... 275
How to send comments about documentation and receive update notifications ........ 276
Index ........................................................................................................... 277

Working with the StorageGRID Webscale system


This Administrator Guide contains system administration information and procedures required to
manage and monitor the StorageGRID Webscale system on a day-to-day basis. This guide also
includes information on how to configure the StorageGRID Webscale system to meet a deployment's
unique operational requirements.

About this guide


This guide is for technical personnel trained to configure and support the StorageGRID Webscale
system.
This guide assumes a general understanding of the StorageGRID Webscale system. A fairly detailed
level of computer literacy is assumed, including knowledge of Linux/UNIX command shells,
networking, and server hardware setup and configuration.
For details about the architecture and operational features of the StorageGRID Webscale system, see
Grid primer.

The Grid Manager


The Grid Manager is the browser-based user interface that enables you to perform most day-to-day
activities in the StorageGRID Webscale system. Using the Grid Manager, you can configure and
monitor the system, create tenant accounts, configure ILM rules and policies, manage storage and
networks, and more.

Related information
Grid primer

Web browser requirements


You must use a supported web browser.

Web browser Minimum supported version


Google Chrome 54
Microsoft Internet Explorer 11 (Native Mode)
Mozilla Firefox 50

You should set the browser window to a recommended width.

Browser width Pixels


Minimum 1024
Optimum 1280

Signing in to the Grid Manager


You access the Sign In page for the Grid Manager by entering the web address or host name defined
by your system administrator into the address bar of a supported web browser.

Before you begin

• You must have an authorized user name and password.


• You must have the IP address or fully qualified domain name of an Admin Node.

• You must have access to a supported web browser.

• Cookies must be enabled in your web browser.


• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.

• You must have the default security certificate that was created during installation, or a custom
certificate.

About this task


When you sign in to the Grid Manager, you are connecting to an Admin Node. Each StorageGRID
Webscale system includes one primary Admin Node and any number of non-primary Admin Nodes.
You can connect to any Admin Node, and each Admin Node displays a similar view of the
StorageGRID Webscale system. However, the Admin Nodes are not entirely equivalent:

• Alarm acknowledgments made through one Admin Node are not copied to other Admin Nodes.
Therefore, the Grid Topology tree might not look the same for each Admin Node.

• Some maintenance procedures can only be performed from the primary Admin Node.

Steps

1. Launch a supported web browser.

2. In the browser’s address bar, enter the IP address or fully qualified domain name of the Admin
Node.
The StorageGRID Webscale system's Sign In page appears.

3. If you are prompted with a security alert, view and install the certificate using the browser’s
installation wizard.
The alert will not appear the next time you access this URL.

4. Enter your case-sensitive username and password, and click Sign In.
The home page of the Grid Manager appears, which includes the Dashboard.

Related concepts
Configuring certificates on page 227

Related references
Web browser requirements on page 8

Signing out of the Grid Manager


When you have completed working with the Grid Manager, you must sign out to keep the system
secure.

About this task


You must sign out of the Grid Manager to ensure that unauthorized users cannot access the
StorageGRID Webscale system. Closing your browser does not sign you out of the system.

Step

1. Click Sign Out located at the top-right corner of the page.


You are signed out, and the Sign In page appears.

Changing your password


If you are a local user of the Grid Manager, you can change your own password.

Before you begin


You must be signed in to the Grid Manager using a supported browser.

About this task


Federated users cannot change their passwords directly in the Grid Manager; they must change their
passwords in the external identity source, for example, Active Directory or OpenLDAP.

Steps

1. From the Grid Manager header, select your name > Change password.

2. Enter your current password.

3. Type a new password.


Your password must contain between 8 and 32 characters and is case-sensitive.

4. Re-enter the new password.

5. Click Save.

Changing the browser session timeout period


You can specify the timeout period for the StorageGRID Webscale system's browser-based interface.
If a user is inactive for the specified timeout period, the StorageGRID Webscale system times out and
the Sign In page is displayed.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.


• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


The GUI Inactivity Timeout defaults to 900 seconds (15 minutes).
Note: To maintain the security of the system, a separate, non-configurable timer causes each user's
authentication token to expire 16 hours after the user signs in. When a user's authentication
expires, that user is automatically signed out, even if the value for the GUI Inactivity Timeout has
not been reached. To renew the token, the user must sign back in.

Steps

1. Select Configuration > Display Options.

2. For GUI Inactivity Timeout, enter a timeout period of 60 seconds or more.


Set this field to 0 if you do not want to use this functionality. Users are signed out 16 hours after
they sign in, when their authentication tokens expire.

3. Click Apply Changes.


The new setting does not affect currently signed in users. Users must sign in again or refresh their
browsers for the new timeout setting to take effect.

4. Sign out of the StorageGRID Webscale system.

5. Sign in again.

Viewing StorageGRID Webscale license information


You can view the license information for your StorageGRID Webscale system, such as the maximum
storage capacity of your grid, whenever necessary.

Step

1. Select Maintenance > License.


The license information is displayed and includes the StorageGRID Webscale system ID, license
serial number, licensed storage capacity of the grid, and the contents of the license text file. This
information is read-only. For licenses issued before StorageGRID Webscale 10.3, the licensed
storage capacity is not included in the license file, and a “See License Agreement” message is
displayed instead of a value.

Updating StorageGRID Webscale license information


You must update the license information for your StorageGRID Webscale system any time the terms
of your license change. For example, you must update the license information if you purchase
additional storage capacity for your grid.

Before you begin

• You must have a new license file to apply to your StorageGRID Webscale system.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You must have the provisioning passphrase.

Steps

1. Select Maintenance > License.

2. Enter the provisioning passphrase for your StorageGRID Webscale system in the Provisioning
Passphrase text box.

3. Click Browse.

4. In the Open dialog box, locate and select the new license file (.txt), and click Open.
The new license file is validated and displayed.

5. Click Save.

Understanding the Grid Management API


You can perform system management tasks using the Grid Management REST API instead of the
Grid Manager user interface. For example, you might want to use the API to automate operations or
to create multiple entities, such as users, more quickly.
The Grid Management API uses the Swagger open source API platform. Swagger provides an
intuitive user interface that allows developers and non-developers to interact with the API. The
Swagger user interface provides complete details and documentation for each API operation.
To access the Swagger documentation for the Grid Management API:

1. Sign in to the Grid Manager.

2. Select Help > API Docs from the web application header.

API operations
The Grid Management API organizes the available API operations into the following sections:

• accounts – Operations to manage storage tenant accounts, including creating new accounts and
retrieving storage usage for a given account.

• alarms – Operations to list current alarms, and return information about the health of the grid.

• audit – Operations to list and update the audit configuration.

• auth – Operations to perform user session authentication.


The Grid Management API supports the Bearer Token Authentication Scheme. To sign in, you
provide a username and password in the JSON body of the authentication request (that is,
POST /api/v2/authorize). If the user is successfully authenticated, a security token is
returned. This token must be provided in the header of subsequent API requests ("Authorization:
Bearer token"). A sketch of this sign-in flow appears after this list.
See “Protecting against Cross-Site Request Forgery” for information on improving authentication
security.

• compliance – Operations to manage global compliance settings for the StorageGRID Webscale
system.

• config – Operations related to the product release and versions of the Grid Management API. You
can list the product release version and the major versions of the Grid Management API
supported by that release, and you can disable deprecated versions of the API.

• deactivated-features – Operations to view features that might have been deactivated.

• dns-servers – Operations to list and change configured external DNS servers.

• endpoint-domain-names – Operations to list and change endpoint domain names.

• erasure-coding – Operations on Erasure Coding profiles.

• expansion – Operations on expansion (procedure-level).

• expansion-nodes – Operations on expansion (node-level).

• expansion-sites – Operations on expansion (site-level).

• grid-networks – Operations to list and change the Grid Network List.

• groups – Operations to manage local Grid Administrator Groups and to retrieve federated Grid
Administrator Groups from an external LDAP server.

• identity-source – Operations to configure an external identity source and to manually
synchronize federated group and user information.

• ilm – Operations on information lifecycle management (ILM).

• license – Operations to retrieve and update the StorageGRID Webscale license.

• logs – Operations for collecting and downloading log files.

• metrics – Operations on StorageGRID Webscale metrics, including instant metric queries at a
single point in time and range metric queries over a range of time.

Note: The Grid Management API uses the Prometheus systems monitoring tool as the backend
data source. For information about constructing Prometheus queries, see the Prometheus web
site.

• ntp-servers – Operations to list or update external Network Time Protocol (NTP) servers.

• objects – Operations on objects and object metadata.

• recovery – Operations to list the grid nodes available for recovery.

• recovery-package – Operations to download the Recovery Package.

• regions – Operations to view and create regions.

• server-certificates – Operations to view and update Grid Manager server certificates.

• users – Operations to view and manage Grid Administrator users.
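
The following is a minimal sketch of the sign-in flow described for the auth section above. The
endpoint, request body, and header format are taken from this guide; the IP address and credentials
are placeholders.

# Sign in and obtain a bearer token (placeholder address and credentials).
curl -X POST "https://192.0.2.10/api/v2/authorize" \
--header "Content-Type: application/json" \
-d "{\"username\": \"MyUserName\", \"password\": \"MyPassword\"}"

# Supply the returned token in the Authorization header of subsequent
# requests, for example when listing tenant accounts:
curl "https://192.0.2.10/api/v2/grid/accounts" \
--header "Authorization: Bearer token"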

Operation details
When you expand each API operation, you can see its HTTP action, endpoint URL, a list of any
required or optional parameters, an example of the request body (when required), and the possible
responses.

Issuing API requests


Attention: Any API operations you perform using the Swagger user interface are live operations.
Be careful not to create, update, or delete configuration or other data by mistake.

Steps
1. Click the HTTP action to see the request details.

2. Determine if the request requires additional parameters, such as a group or user ID. Then, obtain
these values. You might need to issue a different API request first to get the information you need.

3. Determine if you need to modify the example request body. If so, you can click Model to learn
the requirements for each field.

4. Click Try it out.

5. Provide any required parameters, or modify the request body as required.

6. Click Execute.

7. Review the response code to determine if the request was successful.

Top-level resources
The Grid Management API provides the following top-level resources:

• /grid: Access is restricted to Grid Administrator users.

• /org: Access is restricted to users who belong to a local or federated LDAP group for a tenant
account. For details, see the information about using tenant accounts.

• /private: Access is restricted to Grid Administrator users. These APIs are intended for internal
use only and are not publicly documented. These APIs are also subject to change without notice.

Related concepts
Protecting against Cross-Site Request Forgery (CSRF) on page 16

Related information
Using tenant accounts
Prometheus: Query basics

Grid Management API versioning


The Grid Management API uses versioning to support non-disruptive upgrades.
For example, this Request URL specifies version 2 of the API.
https://hostname_or_ip_address/api/v2/authorize

Changes in the Grid Management API that are backward incompatible bump the major version of the
API. For example, an incompatible API change bumps the version from 1.1 to 2.0. Changes in the
Grid Management API that are backward compatible bump the minor version instead. Backward-
compatible changes include the addition of new endpoints or new properties. For example, a
compatible API change bumps the version from 1.0 to 1.1.
When you install StorageGRID Webscale software for the first time, only the most recent version of
the Grid Management API is enabled. However, when you upgrade to a new major version of
StorageGRID Webscale, you continue to have access to the older API version for at least one major
StorageGRID Webscale release.

Note: You can use the Grid Management API to configure the supported versions. See the “config”
section of the Swagger API documentation for more information. You should deactivate support
for the older version after updating all Grid Management API clients to use the newer version.

Outdated requests are marked as deprecated in the following ways:

• The response header is "Deprecated: true"

• The JSON response body includes "deprecated": true

• A deprecated warning is added to nms.log. For example:

Received call to deprecated v1 API at POST "/api/v1/authorize"
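
As an illustration, you can observe these deprecation markers from the command line. This is a
sketch: the address and credentials are placeholders, and it assumes that version 1 of the API is
still enabled on the system. The -i flag makes curl print the response headers so that the
Deprecated header is visible.

# Call the deprecated v1 endpoint and show the response headers; the
# response includes "Deprecated: true" as a header, and the JSON body
# includes "deprecated": true.
curl -i -X POST "https://192.0.2.10/api/v1/authorize" \
--header "Content-Type: application/json" \
-d "{\"username\": \"MyUserName\", \"password\": \"MyPassword\"}"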

Determining which API versions are supported in the current release


Use the following API request to return a list of the supported API major versions:

GET https://{{IP-Address}}/api/versions

{
  "responseTime": "2016-10-03T14:49:16.587Z",
  "status": "success",
  "apiVersion": "2.0",
  "data": [
    1,
    2
  ]
}

Specifying an API version for a request


You can specify the API version using a path parameter (/api/v1) or a header (Api-Version: 2).
If you provide both values, the header value overrides the path value.
curl https://[IP-Address]/api/v2/grid/accounts

curl -H "Api-Version: 2" https://[IP-Address]/api/grid/accounts

Protecting against Cross-Site Request Forgery (CSRF)


You can help protect against Cross-Site Request Forgery (CSRF) attacks against StorageGRID
Webscale by using CSRF tokens to enhance authentication that uses cookies. The Grid Manager and
Tenant Manager automatically enable this security feature; other API clients can choose whether to
enable it when they sign in.
An attacker that can trigger a request to a different site (such as with an HTTP form POST) can cause
certain requests to be made using the signed-in user’s cookies.
StorageGRID Webscale helps protect against CSRF attacks by using CSRF tokens. When enabled,
the contents of a specific cookie must match the contents of either a specific header or a specific
POST body parameter.
To enable the feature, set the csrfToken parameter to true during authentication. The default is
false.

curl -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d "{
  \"username\": \"MyUserName\",
  \"password\": \"MyPassword\",
  \"cookie\": true,
  \"csrfToken\": true
}" "https://example.com/api/v2/authorize"

When true, a GridCsrfToken cookie is set with a random value for sign-ins to the Grid Manager,
and the AccountCsrfToken cookie is set with a random value for sign-ins to the Tenant Manager.
If the cookie is present, all requests that can modify the state of the system (POST, PUT, PATCH,
DELETE) must include one of the following:
• The X-Csrf-Token header, with the value of the header set to the value of the CSRF token
cookie.

• For endpoints that accept a form-encoded body: A csrfToken form-encoded request body
parameter.
See the online API documentation for additional examples and details.
Note: Requests that have a CSRF token cookie set will also enforce the "Content-Type:
application/json" header for any request that expects a JSON request body as an additional
protection against CSRF attacks.
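
The following is a minimal sketch of a state-changing request that satisfies these rules. The
cookie and header names come from this guide; the address, credentials, endpoint, and token value
are placeholders.

# Sign in with csrfToken set to true and store the cookies, including
# the GridCsrfToken cookie (placeholder values).
curl -c cookies.txt -X POST "https://192.0.2.10/api/v2/authorize" \
--header "Content-Type: application/json" \
-d "{\"username\": \"MyUserName\", \"password\": \"MyPassword\",
\"cookie\": true, \"csrfToken\": true}"

# Read the GridCsrfToken value from cookies.txt, then include it in the
# X-Csrf-Token header of any POST, PUT, PATCH, or DELETE request
# (placeholder endpoint shown):
curl -b cookies.txt -X DELETE "https://192.0.2.10/api/v2/grid/example-resource" \
--header "X-Csrf-Token: value-of-GridCsrfToken-cookie"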

Monitoring the StorageGRID Webscale system


The Grid Manager enables you to monitor the daily activities of the StorageGRID Webscale system,
including its health. Alarms and notifications help you evaluate and quickly resolve trouble spots that
sometimes occur during normal operation.
You can use these areas of the Grid Manager to monitor the StorageGRID Webscale system to the
level of detail you need:

• Dashboard

• Nodes page

• Support > Grid Topology pages

Related concepts
Viewing the Dashboard on page 18
Viewing the Nodes page on page 20

Viewing the Dashboard


When you first sign in to the Grid Manager, you can use the Dashboard to monitor system activities
at a glance. The Dashboard includes information about health and alerts, usage metrics, and
operational trends and charts.

• Health: Provides an indication of the system's health by showing the number of disconnected grid
nodes and the number of current alarms. Shows license status if there are license-related alerts.

• Information Lifecycle Management (ILM): Displays current ILM operations and ILM queues
for your StorageGRID Webscale system.

• Protocol Operations: Displays the number of protocol specific operations performed by your
StorageGRID Webscale system. You can use this information to monitor your system's workloads
and efficiencies.

• Available Storage: Shows the available and used storage capacity on the grid, not including
archival media. With this information, you can compare the used storage with the available
storage and, in a multi-site grid, determine which site is consuming more storage.

To view more detailed information about each panel in the Grid Manager, click the help icon in that
panel. Detailed descriptions are also provided below.

Health panel
Description: Provides an indication of the system's health by showing the number of current alarms
and the number of disconnected grid nodes.
View additional details: To see current alarms and alarm history, view custom alarms, and identify
whether alarm notifications are sent, click View alarms details. To see details of all grid nodes,
click View grid node details.
Learn more: See the information about managing alarms. See Grid primer and the information about
viewing the Nodes page.

Available Storage panel
Description: Displays the available and used storage capacity in the entire grid, not including
archival media. The Overall chart presents grid-wide totals. If this is a multi-site grid,
additional charts appear for each data center site. With this information, you can compare the used
storage with the available storage and, in multi-site grids, determine which site is consuming
more.
View additional details: To view the capacity, place your cursor over the chart's available and
used capacity sections. To change the date range or select other data, click the chart icon in the
upper right of the panel. The default date range in charts on this page is one month. To see
available storage details, click Grid, and then view the details for the entire grid, an entire
site, or a single Storage Node.
Learn more: For details about reviewing Storage Node information, see the section about managing
disk storage.

Information Lifecycle Management (ILM) panel
Description: Displays current ILM operations and ILM queues for your system. With this information,
you can monitor your system's workload.
View additional details: To see the existing ILM rules, click ILM > Rules. To see the existing ILM
policies, click ILM > Policies.
Learn more: For details about managing ILM rules and policies, see the section about managing
objects through information lifecycle management.

Protocol Operations panel
Description: Displays the number of protocol-specific operations (S3 or Swift) performed by your
system. With this information, you can monitor your system's workloads and efficiencies. Protocol
rates are averaged over the last two minutes.
View additional details: To change the date range or select other data, click the chart icon in the
upper right of the panel. To manage tenant accounts for S3 or Swift, click Tenants.
Learn more: See the information about using tenant accounts.

Related concepts
Managing alarms on page 63
Viewing the Nodes page on page 20
Managing disk storage on page 178
Managing objects through information lifecycle management on page 106
Creating and managing tenant accounts on page 92

Related information
Grid primer
Using tenant accounts

Viewing the Nodes page


When you need more detailed information about your StorageGRID Webscale system than the
Dashboard provides, you can use the Nodes page to learn more about each node, such as node type,
IP addresses, CPU utilization, network traffic, alarms, and more.
When you click Nodes on the top menu bar, the Nodes page appears.

When you first select Nodes, the Nodes home page provides an overview of your StorageGRID
Webscale grid deployment. You can select the following tabs to view information for the entire grid:

• Network to view a graph showing network traffic

• Storage to view a graph showing storage used

• Objects to view a graph showing ingest and retrieve rates

• ILM to view ILM queue rates

To view information for a particular node, click the appropriate link on the left. The informational
tabs for each node vary by node type.
The graphs on the Nodes page use the Grafana visualization tool and the Prometheus systems
monitoring tool. Grafana displays time-series data in graph and chart formats, while Prometheus
serves as the backend data source.

Viewing common tabs for all node types


On the Nodes page, you can view the Overview, Hardware, Network, Storage, and Events tabs for all
node types.
The StorageGRID Webscale system contains Admin Nodes, Storage Nodes, and, optionally, Archive
Nodes and API Gateway Nodes.

Example Overview tab


On the Overview tab, you can view general information about each node, such as the name, software
version installed, and any unacknowledged alarms for the node. If any alarms are listed, you can click
the link in the Service column to get details about each alarm.

Example Hardware tab


The Hardware tab displays CPU utilization and memory information for each node.

To display a different time interval, select one of the controls above the chart or graph. You can select
to display the information available for intervals of 1 hour, 1 day, 1 week, 1 month, or 1 year. Or you
can set a custom interval, which allows you to specify date and time ranges.

Example Network tab


The Network tab displays a graph showing network traffic for the node over time.
The Network Traffic graph indicates the total amount of network traffic across all of the network
interfaces on the node. Communication to the node is “Received” and communication from the node
is “Sent” or “Transmit.”

To display a different time interval, select one of the controls above the chart or graph. You can select
to display the information available for intervals of 1 hour, 1 day, 1 week, 1 month, or 1 year. Or you
can set a custom interval, which allows you to specify date and time ranges.

Example Storage tab


Although the Nodes page for each node type contains a Storage tab, the information on that tab
differs depending on the node type:

• Admin Nodes, Archive Nodes, and Gateway Nodes each contain a list of disk devices and
volumes on the node.

• Storage Nodes contain graphs showing data storage and metadata storage used over time, as well
as a list of disk devices and volumes on the node.

Example Events tab


The Events tab displays system events that help with troubleshooting. The Last Event provides an
area of focus when an error occurs. After resolving issues, click Reset event counts to return the
counts to zero. For details, see the information about troubleshooting.
Note: To reset event counts, you must be a user who belongs to a group that has the Grid Topology
Page Configuration permission enabled.

Related concepts
Monitoring servers and grid nodes on page 40
Monitoring storage capacity on page 30

Related information
Grid primer
Troubleshooting StorageGRID Webscale

Viewing the Objects and ILM tabs for Storage Nodes


In addition to the Overview, Hardware, Network, Storage, and Events tabs, Storage Nodes also have
Objects and ILM tabs.
In the Objects tab, you can view information about object counts, queries, and verification.

In the ILM tab, you can view information about ILM operations.

Related concepts
Managing objects through information lifecycle management on page 106

Related information
Grid primer

Viewing the Grid Topology tree


The Grid Topology tree provides access to detailed information about StorageGRID Webscale system
elements, including sites, grid nodes, services, and components. In most cases, you only need to
access the Grid Topology tree when working with technical support.
To access the Grid Topology tree, select Support > Grid Topology.

To expand or collapse the Grid Topology tree, click the plus (+) or minus (-) control at the site,
node, or service level. To expand or collapse all items in the entire site or in each node, hold
down the <Ctrl> key and click.

Understanding node icons


The Grid Manager uses the following icons to show the health of each grid node. The icons indicate
whether nodes are connected to the grid and show if any alarms are active on the node.

In the following list, each icon is identified by its color, followed by the node state and alarm
severity that it represents.

• Green (Connected, Normal): The node is functioning normally. It is connected to the grid and
there are no alarms.

• Yellow (Connected, Notice): The node is connected to the grid, but an unusual condition exists
that does not affect normal operations.

• Light Orange (Connected, Minor): The node is connected to the grid, but an abnormal condition
exists that could affect operation in the future. You should investigate to prevent escalation.

• Dark Orange (Connected, Major): The node is connected to the grid, but an abnormal condition
exists that currently affects operation. This requires prompt attention to prevent escalation.

• Red (Connected, Critical): The node is connected to the grid, but an abnormal condition exists
that has stopped normal operations. You should address the issue immediately.

• Gray (Disconnected, Administratively Down): The node is not connected to the grid for an expected
reason. For example, the node, or services on the node, has been gracefully shut down, the node is
rebooting, or the software is being upgraded.

• Blue (Disconnected, Unknown): The node is not connected to the grid, and the situation requires
immediate attention. For example, the network connection between nodes has been lost or the power
is down. This is the most severe condition.

Information that you should monitor regularly


Monitor the StorageGRID Webscale system’s key attributes regularly. Become familiar with system
operations and spot trends before they turn into problems.
The following tasks should be performed on a regular basis:

• Monitor system status. Note what has changed from the previous day. Frequency: daily.

• Monitor the rate at which Storage Node capacity is being consumed. Frequency: weekly.

• Check the capacity of the external archival storage system. Frequency: weekly.

The important attributes to monitor relate to:

• Object storage capacity

• Metadata storage capacity

• Object management related activities



When monitoring capacity, look at the absolute value and at the rate at which capacity is being
consumed over time. Consumption rates can help you estimate when additional capacity might be
required.

Key attributes to monitor


To ensure that the StorageGRID Webscale system is able to store data as expected, you should
monitor the amounts of available and used storage on a regular basis. The Dashboard provides a
snapshot of used and free storage. The Nodes page contains graphs that enable you to monitor
capacities over time.
Monitor the storage capacities shown in the following table.

Category: Storage capacity
Components: Dashboard Available Storage panel; Dashboard Chart (Reports) icon; Nodes > Storage
Node.
Attributes:

• Dashboard Used and Free storage (Overall and for each Data Center).

• Dashboard Percentage Storage Capacity Used (PSCU) vs. Time: the percentage of installed storage
capacity that has been consumed for the entire StorageGRID Webscale system or for each data center.

• Nodes page Storage Used - Object Data graph: hover over the graph to see Used (%), Used, and
Total storage. The Total value is the Total Usable Space (STAS) attribute.

Category: Metadata storage capacity
Component: Nodes > Storage Node.
Attributes: Hover over the Storage Used - Object Metadata graph to see the percentage of allowed
space consumed by object metadata. This value is the Metadata Used Space (Percent) (CDLP)
attribute.

Note: The attributes for storage capacity and metadata storage capacity do not include the capacity
used for archived content.

Monitoring storage capacity


You must monitor the total usable space available on Storage Nodes to ensure that the StorageGRID
Webscale system does not run out of storage space.
Storage capacity information is available at the grid, data center, and storage node levels of the
StorageGRID Webscale system.

Steps
1. Monitoring storage capacity for the entire grid on page 30
2. Monitoring storage capacity per Storage Node on page 33
3. Monitoring object metadata capacity per Storage Node on page 35

Monitoring storage capacity for the entire grid


The Dashboard shows the available storage capacity for the entire grid and for individual data
centers. You can use the Available Storage charts to quickly assess storage use for an entire
StorageGRID Webscale system. In a multi-site grid, you can compare storage usage between sites
(data centers).

Before you begin


You must be signed in to the Grid Manager using a supported browser.

Steps

1. Select Dashboard.

2. In the Available Storage panel, note the overall summary of free and used storage capacity. For
multi-site grids, review the chart for each data center.
Note: The summary does not include archival media.

3. Place your cursor over the chart's Free or Used capacity sections to see exactly how much space is
free or used.

4. Click Chart (Reports) to view a graph showing capacity usage over time for Overall storage or
for individual data centers.
A graph showing Percentage Storage Capacity Used (%) vs. Time appears.

Example
In the following example, usable storage space is being consumed at a rate of approximately 4%
per month, which means that there are eight months left before this data center runs out of space.
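To make the arithmetic explicit: if, say, roughly 68% of usable capacity were already consumed and
consumption continued at 4% per month, the remaining time would be approximately
(100 - 68) / 4 = 8 months.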

5. To monitor additional storage attributes:

• Go to Nodes > storage node > Storage and view the graphs and tables.

• Technical support could ask you to use this path: Select Support > Grid Topology. Then
select StorageGRID Webscale Deployment > Overview > Main.

6. To maintain normal system operations, add Storage Nodes, add storage volumes, or archive object
data before usable space is consumed.

Related information
Expanding a StorageGRID Webscale grid

Monitoring storage capacity per Storage Node


You must monitor the value of the Total Usable Space (STAS) attribute for each Storage Node to
ensure that the node has enough space for new object data.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

About this task


STAS is calculated by adding together the available space on all object stores within the Storage
Node.

Note: If the value of STAS for a Storage Node falls below the value of the Storage Volume Hard
Read-Only Watermark, the Storage Node becomes read-only and can no longer store new objects.
You should add storage volumes or Storage Nodes before this happens.

Steps

1. Select Nodes > Storage Node > Storage.


The graphs and tables for the node appear.

2. Hover your cursor over the Storage Used - Object Data graph.
A pop-up displays Used (%), Used, and Total capacities.

3. Review the values in the tables below the graphs. To view graphs of the values, click Chart in
the Available columns in the Volumes and Object Stores tables.

4. Monitor these values over time to estimate the rate at which usable storage space is being
consumed.
Usable space is the actual amount of storage space available to store objects.

5. To maintain normal system operations, add Storage Nodes, add storage volumes, or archive object
data before usable space is consumed.

Related concepts
What watermarks are on page 183

Related information
Expanding a StorageGRID Webscale grid

Monitoring object metadata capacity per Storage Node


StorageGRID Webscale reserves space on storage volume 0 of each Storage Node for the Cassandra
database. This database stores three copies of all object metadata as well as certain configuration
data. To ensure that adequate space remains available for essential Cassandra operations, you must
monitor the metadata attributes for each Storage Node and add new Storage Nodes as required.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

About this task


The total space reserved on each Storage Node for metadata is known as Metadata Reserved Space,
or CAWM. CAWM is subdivided into the space available for object metadata (the Metadata Allowed
Space, or CEMS) and the space required for essential Cassandra operations, such as compaction and
repair.
If object metadata uses more than 100% of the Metadata Allowed Space, Cassandra operations
cannot run efficiently and errors will occur. For this reason, you must closely monitor how much
object metadata space has been used.
The Metadata Used Space (Percent) attribute, or CDLP, measures how full the Metadata Allowed
Space is. When the Metadata Used Space (Percent) reaches certain thresholds, the CDLP alarm is
triggered, as follows:

• CDLP = 70% (minor alarm): You should add new Storage Nodes as soon as possible.

• CDLP = 90% (major alarm): You should add new Storage Nodes immediately.

• CDLP = 100% (critical alarm): You must add new Storage Nodes immediately and you must stop
the ingest of new objects.
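
The thresholds above can be summarized as a simple mapping. This sketch is illustrative only; the
thresholds come from this guide, but the function is not product code.

# Sketch: map a Metadata Used Space (Percent) (CDLP) value to the alarm
# severity and recommended action described above. Illustrative only.
def cdlp_action(cdlp_percent):
    if cdlp_percent >= 100:
        return ("critical", "Add Storage Nodes now and stop ingest of new objects")
    if cdlp_percent >= 90:
        return ("major", "Add Storage Nodes immediately")
    if cdlp_percent >= 70:
        return ("minor", "Add Storage Nodes as soon as possible")
    return ("normal", "No action required")

print(cdlp_action(72))  # ('minor', 'Add Storage Nodes as soon as possible')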

Note that when the Metadata Used Space (Percent) attribute reaches 70% (that is, when the Metadata
Allowed Space becomes 70% full), the CDLP alarm is triggered as a minor alarm. You should add
new Storage Nodes in an expansion procedure as soon as possible. When you add the new nodes, the
system automatically rebalances object metadata across all Storage Nodes, and the alarms clear.
Attention: When the Metadata Used Space (Percent) attribute reaches 90%, the CDLP alarm is
triggered as a major alarm, and a warning appears on the Dashboard. If this warning appears, you
must add new Storage Nodes immediately. You must never allow object metadata to use more than
100% of the allowed space.

Attention: If the first storage volume is smaller than the Metadata Reserved Space, the CDLP
calculation might be inaccurate.

Steps

1. Select Nodes > Storage Node > Storage.

2. View the Storage Used - Object Metadata graph.

3. If the Used (%) value (the Metadata Used Space (Percent), or CDLP, attribute) is 70% or higher,
expand the StorageGRID Webscale system by adding Storage Nodes.

4. To view alarm details, select the Alarms tab from the Data Store Overview page, or select
Alarms from the top menu bar.

Example

CDLP (Metadata Used Space (Percent)) alarms of all severities (minor, major, and critical) are
displayed on this page. In the example, all the alarms listed are critical.

5. If a major or critical CDLP alarm appears on the Current Alarms page, add Storage Nodes
immediately.
When you add the new nodes, the system automatically rebalances object metadata across all
Storage Nodes, and the alarms clear.

Related concepts
What watermarks are on page 183

Related information
Expanding a StorageGRID Webscale grid

Monitoring the recovery point objective through ILM


You can track ILM evaluation attributes to determine the recovery point objective (RPO) of the
StorageGRID Webscale system as defined by the ILM policy. The RPO defines the maximum
tolerable period in which data might be lost because of a site failure, a Storage Node failure, or both.

Before you begin

You must be signed in to the Grid Manager using a supported browser.

About this task


The StorageGRID Webscale system manages objects by applying the defined ILM policy. The ILM
policy and associated ILM rules determine how many copies are made, how those copies are made,
the appropriate placement, and the length of time each copy is retained.
ILM rules are processed asynchronously after specific operations such as ingest or delete. ILM
processing classifies objects into four categories and operates simultaneously on all four categories to
ensure fairness and prioritization:

• Repair of replicated copies with only one copy remaining.

• Awaiting - Background: excludes repair of replicated copies with only one copy remaining.
• Awaiting - Client: includes new ingests and metadata updates; excludes deletions.

• Deletions.

Ingest or other activity can exceed the rate at which the system can process ILM. When this scenario
occurs, the system will begin to queue objects whose ILM can no longer be fulfilled in near real time.
In the example shown, the chart of the Awaiting - Client attribute indicates that the number of
objects awaiting ILM evaluation temporarily increases in an unsustainable manner, then eventually
decreases. Such a trend indicates that ILM was temporarily not fulfilled in near real time.

Steps

1. Select Support > Grid Topology.

2. Select deployment > Overview > Main.



3. In the ILM Activity section, review the key attributes for ILM evaluations:
Awaiting - All
The total number of objects awaiting ILM evaluation.
Awaiting - Client
The total number of objects awaiting ILM evaluation from client operations (for example,
ingest).
Scan Rate
The rate at which objects in the grid are scanned and queued for ILM.
Scan Period - Estimated
The estimated time to complete a full ILM scan of all objects.
Note: A full scan does not guarantee that ILM has been applied to all objects.

Awaiting - Evaluation Rate


The current rate at which objects are evaluated against the ILM policy in the grid.
Repairs Attempted
The total number of object repair operations for replicated data that have been attempted.
This count increments each time an LDR tries to repair a high-risk object.
Note: The same object repair might increment again if replication failed after the repair.
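
As a rough illustration of how two of these attributes relate, a first-order estimate of the scan
period is the object count divided by the scan rate. The values below are invented for the example;
the Scan Period - Estimated attribute itself is computed by the system.

# Sketch: first-order estimate of the full ILM scan period. Values are
# illustrative; compare with the Scan Period - Estimated attribute.
total_objects = 500_000_000   # objects in the grid (example)
scan_rate = 4_000             # objects scanned per second (example)

scan_period_days = total_objects / scan_rate / 86_400
print(f"Estimated full scan period: {scan_period_days:.1f} days")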

Related information
Grid primer

Monitoring object verification operations


The StorageGRID Webscale system can verify the integrity of object data on Storage Nodes using
background and foreground verification.

Before you begin


You must be signed in to the Grid Manager using a supported browser.

About this task


There are two verification processes: background verification and foreground verification. They work
together to ensure data integrity. Background verification runs automatically, continuously checking
the correctness of object data. Foreground verification can be triggered by a user to more quickly verify
the existence (although not the correctness) of object data.
Background verification automatically and continuously checks all Storage Nodes to determine if
there are corrupt copies of replicated and erasure coded object data. If problems are found, the
StorageGRID Webscale system automatically attempts to replace the corrupt object data from copies
stored elsewhere in the system. Background verification does not run on Archive Nodes.
The foreground verification process allows you to verify the existence of replicated and erasure
coded object data on a specific Storage Node, checking that each object that is expected to be present
is there. You can run foreground verification on all or some of a Storage Node's object stores to help
determine if there are integrity problems with a storage device. Large numbers of missing objects
might indicate that there is an issue with storage.
The LDR > Verification and LDR > Erasure Coding pages enable you to review results from
background and foreground verifications, such as corrupt or missing objects detected. You should
investigate any instances of corrupt or missing object data immediately, to determine the root cause.

Steps

1. Select Support > Grid Topology.

2. To check the verification results:

• To check replicated object data verification, select Storage Node > LDR > Verification >
Overview > Main.

• To check erasure coded fragment verification, select Storage Node > LDR > Erasure
Coding > Overview > Main.

Monitoring archival capacity


You cannot directly monitor an external archival storage system's capacity through the StorageGRID
Webscale system. However, you can monitor whether the Archive Node can still send object data to
the archival destination; an inability to do so might indicate that an expansion of archival media is
required.

Before you begin


You must be signed in to the Grid Manager using a supported browser.

About this task


You can monitor the Store component to check whether the Archive Node can still send object data
to the targeted archival storage system. In addition, the triggering of the Store Failures (ARVF)
alarm might indicate that the targeted archival storage system has reached capacity and can no
longer accept object data.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Overview> Main.

3. Check the Store State and Store Status attributes to confirm that the Store component is Online
with No Errors.

An offline Store component, or one with errors, might indicate that the targeted archival storage
system can no longer accept object data because it has reached capacity.

Monitoring servers and grid nodes


Various services hosted by the StorageGRID Webscale system's grid nodes provide you with
mechanisms to monitor the system.

What the SSM service is


The Server Status Monitor (SSM) service is present on all grid nodes and monitors the grid node’s
status, services, and resources.
The SSM service monitors the condition of the server and related hardware, polls the server and
hardware drivers for information, and displays the processed data. Information monitored includes:

• CPU information (type, mode, speed)

• Memory information (available, used)

• Performance (system load, load average, uptime, restarts)

• Volumes (status, available space)

• Network (addresses, interfaces, resources)

• Services
• NTP synchronization

Services
The Services component tracks the services and support modules running on a grid node. It reports
the service’s current version, status, the number of threads (CPU tasks) running, the current CPU
load, and the amount of RAM being used.
The services are listed as well as the support modules (such as time synchronization). Also listed is
the operating system and the StorageGRID Webscale software version installed on the grid node.
The status of a service is either Running or Not Running. A service is listed with a status of Not
Running when its state is Administratively Down.

Related concepts
Alarm notification types on page 55

Resetting event counters


On the Nodes page, the Events tab displays system events that help with troubleshooting. The Last
Event provides an area of focus when an error occurs. After resolving issues, click Reset event
counts to return the counts to zero.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Nodes > Grid Node > Events.

2. Make sure that any event with a count greater than 0 has been resolved.

3. Click Reset event counts.



4. Select the Reset check boxes for the specific counters you want to reset.

5. Click Apply Changes.

Resources
The SSM service uses the standard set of resources attributes that report on the service health and all
computational, disk device, and network resources. In addition, the Resources attributes report on
memory, storage hardware, network resources, network interfaces, network addresses, and receive
and transmit information. The Resources component of the SSM service provides the ability to reset
network error counters.
If the Storage Node is a StorageGRID Webscale appliance, appliance information appears in the
Resources section. For details, see the installation and maintenance instructions for your appliance.

Related information
SG6000 appliance installation and maintenance

SG5700 appliance installation and maintenance


SG5600 appliance installation and maintenance

Timing
The SSM service uses the set of timing attributes that report on the state of the grid node’s time and
the time recorded by neighboring grid nodes. In addition, the SSM Timing attributes report on NTP
Synchronization.

Monitoring StorageGRID Webscale appliance Storage Nodes


You can view status information for every installed and operational StorageGRID Webscale
appliance Storage Node. Status includes appliance hardware information, connectivity issues, alarm
notifications, services, and disk device information.

Viewing information about an appliance Storage Node


You can view information about an appliance Storage Node at any time. For example, you might
want to review appliance-specific information to ensure correct configuration and status.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Each StorageGRID Webscale appliance is represented as one Storage Node.
The Nodes page lists information about service health and all computational, disk device, and
network resources for each appliance Storage Node. You can also see memory, storage hardware,
network resources, network interfaces, network addresses, and receipt and transmittal information.
Note: To view this information on the Nodes page, you must be using StorageGRID Webscale
11.1.1. If you are using StorageGRID Webscale 11.1.0, select Support > Grid Topology >
appliance Storage Node > SSM > Resources to view similar information.

Steps

1. Select Nodes.

2. Select an appliance Storage Node.

3. Select Overview.
The Node Information table on the Overview tab displays the name of the node, the node type,
the software version installed, and the IP addresses associated with the node. The Interface
column contains the name of the interface. An eth interface is a Grid Network, Admin Network,
or Client Network, hic is a bonded port, and mtc (not shown in the following screen shot) is for
management IP addresses on the appliance.

4. Select Hardware to see more information about the appliance.

a. View the CPU Utilization and Memory graphs to determine the percentages of CPU and
memory usage over time. To display a different time interval, select one of the controls above
the chart or graph. You can select to display the information available for intervals of 1 hour, 1
day, 1 week, 1 month, or 1 year. Or you can set a custom interval, which allows you to specify
date and time ranges.

b. Scroll down to view the table of components for the appliance. This table contains information
such as the model name of the appliance; controller names, serial numbers, and IP addresses;
and the status of each component.
Note: Some fields, such as Compute Controller BMC IP and Compute Hardware, appear
only for appliances with that feature.

Appliance Model
The model number for this StorageGRID Webscale appliance shown in SANtricity software.

Storage Controller Name
The name for this StorageGRID Webscale appliance shown in SANtricity software.

Storage Controller A Management IP
IP address for management port 1 on storage controller A. You use this IP to access SANtricity
software to troubleshoot storage issues.

Storage Controller B Management IP
IP address for management port 1 on storage controller B. You use this IP to access SANtricity
software to troubleshoot storage issues.

Storage Controller WWID
The worldwide identifier of the storage controller shown in SANtricity software.

Storage Appliance Chassis Serial Number
The chassis serial number of the appliance.

Storage Hardware
The overall status of the storage controller hardware. If the Storage Node is a StorageGRID
Webscale appliance and it needs attention, then both the StorageGRID Webscale and SANtricity
systems indicate that the storage hardware needs attention. If the status is “needs attention,” first
check the storage controller using SANtricity software. Then, ensure that no other alarms exist
that apply to the compute controller.

Storage Controller Failed Drive Count
The number of drives that are not optimal.

Storage Controller A
The status of storage controller A.

Storage Controller B
The status of storage controller B. Some appliance models do not have a storage controller B.

Storage Controller Power Supply A
The status of power supply A for the storage controller.

Storage Controller Power Supply B
The status of power supply B for the storage controller.

Storage Multipath Connectivity
The multipath connectivity state. For details about resolving performance or fault tolerance
issues, refer to the E-Series documents.

Overall Power Supply
The status of all power supplies for the appliance.

Compute Controller BMC IP
The IP address of the baseboard management controller (BMC) port in the compute controller.
You use this IP to connect to the BMC interface to monitor and diagnose the appliance hardware.
This field is not displayed for appliance models that do not contain a BMC.

Compute Controller Serial Number
The serial number of the compute controller.

Compute Hardware
The status of the compute controller hardware. This field is not displayed for appliance models
that do not have separate compute hardware and storage hardware.

Compute Controller CPU Temperature
The temperature status of the compute controller's CPU.

Compute Controller Chassis Temperature
The temperature status of the compute controller.

Compute Controller Power Supply A
The status of power supply A for the compute controller.

Compute Controller Power Supply B
The status of power supply B for the compute controller.

c. Confirm that all statuses are “Nominal.”


The statuses in this section correspond to the following alarm codes.

Field Alarm code


Storage Controller Failed Drive Count BADD
Compute Controller Chassis Temperature BRDT
Compute Controller Hardware CCNA
Compute Controller Power Supply A CPSA
Compute Controller Power Supply B CPSB
Compute Controller CPU Temperature CPUT
Overall Power Supply OPST
Storage Controller Power Supply A PSAS
Storage Controller Power Supply B PSBS
Storage Controller A SCSA
Storage Controller B SCSB
Storage Hardware SOSS

For details about alarms in StorageGRID Webscale, see the information about
troubleshooting.

5. Select Network to view information for each network.


The Network Traffic graph provides a summary of overall network traffic.

a. Review the Network Interfaces section.



Use the following table with the values in the Speed column in the Network Interfaces table to
determine whether the 10/25-GbE network ports on the appliance were configured to use
active/backup mode or LACP mode.
Note: The values shown in the table assume all four links are used.

Link mode    Bond mode        Individual HIC link speed    Expected Grid/Client Network
                              (hic1, hic2, hic3, hic4)     speed (eth0, eth2)
Aggregate    LACP             25                           100
Fixed        LACP             25                           50
Fixed        Active/Backup    25                           25
Aggregate    LACP             10                           40
Fixed        LACP             10                           20
Fixed        Active/Backup    10                           10

See the installation and maintenance instructions for your appliance for more information
about configuring the 10/25-GbE ports. A lookup sketch of this table appears after this
procedure.

b. Review the Network Communication section.


The Receive and Transmit tables show how many bytes and packets have been received and
sent across each network as well as other receive and transmission metrics.

6. Select Storage to view graphs that show the percentages of storage used over time for object data
and object metadata, as well as information about disk devices, volumes, and object stores.

a. Scroll down to view the amounts of available storage for each volume and object store.
The Worldwide Name for each disk matches the volume world-wide identifier (WWID) that
appears when you view standard volume properties in SANtricity software (the management
software connected to the appliance's storage controller).
To help you interpret disk read and write statistics related to volume mount points, the first
portion of the name shown in the Name column of the Disk Devices table (that is, sdc, sdd,
sde, and so on) matches the value shown in the Device column of the Volumes table.
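
As promised in step 5, the 10/25-GbE link-mode table can be treated as a simple lookup. The sketch
below encodes its rows (all entries assume four HIC links in use, as the table does) so that an
observed Grid/Client Network speed can be compared with the expected one.

# Sketch: expected Grid/Client Network speed (eth0, eth2) in GbE, keyed by
# (link mode, bond mode, individual HIC link speed). Rows come from the
# table in step 5 and assume all four links are used.
expected_speed_gbe = {
    ("Aggregate", "LACP", 25): 100,
    ("Fixed", "LACP", 25): 50,
    ("Fixed", "Active/Backup", 25): 25,
    ("Aggregate", "LACP", 10): 40,
    ("Fixed", "LACP", 10): 20,
    ("Fixed", "Active/Backup", 10): 10,
}
print(expected_speed_gbe[("Aggregate", "LACP", 25)])  # 100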

Related information
SG6000 appliance installation and maintenance
SG5700 appliance installation and maintenance
SG5600 appliance installation and maintenance
Troubleshooting StorageGRID Webscale
NetApp Documentation: SANtricity Storage Manager

Resolving appliance Storage Node hardware-based events


You can view events related to a StorageGRID Webscale appliance Storage Node at any time. You
might want to monitor events related to the storage and compute controllers, the multipath state of the
appliance connections, and the enclosure to ensure operational status. Some events trigger alarms.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


The StorageGRID Webscale appliance reports on events that impact the service health and all
computational, disk devices, hardware, and network resources. For the appliance, you can gauge the
hardware status of both the storage and the compute controllers by viewing storage hardware events.

For more information, see the troubleshooting information and the installation and maintenance
instructions for your appliance.

Steps

1. Select Support > Grid Topology.

2. Select an appliance Storage Node in the Grid Topology tree.

3. Select SSM > Events.

4. In the System Events section, note the count for Storage Hardware Events.

5. If a hardware event is noted here, identify the cause by completing the following steps:

a. Select SSM > Resources.

b. Look for abnormal conditions in the Storage Hardware section.

Related information
Troubleshooting StorageGRID Webscale
SG6000 appliance installation and maintenance
SG5700 appliance installation and maintenance
SG5600 appliance installation and maintenance

Monitoring the Total Events alarm


When the Total Events alarm is raised, monitor the situation and take appropriate action.

Category: Total events
Code: SMTT
Service: SSM
Notes: The total number of logged error or fault events (Total Events SMTT) includes errors such
as network errors. Unless these errors have been cleared (that is, the count has been reset to 0),
total events alarms can be triggered.
Note: This alarm is safe to ignore only if the events that triggered the alarm have been
investigated.

For more information about alarms, see the troubleshooting information.



Related information
Troubleshooting StorageGRID Webscale

About alarms and email notifications


An email notification is a message automatically sent by the StorageGRID Webscale system to
configured recipients, which notifies recipients of a newly triggered alarm or service state change.
You can configure email notifications and set mailing lists to receive these email notifications for any
particular alarm severity or state change.
If an email address (or list) belongs to multiple mailing lists, only one email notification is sent when
a notification triggering event occurs. For example, one group of administrators within your
organization can be configured to receive notifications for all alarms regardless of severity. Another
group might only require notifications for alarms with a severity of Critical. You can belong to both
lists. If a Critical alarm is triggered, you receive one notification, not two.

Alarm notification types


The StorageGRID Webscale system sends out two types of alarm notifications: severity level and
service state.
Severity level notifications
Severity level notifications are sent at the alarm level and are associated with attributes. A mailing list
receives all notifications related to alarms of the selected severity: Notice, Minor, Major, and Critical.
A notification is sent when an event triggers an alarm for the selected alarm level. A notification is
also sent when the alarm leaves the alarm level — either by being resolved or by entering a different
alarm severity level.
Service state notifications
Service State notifications are sent at the services level and are associated with services; for example,
the LDR service or CMS service. A mailing list receives all notifications related to changes in the
selected state: Unknown, or Administratively Down. A notification is sent when a service enters the
selected Service State and when it leaves the selected Service State.

Notification status and queues


You can view the current status of the NMS service’s ability to send notifications to the mail server
and the size of its notifications queue through the Interface Engine page.
To access the Interface Engine page, select Support > Grid Topology. Then, select site > Admin
Node > NMS > Interface Engine.

Notifications are processed through the email notifications queue and are sent to the mail server one
after another in the order they are triggered. If there is a problem (for example, a network connection
error) and the mail server is unavailable when the attempt is made to send the notification, a best
effort attempt to resend the notification to the mail server continues for a period of 60 seconds. If the
notification is not sent to the mail server after 60 seconds, the notification is dropped from the
notifications queue and an attempt to send the next notification in the queue is made. Because
notifications can be dropped from the notifications queue without being sent, it is possible that an
alarm can be triggered without a notification being sent. In the event that a notification is dropped
from the queue without being sent, the MINS (E-mail Notification Status) Minor alarm is triggered.
For a StorageGRID Webscale system configured with multiple Admin Nodes (and thus multiple
NMS services), if the “standby” sender detects a Server Connection Error with the preferred sender,
it will begin sending notifications to the mail server. The standby sender will continue to send
notifications until it detects that the preferred sender is no longer in an error state and is again
successfully sending notifications to the mail server. Notifications in the preferred sender’s queue are
not copied to the standby sender. Note that in a situation where the preferred sender and the standby
sender are islanded from each other, duplicate messages can be sent.
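
The documented best-effort behavior can be summarized in code form. This is an illustration of the
60-second send window described above, not the NMS service's actual implementation;
send_to_mail_server is a hypothetical stand-in.

# Sketch: best-effort delivery for the notifications queue. A notification
# that cannot be sent within 60 seconds is dropped, which triggers the
# MINS Minor alarm. Illustrative only.
import time

RETRY_WINDOW_SECONDS = 60

def process_queue(queue, send_to_mail_server):
    for notification in queue:
        deadline = time.monotonic() + RETRY_WINDOW_SECONDS
        sent = False
        while time.monotonic() < deadline:
            if send_to_mail_server(notification):
                sent = True
                break
            time.sleep(5)  # brief pause before retrying
        if not sent:
            print(f"Dropped without sending: {notification}")  # MINS alarm case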

Related tasks
Selecting a preferred sender on page 63

Configuring notifications
By default, notifications are not sent. You must configure the StorageGRID Webscale system to send
notifications when alarms are raised.

Steps
1. Configuring email server settings on page 57
2. Creating email templates on page 58
3. Creating mailing lists on page 59
4. Configuring global email notifications on page 60

Configuring email server settings


The E-mail Server page allows you to configure SMTP mail server settings that enable the sending of
alarm notifications and AutoSupport messages. The StorageGRID Webscale system only sends
email; it cannot receive email.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.


• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Only the SMTP protocol is supported for sending email.

Steps

1. Select Configuration > Email Setup.

2. From the Email menu, select Server.

3. Add the following SMTP mail server settings:

Mail Server
IP address of the SMTP mail server. You can enter a host name rather than an IP address if you
have previously configured DNS settings on the Admin Node.

Port
Port number to access the SMTP mail server.

Authentication
Allows for the authentication of the SMTP mail server. By default, authentication is Off.

Authentication Credentials
Username and Password of the SMTP mail server. If Authentication is set to On, a username and
password to access the SMTP mail server must be provided.

4. Under From Address, enter a valid email address that the SMTP server will recognize as the
sending email address. This is the official email address from which the alarm notification or
AutoSupport message is sent.

5. Optionally, send a test email to confirm that your SMTP mail server settings are correct.

a. In the Test E-mail > To box, add one or more addresses that you can access.
You can enter a single email address or a comma-delineated list of email addresses. Because
the NMS service does not confirm success or failure when a test email is sent, you must be
able to check the test recipient’s inbox.

b. Select Send Test E-mail.

6. Click Apply Changes.


The SMTP mail server settings are saved. If you entered information for a test email, that email is
sent. Test emails are sent to the mail server immediately and are not sent through the notifications
queue. In a system with multiple Admin Nodes, each Admin Node sends an email. Receipt of the
test email confirms that your SMTP mail server settings are correct and that the NMS service is
successfully connecting to the mail server. A connection problem between the NMS service and
the mail server triggers the MINS (NMS Notification Status) Minor alarm.
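
If you want to rule out basic SMTP connectivity problems independently of the grid, you can
exercise the same host, port, and credentials from any workstation using Python's standard
smtplib. All host names and addresses below are placeholders for your own values.

# Sketch: independent test of the SMTP settings entered above, using only
# the Python standard library. Replace the placeholder values.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "SMTP connectivity test"
msg["From"] = "storagegrid@example.com"   # the From Address you configured
msg["To"] = "admin@example.com"           # a mailbox you can check
msg.set_content("Test message sent outside of StorageGRID Webscale.")

with smtplib.SMTP("mail.example.com", 25) as server:  # Mail Server, Port
    # server.login("username", "password")  # only if Authentication is On
    server.send_message(msg)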

Related tasks
Creating mailing lists on page 59

Creating email templates


Create an email template to customize the header, footer, and subject line of a notification. You can
use email templates to send unique notifications that contain the same body text to different mailing
lists.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Different mailing lists might require different contact information. Templates do not include the body
text of the email message.

Steps

1. Select Configuration > Email Setup.

2. From the Email menu, select Templates.

3. Click Edit (or Insert if this is not the first template).



4. In the new row add the following:

Template Name
Unique name used to identify the template. Template names cannot be duplicated.

Subject Prefix
Optional. Prefix that will appear at the beginning of an email's subject line. Prefixes can be used
to easily configure email filters and organize notifications.

Header
Optional. Header text that appears at the beginning of the email message body. Header text can
be used to preface the content of the email message with information such as company name and
address.

Footer
Optional. Footer text that appears at the end of the email message body. Footer text can be used
to close the email message with reminder information such as a contact phone number or a link
to a web site.

5. Click Apply Changes.


A new template for notifications is added.
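
To visualize how the four template fields frame a notification, the sketch below assembles an
example message. The layout is illustrative; the actual formatting is produced by the NMS service.

# Sketch: how a template's subject prefix, header, and footer might frame
# the body text of a notification. Illustrative only.
def render_notification(subject_prefix, header, body, footer, subject):
    full_subject = f"{subject_prefix} {subject}".strip()
    full_body = "\n\n".join(part for part in (header, body, footer) if part)
    return full_subject, full_body

subject, body = render_notification(
    subject_prefix="[GRID-PROD]",
    header="Example Corp Storage Operations",
    body="Alarm CDLP (Major) on Storage Node DC1-S1.",
    footer="On-call: +1-555-0100",
    subject="StorageGRID Webscale alarm notification",
)
print(subject)
print(body)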

Creating mailing lists


You can create mailing lists for notifications. A mailing list enables you to send one email message to
multiple email addresses. These mailing lists are used to send notifications when an alarm is
triggered or when a service state changes. You must create a mailing list before you can send
notifications. To send a notification to a single recipient, create a mailing list with one email address.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Configuration > Email Setup.

2. From the Email menu, select Lists.

3. Click Edit (or Insert if this is not the first mailing list).

4. In the new row, add the following:

Group Name
Unique name used to identify the mailing list. Mailing list names cannot be duplicated.
Note: If you change the name of a mailing list, the change is not propagated to the other
locations that use the mailing list name. You must manually update all configured notifications
to use the new mailing list name.

Recipients
Single email address, a previously configured mailing list, or a comma-delineated list of email
addresses and mailing lists to which notifications will be sent.

Template
Optionally, select an email template to add a unique header, footer, and subject line to
notifications sent to all recipients of this mailing list.

5. Click Apply Changes.


A new mailing list is created.

Related tasks
Creating email templates on page 58

Configuring global email notifications


To receive global email notifications, recipients must be members of a mailing list, and that list
must be added to the Notifications page. Notifications are sent to recipients only when an alarm
with a specified severity level is triggered or when a service state changes. Thus, recipients receive
only the notifications they need.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You must have configured an email list.

Steps

1. Select Configuration > Notifications.

2. Click Edit (or Insert if this is not the first notification).

3. Under E-mail List, add a mailing list.



4. Select one or more alarm severity levels and service states:

Notice (Severity Level)
An unusual condition exists that does not affect normal operation.

Minor (Severity Level)
An abnormal condition exists that could affect operation in the future.

Major (Severity Level)
An abnormal condition exists that is currently affecting operation.

Critical (Severity Level)
An abnormal condition exists that has stopped normal operation.

Unknown (Service State)
An unknown condition exists that has stopped normal service operation.

Administratively Down (Service State)
A condition whereby a service has been purposefully stopped.

5. Click Apply Changes.


Notifications will be sent to the mailing list when alarms with the selected alarm severity level or
service state are triggered or changed.

Related tasks
Creating mailing lists on page 59

Suppressing email notifications for a mailing list


You can suppress notifications for a mailing list system-wide when you do not want a mailing list to
receive notifications, for example while performing maintenance procedures.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Configuration > Notifications.

2. Click Edit next to the mailing list for which you want to suppress notifications.

3. Under Suppress, select the check box next to the mailing list you want to suppress, or select
Suppress at the top of the column to suppress all mailing lists.

4. Click Apply Changes.


Notifications are suppressed for the selected mailing lists.

Suppressing email notifications system wide


You can block the StorageGRID Webscale system's ability to send notifications when an alarm is
triggered.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Attention: Suppressing email notifications system wide also suppresses event-triggered
AutoSupport emails, even when the Enabled check box is selected on the Event-Triggered
AutoSupport page (Support > AutoSupport > Event-triggered).

Steps

1. Select Configuration > Display Options.

2. From the Display Options menu, select Options.

3. Select Notification Suppress All.

4. Click Apply Changes.


The Notifications page (Configuration > Notifications) displays a message indicating that all
notifications are currently suppressed.

Related concepts
Troubleshooting AutoSupport messages on page 82

Selecting a preferred sender


Each site in a StorageGRID Webscale deployment can include one or more Admin Nodes. If a
deployment includes multiple Admin Nodes, you must configure one Admin Node as the preferred
sender of notifications and AutoSupport messages. You can select any Admin Node as the preferred
sender, and you can change which Admin Node is selected at any time.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


The Display Options page lists the Admin Node that is currently sending notifications. In most
cases, this Admin Node is the same as the Preferred Sender; however, if an Admin Node is islanded
from the rest of the system, it is unable to use the Preferred Sender and automatically updates to
become the Current Sender. When Admin Nodes are islanded, multiple Admin Nodes might attempt
to send notifications and AutoSupport messages, and it is therefore possible that multiple copies of
notifications will be received.

Steps

1. Select Configuration > Display Options.

2. From the Display Options menu, select Options.

3. Select the Admin Node you want to set as the preferred sender from the drop-down list.

4. Click Apply Changes.


The Admin Node is set as the preferred sender of notifications.

Managing alarms
Customizing alarms lets you tailor StorageGRID Webscale monitoring to your unique requirements.
You can configure customized alarms either globally (Global Custom alarms) or for individual
services (Custom alarms). You can create customized alarms with alarm levels that override Default
alarms, and you can create alarms for attributes that do not have a Default alarm. Alarm
customization is restricted to accounts with the Grid Topology Page Configuration and Other Grid
Configuration permissions.
Attention: Using the Default alarm settings is recommended. Be very careful if you change alarm
settings. For example, if you increase the threshold value for an alarm, you might not detect an
underlying problem until it prevents a critical operation from completing. If you do need to change
an alarm setting, you should discuss your proposed changes with technical support.

For a list of alarm codes, see the troubleshooting information.

Related concepts
Controlling system access with administration user accounts and groups on page 249

Related information
Troubleshooting StorageGRID Webscale

Alarm class types


Alarms are separated into three mutually exclusive alarm classes.

• Default: Standard alarm configurations. Default alarms are set during installation.

• Global Custom: Custom alarms that are set at a global level and that apply to all services of a
given type in the StorageGRID Webscale system. Global Custom alarms are configured after
installation to override default settings.

• Custom: Custom alarms that are set on individual services or components. Custom alarms are
configured after installation to override default settings.

Default alarms
Default alarms are configured on a global basis and cannot be modified. However, Default alarms can
be disabled or overridden by Custom alarms and Global Custom alarms.
Default alarms can be disabled both globally and at the services level. If a Default alarm is disabled
globally, the Enabled check box appears with an adjacent asterisk at the services level on the
Configuration page. The asterisk indicates that the Default alarm has been disabled through the
Configuration > Global Alarms page even though the Enabled check box is selected.
You can view the default alarms for a particular service or component. Select Support > Grid
Topology. Then select service or component > Configuration > Alarms.

Related tasks
Disabling Default alarms for services on page 74
Disabling a Default alarm system wide on page 75

Viewing all Default alarms


You can view all Default alarms that are standard alarm configurations set as part of the installation.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Configuration > Global Alarms.

2. For Filtered by, select Attribute Code or Attribute Name.

3. For equals, enter the wildcard symbol: *

4. Click the arrow or press Enter.


All Default alarms are listed.

Global Custom alarms


Global Custom alarms monitor the status of conditions system-wide. By creating a Global Custom
alarm, you can override a Default alarm system-wide. You can also create a new Global Custom
alarm that will monitor status system-wide. This can be useful for monitoring any customized
conditions of your StorageGRID Webscale system.
You can create Global Custom alarms, and disable Global Custom alarms system wide or for
individual services.

Related tasks
Creating Global Custom alarms on page 72
Disabling Global Custom alarms for services on page 76
Disabling Global Custom alarms system wide on page 77

Custom alarms
Custom alarms can be created to override a Default alarm or Global Custom alarm at the service or
component level. You can also create new Custom alarms based on the service’s unique
requirements.
You can configure Custom alarms by going to each service's Configuration > Alarms page in the
Grid Topology tree.

Related tasks
Creating custom service or component alarms on page 70

Alarm triggering logic


Each alarm class is organized into a hierarchy of five severity levels from Normal to Critical. An
alarm is triggered when a threshold value is reached that evaluates to true against a combination of
alarm class and alarm severity level. Note that a severity level of Normal does not trigger an alarm.
The alarm severity and corresponding threshold value can be set for every numerical attribute. The
NMS service on each Admin Node continuously monitors current attribute values against configured
thresholds. When an alarm is triggered, a notification is sent to all designated personnel.
Attribute values are evaluated against the list of enabled alarms defined for that attribute in the
Alarms table on the Alarms page for a specific service or component (for example, LDR > Storage >
Alarms > Main). The list of alarms is checked in the following order to find the first alarm class
with a defined and enabled alarm for the attribute:

1. Custom alarms with alarm severities from Critical down to Notice.

2. Global Custom alarms with alarm severities from Critical down to Notice.

3. Default alarms with alarm severities from Critical down to Notice.

After an enabled alarm for an attribute is found in the higher alarm class, the NMS service only
evaluates within that class. The NMS service will not evaluate against the other lower priority
classes. That is, if there is an enabled Custom alarm for an attribute, the NMS service only evaluates
the attribute value against Custom alarms. Global Custom alarms and Default alarms are not
evaluated. Thus, an enabled Global Custom alarm for an attribute can meet the criteria needed to
trigger an alarm, but it will not be triggered because a Custom alarm (that does not meet the specified
criteria) for the same attribute is enabled. No alarm is triggered and no notification is sent.
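
The precedence rule can be expressed compactly. The sketch below mirrors the documented
behavior: the first alarm class (Custom, then Global Custom, then Default) that has any enabled
alarm for the attribute is the only class evaluated. The data structures are illustrative, not product
internals, and the example assumes >= operators throughout.

# Sketch: class-precedence evaluation. Each class maps a severity to its
# threshold; only the highest-priority class with enabled alarms counts.
SEVERITIES = ["Critical", "Major", "Minor", "Notice"]

def evaluate(value, custom=None, global_custom=None, default=None):
    for alarm_class in (custom, global_custom, default):
        if alarm_class:  # first class with enabled alarms wins
            for severity in SEVERITIES:
                threshold = alarm_class.get(severity)
                if threshold is not None and value >= threshold:
                    return severity
            return "Normal"  # class exists, but no threshold reached
    return "Normal"

# Example 1 from the next topic: the Global Custom alarm takes precedence,
# and a value of 1000 does not reach any of its thresholds.
print(evaluate(1000, global_custom={"Notice": 1500, "Minor": 15_000,
                                    "Major": 150_000}))  # Normal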

Related concepts
What an Admin Node is on page 218

Alarm triggering examples


You can use these example to understand how Custom alarms, Global Custom alarms, and Default
alarms are triggered.
Example 1
For the following example, an attribute has a Global Custom alarm and a Default alarm defined and
enabled as shown in the following table.

Threshold Values

           Global Custom alarm (enabled)    Default alarm (enabled)
Notice     >= 1500                          >= 1000
Minor      >= 15,000                        >= 10,000
Major      >= 150,000                       >= 250,000

If the attribute is evaluated when its value is 1000, no alarm is triggered and no notification is sent.
The Global Custom alarm takes precedence over the Default alarm. A value of 1000 does not reach
the threshold value of any severity level for the Global Custom alarm. As a result, the alarm level is
evaluated to be Normal.
After the above scenario, if the Global Custom alarm is disabled, nothing changes. The attribute
value must be evaluated again before a new alarm level is triggered.
With the Global Custom alarm disabled, when the attribute value is evaluated again, the attribute
value is evaluated against the threshold values for the Default alarm. The alarm level triggers a Notice
level alarm and an email notification is sent to the designated personnel.
Note, however, that if there are Custom alarms for an attribute, these alarms are still evaluated as
Custom alarms have a higher priority than Global Custom alarms.
Example 2
For the following example an attribute has a Custom alarm, a Global Custom alarm, and a Default
alarm defined and enabled as shown in the following table.

Threshold Values

           Custom alarm (enabled)    Global Custom alarm (enabled)    Default alarm (enabled)
Notice     >= 500                    >= 1500                          >= 1000
Minor      >= 750                    >= 15,000                        >= 10,000
Major      >= 1,000                  >= 150,000                       >= 250,000

If the attribute is evaluated when its value is 1000, a Major alarm is triggered and an email
notification is sent to the designated personnel. The Custom alarm takes precedence over both the
Global Custom alarm and Default alarm. A value of 1000 reaches the threshold value of the Major
severity level for the Custom alarm. As a result, the attribute value triggers a Major level alarm.
Within the same scenario, if the Custom alarm is then disabled and the attribute value evaluated again
at 1000, the alarm level is changed to Normal. The attribute value is evaluated against the threshold
values of the Global Custom alarm, the next alarm class that is defined and enabled. A value of 1000
does not reach any threshold level for this alarm class. As a result, the attribute value is evaluated to
be Normal and no notification is sent. The Major level alarm from the previous evaluation is cleared.
Example 3
For the following example, an attribute has a Custom alarm, a Global Custom alarm, and a Default
alarm defined and enabled or disabled as shown in the following table.

Threshold Values

           Custom alarm (disabled)    Global Custom alarm (enabled)    Default alarm (enabled)
Notice     >= 500                     >= 1500                          >= 1000
Minor      >= 750                     >= 15,000                        >= 10,000
Major      >= 1,000                   >= 150,000                       >= 250,000

If the attribute is evaluated when its value is 10,000, a Notice alarm is triggered and an email
notification is sent to the designated personnel.
The Custom alarm is defined, but disabled; therefore, the attribute value is evaluated against the next
alarm class. The Global Custom alarm is defined, enabled, and it takes precedence over the Default
alarm. The attribute value is evaluated against the threshold values set for the Global Custom alarm
class. A value of 10,000 reaches the Notice severity level for this alarm class. As a result, the attribute
value triggers a Notice level alarm.
If the Global Custom alarm is then disabled and the attribute value evaluated again at 10,000, a Minor
level alarm is triggered. The attribute value is evaluated against the threshold values for the Default
alarm class, the only alarm class that is both defined and enabled.
A value of 10,000 reaches the threshold value for a Minor level alarm. As a result, the Notice level
alarm from the previous evaluation is cleared and the alarm level changes to Minor. An email
notification is sent to the designated personnel.

Alarms of same severity


If two Global Custom or Custom alarms for the same attribute have the same severity, the alarms are
evaluated with a “top down” priority.
For instance, if UMEM drops to 50MB, the first alarm is triggered (= 50000000), but not the one
below it (<=100000000).

If the order is reversed, when UMEM drops to 100MB, the first alarm (<=100000000) is triggered,
but not the one below it (= 50000000).

Alarm class overrides


To override a class of alarms, disable all alarms within that class. If all alarms within a class for an
attribute are disabled, the NMS service interprets the class as having no alarms configured for the
attribute and evaluates the next lower class for enabled alarms.
For example, if an alarm is triggered at the Global Custom alarm class level, it means that there are
no enabled alarms at the Custom alarms class level for that attribute.
Similarly, to override a Default alarm, add a Global Custom alarm or Custom alarm for that
attribute. This override is achieved because the NMS service does not evaluate lower priority alarm
classes once an alarm setting is detected within a class. If this override is performed after an alarm
has already been triggered, the override will not take effect until the alarm is triggered again.

Severity changes
If an alarm’s severity changes, the severity is propagated up the network hierarchy as needed. If there
is a notification configured, a notification is sent. The notification is sent only at the time the alarm
enters or leaves the new severity level.

Notifications
A notification reports the occurrence of an alarm or the change of state for a service. It is an email
communication to designated personnel that the system requires attention.
To avoid multiple alarms and notifications being sent when an alarm threshold value is reached, the
alarm severity is checked against the current alarm severity for the attribute. If there is no change,
then no further action is taken. This means that as the NMS service continues to monitor the system,
it will only raise an alarm and send notifications the first time it notices an alarm condition for an
attribute. If a new value threshold for the attribute is reached and detected, the alarm severity changes
and a new notification is sent. Alarms are cleared when conditions return to the Normal level.
The trigger value shown in the notification of an alarm state is rounded to three decimal places.
Therefore, an attribute value of 1.9999 triggers an alarm whose threshold is less than (<) 2.0,
although the alarm notification shows the trigger value as 2.0.
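
You can reproduce this rounding behavior directly:

# Sketch: why a value of 1.9999 can trigger a "< 2.0" alarm yet display
# as 2.0 in the notification (trigger values are rounded to three
# decimal places).
value = 1.9999
threshold = 2.0
print(value < threshold)   # True: the alarm condition is met
print(round(value, 3))     # 2.0: the value shown in the notification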

New services
As new services are added through the addition of new grid nodes or sites, they inherit Default alarms
and Global Custom alarms.

Creating custom service or component alarms


Customizing alarm settings enables you to create a customized methodology for monitoring the
StorageGRID Webscale system. You can create alarms on individual services or components in
addition to creating global alarms.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


You should not change Default alarm values unless absolutely necessary. By changing Default
alarms, you run the risk of concealing problems that might otherwise trigger an alarm

Steps

1. Select Support > Grid Topology.

2. Select a service or component in the Grid Topology tree.

3. Click Configuration > Alarms.

4. Add a new row to the Custom alarms table:

• Click Edit (if this is the first entry) or Insert to add a new alarm.

• Copy an alarm from the Default alarms or Global Custom alarms tables. Click Copy next
to the alarm you want to modify.

5. Make any necessary changes to the Custom alarm settings:

Enabled
Select or clear to enable or disable the alarm.

Attribute
Select the name and code of the attribute being monitored from the list of all attributes applicable
to the selected service or component. To display information about the attribute, click Info next
to the attribute's name.

Severity
The icon and text indicating the level of the alarm.

Message
The reason for the alarm (connection lost, storage space below 10%, and so on).

Operator
Operators for testing the current attribute value against the Value threshold:

• = equal to

• > greater than

• < less than

• >= greater than or equal to

• <= less than or equal to

• ≠ not equal to

Value
The alarm's threshold value used to test against the attribute's actual value using the operator.
The entry can be a single number, a range of numbers specified with a colon (1:3), or a comma-
delineated list of numbers and/or ranges. (A parsing sketch follows this procedure.)

Additional Recipients
A supplementary list of email addresses to be notified when the alarm is triggered, in addition to
the mailing list's configuration on the Configuration > Notifications page. Lists are comma
delineated.
Note: Mailing lists require SMTP server setup in order to operate. Before adding mailing lists,
confirm that SMTP is configured.
Notifications for Custom alarms can override notifications from Global Custom or Default
alarms.

Actions
Control buttons to: edit a row, insert a row, delete a row, drag-and-drop a row up or down, or
copy a row.

6. Click Apply Changes.
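
As referenced in the Value row above, the threshold entry can be a single number, a colon range
such as 1:3, or a comma-delineated list of numbers and ranges. The following sketch parses those
formats into a membership test; it is illustrative only, not the product's parser.

# Sketch: parse a Value entry into a predicate that tests an attribute
# value. Supports single numbers, colon ranges (1:3), and comma lists.
def parse_value(spec):
    parts = []
    for token in spec.split(","):
        token = token.strip()
        if ":" in token:
            low, high = (float(x) for x in token.split(":"))
            parts.append((low, high))
        else:
            number = float(token)
            parts.append((number, number))
    return lambda v: any(low <= v <= high for low, high in parts)

matches = parse_value("1:3, 7")
print(matches(2))   # True (inside the range 1:3)
print(matches(5))   # False
print(matches(7))   # True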



Creating Global Custom alarms


You can configure Global Custom alarms when you require a unique alarm that is the same for every
service of the same type. Customizing alarm settings enables you to create a customized
methodology for monitoring the StorageGRID Webscale system.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Global alarms override Default alarms. You should not change Default alarm values unless absolutely
necessary. By changing Default alarms, you run the risk of concealing problems that might otherwise
trigger an alarm.

Steps

1. Select Configuration > Global Alarms.

2. Add a new row to the Global Custom alarms table:

• To add a new alarm, click Edit (if this is the first entry) or Insert.

• To modify a Default alarm, search for the Default alarm.

a. Under Filter by, select either Attribute Code or Attribute Name.

b. Type a search string.


Specify four characters or use wildcards (for example, A??? or AB*). Asterisks (*)
represent multiple characters, and question marks (?) represent a single character. (A
short illustration of this wildcard matching appears after these steps.)

c. Click the arrow icon, or press Enter.

d. In the list of results, click Copy next to the alarm you want to modify.
The Default alarm is copied to the Global Custom alarms table.

3. Make any necessary changes to the Global Custom alarms settings:

Enabled: Select or clear to enable or disable the alarm.

Attribute: Select the name and code of the attribute being monitored from the list of all
attributes applicable to the selected service or component. To display information about the
attribute, click Info next to the attribute’s name.

Severity: The icon and text indicating the level of the alarm.

Message: The reason for the alarm (connection lost, storage space below 10%, and so on).

Operator: The operator for testing the current attribute value against the Value threshold:

• = equal to

• > greater than

• < less than

• >= greater than or equal to

• <= less than or equal to

• ≠ not equal to

Value: The alarm’s threshold value used to test against the attribute’s actual value using the
operator. The entry can be a single number, a range of numbers specified with a colon (1:3), or
a comma-delimited list of numbers and ranges.

Additional Recipients: A supplementary list of email addresses to be notified when the alarm is
triggered. This is in addition to the mailing list configured on the Configuration >
Notifications > Main page. Lists are comma-delimited.
Note: Mailing lists require SMTP server setup in order to operate. Before adding mailing
lists, confirm that SMTP is configured.
Notifications for Custom alarms can override notifications from Global Custom or Default alarms.

Actions: Control buttons to:

Edit a row

Insert a row

Delete a row

Drag-and-drop a row up or down

Copy a row

4. Click Apply Changes.
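The wildcard syntax used when filtering alarms behaves like standard shell globbing, where an asterisk matches any run of characters and a question mark matches exactly one. A minimal illustration using Python's fnmatch module, which implements the same semantics:

    from fnmatch import fnmatchcase

    print(fnmatchcase("ABCD", "A???"))   # True: each ? matches one character
    print(fnmatchcase("ABCD", "AB*"))    # True: * matches any run of characters
    print(fnmatchcase("ABCDE", "A???"))  # False: one character too many for A???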

Disabling alarms
Alarms are enabled by default, but you can disable alarms that are not required.
Disabling an alarm for an attribute that currently has an alarm triggered does not clear the current
alarm. The alarm will be disabled the next time the attribute crosses the alarm threshold, or you can
clear the triggered alarm.
Attention: Disabling alarms has consequences, so take extreme care.
Disabling an alarm can result in no alarm being triggered. Because alarms are evaluated by alarm
class and then by severity level within the class, disabling an alarm at a higher class does not
necessarily result in a lower-class alarm being evaluated. All alarms for a specific attribute must be
disabled before a lower alarm class will be evaluated.

Related tasks
Clearing triggered alarms on page 78

Alarms and tables


Alarm attributes displayed in tables can be disabled at the service, component, or system level.
Alarms cannot be disabled for individual rows in a table.
For example, suppose a table lists two critical Entries Available (VMFI) alarms. You can
disable the VMFI alarm so that the Critical level VMFI alarm is not triggered (both currently Critical
alarms would appear in the table as green); however, you cannot disable a single alarm in a table row
so that one VMFI alarm displays as a Critical level alarm while the other remains green.

Disabling Default alarms for services


To temporarily stop alarms for a specific service, you can disable Default alarms for that service.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.



• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Support > Grid Topology.

2. Select a service or component in the Grid Topology tree.

3. Click Configuration > Alarms.

4. In the Default Alarms table, click Edit next to the alarm you want to disable.

5. Clear the Enabled check box for the alarm.

6. Click Apply Changes.


The Default alarm is disabled for the service or component.

Disabling a Default alarm system wide


You can temporarily disable a Default alarm system wide.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Click Configuration > Global Alarms.

2. Search for the Default alarm to disable.



a. In the Default Alarms section, select Filter by > Attribute Code or Attribute Name.

b. Type a search string.


Specify four characters or use wildcards (for example, A??? or AB*). Asterisks (*) represent
multiple characters, and question marks (?) represent a single character.

c. Click the arrow icon, or press Enter.

Note: Selecting Disabled Defaults displays a list of all currently disabled Default alarms.

3. In the Default Alarms table, click the Edit icon next to the alarm you want to disable.

4. Clear the Enabled check box.

5. Click Apply Changes.


The Default alarm is disabled system wide.

Disabling Global Custom alarms for services


To disable a Global Custom alarm for a service, you must create another enabled alarm for the
attribute. This is required because, if all alarms within a class for an attribute are disabled, the
NMS service interprets the class as having no alarms configured for the attribute and evaluates the
next lower class for an enabled alarm.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Instead of creating a Global Custom alarm and disabling it for selected services, reconfigure the
alarms such that you create individual local Custom alarms for the services that require the alarm. If
you want to ensure that all these Custom alarms have the same configuration, you can create a Global
Custom alarm, disable it, and then enable it for selected services as a Custom alarm.
If you want to create a Global Custom alarm and disable it for selected services, you must create a
local Custom alarm for that service that will never be triggered. A local Custom alarm that is never
triggered overrides all Global Custom alarms for that service.
Note: Alarms cannot be disabled for individual rows in a table.

Steps

1. Select Configuration > Global Alarms.

2. In the Global Custom alarms table, click Copy next to the alarm you want to disable.
The alarm is copied to the Custom Alarms table.

3. Clear the Enabled check box for the alarm.

4. Click Apply Changes.

Related tasks
Creating custom service or component alarms on page 70

Disabling Global Custom alarms system wide


You can disable a Global Custom alarm for the entire system.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Note: Alarms cannot be disabled for individual rows in a table.

Steps

1. Select Configuration > Global Alarms.

2. In the Global Custom Alarms table, click Edit next to the alarm you want to disable.

3. Clear the Enabled check box.



4. Click Apply Changes.


The Global Custom alarm is disabled system wide.

Clearing triggered alarms


Disabling an alarm for an attribute that currently has an alarm triggered against it does not clear the
alarm. The alarm will be disabled the next time the attribute changes. You can acknowledge the alarm
or, rather than waiting for the attribute value to change (which results in a change to the alarm
state), you can immediately clear the triggered alarm. You might find this helpful when you want to
clear an alarm immediately for an attribute whose value does not change often (for example, state
attributes).

Before you begin


You must have the root/admin password as listed in the Passwords.txt file.

Steps

1. Disable the alarm.

2. From the service laptop, log in to the primary Admin Node:

a. Enter the following command: ssh admin@primary_Admin_Node_IP

b. Enter the password listed in the Passwords.txt file.

c. Enter the following command to switch to root: su -

d. Enter the password listed in the Passwords.txt file.


When you are logged in as root, the prompt changes from $ to #.

3. Restart the NMS service:


service nms restart

4. Log out of the Admin Node:


exit

The alarm is cleared.

Related concepts
Disabling alarms on page 74

What AutoSupport is
AutoSupport enables technical support to proactively monitor the health of your StorageGRID
Webscale system.
You can use any combination of the following choices to send AutoSupport messages to technical
support:

• User-triggered: Manually send AutoSupport messages at any time

• Weekly: Automatically send AutoSupport messages once per week (default setting: Enabled)

• Event-triggered: Automatically send AutoSupport messages when significant system events
occur (default setting: Enabled)

By analyzing AutoSupport information, technical support can help you determine the health and
status of your StorageGRID Webscale system and troubleshoot any problems that might occur.
Technical support can also monitor the storage needs of the system, such as the need to expand.

Related information
NetApp Support

Sending user-triggered AutoSupport messages


You can manually trigger an AutoSupport message at any time.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Support > AutoSupport.

2. From the AutoSupport menu, select User-triggered.

3. Click Send.

The StorageGRID Webscale system attempts to send an AutoSupport message to technical
support. If the attempt is successful, the Last Attempt attribute updates to Successful. If there is a
problem, the Last Attempt attribute updates to Failed. The StorageGRID Webscale system does
not try again.
If a failure occurs, verify that the StorageGRID Webscale system’s email server is correctly
configured and that your email server is running. The following error message might appear on
the AutoSupport page: “AutoSupport messages cannot be sent due to incorrect settings on the
E-mail Server page.”
The AutoSupport message includes the following information:

• StorageGRID Webscale software version

• Operating system version

• System-level and location-level attribute information

• All alarms raised in the last seven days

• Current status of all grid tasks, including historical data

• Events information as listed on the Nodes > Grid Node > Events page

• Admin Node database usage

• Number of lost or missing objects

• Grid configuration settings

• NMS entities

• Active ILM policy

• Provisioned grid specification file

Related tasks
Configuring email server settings on page 57

Disabling weekly AutoSupport messages


By default, the StorageGRID Webscale system is configured to send an AutoSupport message to
NetApp Support once a week.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


To determine when the weekly AutoSupport message is sent, see the Next Scheduled Time attribute
on the AutoSupport > Weekly page. You can disable the automatic sending of an AutoSupport
message at any time.

Steps

1. Select Support > AutoSupport.

2. From the AutoSupport menu, select Weekly.

3. Clear the Enabled check box.

4. Click Apply Changes.

Disabling event-triggered AutoSupport messages


By default, the StorageGRID Webscale system is configured to send an AutoSupport message to
NetApp Support when a significant system event occurs.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


You can disable the automatic sending of an AutoSupport message at any time.

Steps

1. Select Support > AutoSupport.

2. From the AutoSupport menu, select Event-triggered.

3. Clear the Enabled check box.



4. Click Apply Changes.

Troubleshooting AutoSupport messages


If an attempt to send an AutoSupport message fails, the StorageGRID Webscale system takes
different actions depending on the type of AutoSupport message.
Attention: Suppressing email notifications system wide (Configuration > Display Options,
Notifications Suppress All check box) suppresses event-triggered AutoSupport emails, even when
the Enabled check box is selected on the Event-Triggered AutoSupport page.

Weekly AutoSupport message failure


If a Weekly AutoSupport message fails to send, the StorageGRID Webscale system takes the
following actions:

1. Updates the Most Recent Result attribute to Retrying.

2. Attempts to resend the AutoSupport message every four minutes for one hour (15 attempts).

3. After one hour of send failures, updates the Most Recent Result attribute to Failed.

4. Attempts to send an AutoSupport message again at the next scheduled time.

5. Maintains the regular AutoSupport schedule if the message fails because the NMS service is
unavailable, and if a message is sent before seven days pass.

6. When the NMS service is available again, sends an AutoSupport message immediately if a
message has not been sent for seven days or more.

User-triggered or Event-triggered AutoSupport message failure


If a user-triggered or an event-triggered AutoSupport message fails to send, the StorageGRID
Webscale system takes the following actions:

1. Displays an error message if the error is known. For example, an email server configuration error
could appear.

2. Does not attempt to send the message again.

3. Logs the error in nms.log.

Correcting an AutoSupport message failure


If a failure occurs, the following error message might appear on the AutoSupport page: “AutoSupport
messages cannot be sent due to incorrect settings on the E-mail Server page.”

Check that the StorageGRID Webscale system’s email server is correctly configured and that your
email server is running.

Related tasks
Configuring email server settings on page 57
Suppressing email notifications system wide on page 62

Using reports
You can use reports to monitor the state of the StorageGRID Webscale system and troubleshoot
problems. The types of reports available in the Grid Manager include pie charts (on the Dashboard
only), graphs, and text reports.

Types of charts
In addition to the summary pie charts shown on the Dashboard, you can access more detailed graphs
that present the data with the attribute value (vertical axis) over a specified time span (horizontal
axis).
The Dashboard provides access to pie charts summarizing available storage as well as graphs for
ILM and protocol operations.
In addition, graphs are available from the Nodes page and the Grid Topology tree. There are three
types of graphs:

• Line graph: Used to plot the values of an attribute that has a unit value (such as NTP Frequency
Offset, in ppm). The changes in the value are plotted in regular data intervals (bins) over time.

• Area graph: Used to plot volumetric quantities, such as object counts or service load values. Area
graphs are similar to line graphs, but include a light brown shading below the line. The changes in
the value are plotted in regular data intervals (bins) over time.

• State graph: State graphs are used to plot values that represent distinct states such as a service
state that can be online, standby, or offline. State graphs are similar to line graphs, but the
transition is discontinuous, that is, the value jumps from one state value to another.

Chart legend
The lines and colors used to draw charts have specific meanings:

• Reported attribute values are plotted using dark green lines.

• Light green shading around dark green lines indicates that the actual values in that time range
vary and have been “binned” for faster plotting. The dark line represents the weighted average.
The range in light green indicates the maximum and minimum values within the bin. Light
brown shading is used for area graphs to indicate volumetric data.

• Blank areas (no data plotted) indicate that the attribute values were unavailable. The
background can be blue, gray, or a mixture of gray and blue, depending on the state of the
service reporting the attribute.

• Light blue shading indicates that some or all of the attribute values at that time were
indeterminate; the attribute was not reporting values because the service was in an unknown
state.

• Gray shading indicates that some or all of the attribute values at that time were not known
because the service reporting the attributes was administratively down.

• A mixture of gray and blue shading indicates that some of the attribute values at the time were
indeterminate (because the service was in an unknown state), while others were not known
because the service reporting the attributes was administratively down.

Displaying charts
The Nodes page contains the charts you should access regularly to monitor attributes such as storage
capacity and throughput. In some cases, especially when working with technical support, you might
use the Grid Topology tree to access additional charts.

Before you begin


You must be signed in to the Grid Manager using a supported browser.

About this task


Note: You cannot create charts for all attributes; for example, text attributes such as Node ID,
version number, and build number.

Steps

1. Select Nodes > grid or grid node.

2. Select the appropriate tab, or select Chart to the right of an attribute to display a chart.
For some attributes, the chart appears when you select the tab. For other attributes, you select the
Chart icon to display the chart.

3. To display additional attributes and charts, select Support > Grid Topology.

4. Select grid node > component or service > Overview > Main.

5. Click Chart next to the attribute.


The display automatically changes to the Reports > Charts page. The chart displays the
attribute’s data over the past day.

Generating charts
Charts display a graphical representation of attribute data values. You can report on a data center site,
grid node, component, or service.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.

Steps

1. Select Support > Grid Topology.

2. Select grid node > component or service > Reports > Charts.

3. Select the attribute to report on from the Attribute drop-down list.

4. To force the Y-axis to start at zero, deselect the Vertical Scaling check box.

5. To show values at full precision, select the Raw Data checkbox, or to round values to a maximum
of three decimal places (for example, for attributes reported as percentages), deselect the Raw
Data checkbox.

6. Select the time period to report on from the Quick Query drop-down list.

Select the Custom Query option to select a specific time range.


The chart appears after a few moments. Allow several minutes for tabulation of long time ranges.

7. If you selected Custom Query, customize the time period for the chart by entering the Start Date
and End Date.
Use the format YYYY/MM/DD HH:MM:SS in local time. Leading zeros are required to match the
format. For example, 2017/4/6 7:30:00 fails validation; the correct format is 2017/04/06
07:30:00. (A short validation sketch appears after these steps.)

8. Click Update.
A chart is generated after a few moments. Allow several minutes for tabulation of long time
ranges.

9. If you want to print the chart, right-click the chart and select Print, modify any necessary
printer settings, and click Print.
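As noted in the steps above, custom time ranges must use zero-padded fields. A quick way to check a value before entering it is to round-trip it through a strict parse; a minimal sketch:

    from datetime import datetime

    FORMAT = "%Y/%m/%d %H:%M:%S"  # YYYY/MM/DD HH:MM:SS

    def validate_query_time(text):
        """Return True only if text matches the format with leading zeros."""
        try:
            parsed = datetime.strptime(text, FORMAT)
        except ValueError:
            return False
        # strptime tolerates unpadded fields, so compare the round trip.
        return parsed.strftime(FORMAT) == text

    print(validate_query_time("2017/4/6 7:30:00"))     # False
    print(validate_query_time("2017/04/06 07:30:00"))  # True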

Types of text reports


Text reports display a textual representation of attribute data values that have been processed by the
NMS service. There are two types of reports generated depending on the time period you are
reporting on: raw text reports for periods less than a week, and aggregate text reports for time periods
greater than a week.

Raw text reports


A raw text report displays details about the selected attribute:

• Time Received: Local date and time that a sample value of an attribute’s data was processed by
the NMS service.

• Sample Time: Local date and time that an attribute value was sampled or changed at the source.

• Value: Attribute value at sample time.



Aggregate text reports


An aggregate text report displays data over a longer period of time (usually a week) than a raw text
report. Each entry is the result of summarizing multiple attribute values (an aggregate of attribute
values) by the NMS service over time into a single entry with average, maximum, and minimum
values that are derived from the aggregation.
Each entry displays the following information:

• Aggregate Time: Last local date and time that the NMS service aggregated (collected) a set of
changed attribute values.
• Average Value: The average of the attribute’s value over the aggregated time period.
• Minimum Value: The minimum value over the aggregated time period.

• Maximum Value: The maximum value over the aggregated time period.

Generating text reports


Text reports display a textual representation of attribute data values that have been processed by the
NMS service. You can report on a data center site, grid node, component, or service.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.

About this task


For attribute data that is expected to change continuously, the attribute data is sampled by the
NMS service (at the source) at regular intervals. For attribute data that changes infrequently (for
example, data based on events such as state or status changes), an attribute value is sent to the NMS
service when the value changes.
The type of report displayed depends on the configured time period. By default, aggregate text
reports are generated for time periods longer than one week.
Gray text indicates the service was administratively down during the time it was sampled. Blue text
indicates the service was in an unknown state.

Steps

1. Select Support > Grid Topology.



2. Select grid node > component or service > Reports > Text.

3. Select the attribute to report on from the Attribute drop-down list.

4. Select the number of results per page from the Results per Page drop-down list.

5. To round values to a maximum of three decimal places (for example, for attributes reported as
percentages), deselect the Raw Data check box.

6. Select the time period to report on from the Quick Query drop-down list.
Select the Custom Query option to select a specific time range.
The report appears after a few moments. Allow several minutes for tabulation of long time ranges.

7. If you selected Custom Query, customize the time period to report on by entering the Start Date
and End Date.
Use the format YYYY/MM/DD HH:MM:SS in local time. Leading zeros are required to match the
format. For example, 2017/4/6 7:30:00 fails validation. The correct format is: 2017/04/06
07:30:00.

8. Click Update.
A text report is generated after a few moments. Allow several minutes for tabulation of long time
ranges. Depending on the length of time set for the query, either a raw text report or aggregate
text report is displayed.

9. If you want to print the report, right-click the report and select Print, modify any necessary
printer settings, and click Print.

Exporting text reports


Exported text reports open a new browser tab, which enables you to select and copy the data.

About this task


The copied data can then be saved into a new document (for example, a spreadsheet) and used to
analyze the performance of the StorageGRID Webscale system.

Steps

1. Select Support > Grid Topology.

2. Create a text report.

3. Click Export.

The Export Text Report window opens displaying the report.

4. Select and copy the contents of the Export Text Report window.
This data can now be pasted into a third-party document such as a spreadsheet.
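If you regularly move exported data into a spreadsheet, a short script can do the conversion. A minimal sketch that turns tab-separated report lines (saved from the Export Text Report window into report.txt, a hypothetical file name; adjust the delimiter to match your copy) into a CSV file:

    import csv

    with open("report.txt") as src, open("report.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        for line in src:
            fields = line.rstrip("\n").split("\t")
            if any(field.strip() for field in fields):  # skip blank lines
                writer.writerow(fields)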

Creating and managing tenant accounts


When you create a tenant account, you specify who can use your StorageGRID Webscale system to
store and retrieve objects, and what functionality is available to them.

What tenant accounts are


Tenant accounts allow client applications that use the Simple Storage Service (S3) REST API or the
Swift REST API to store and retrieve objects on StorageGRID Webscale.
Each tenant account supports the use of a single protocol, which you specify when you create the
account. To store and retrieve objects to a StorageGRID Webscale system with both protocols, you
must create two tenant accounts: one for S3 buckets and objects, and one for Swift containers and
objects. Each tenant account has its own unique account ID, federated or local groups and users,
containers (buckets for S3), and objects.
Optionally, you can create additional tenant accounts if you want to segregate the objects stored on
your system by different entities. For example, you might set up multiple tenant accounts in either of
these use cases:

• Enterprise use case: If you are administering a StorageGRID Webscale system in an enterprise
application, you might want to segregate the grid's object storage by the different departments in
your organization. In this case, you could create tenant accounts for the Marketing department,
the Customer Support department, the Human Resources department, and so on.
Note: If you use the S3 client protocol, you can simply use S3 buckets and bucket policies to
segregate objects between the departments in an enterprise. You do not need to use tenant
accounts. See the instructions for implementing S3 client applications for more information.

• Service provider use case: If you are administering a StorageGRID Webscale system as a
service provider, you can segregate the grid's object storage by the different entities that will lease
the storage on your grid. In this case, you would create tenant accounts for Company A, Company
B, Company C, and so on.

Creating tenant accounts


When you create a tenant account, you specify the following information:

• Display name for the tenant account (the tenant's account ID is assigned automatically and cannot
be changed)

• Which client protocol will be used by the tenant account (S3 or Swift)

• Initial password for the tenant account’s root user

• Whether the tenant account will use its own identity source or share the grid's identity source

• For S3 tenant accounts: Whether the tenant account has permission to use platform services with
S3 buckets. If you permit tenant accounts to use platform services, you must ensure that the grid
is configured to support their use. See “Managing platform services” for more information.

• Optionally, a storage quota for the tenant account—the maximum number of gigabytes, terabytes,
or petabytes available for the tenant's objects. A tenant's storage quota represents a logical amount
(object size), not a physical amount (size on disk).
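Tenant accounts can also be created programmatically through the Grid Management API. The sketch below is an assumption-laden illustration: the endpoint paths (/api/v2/authorize and /api/v2/grid/accounts), the payload field names, and the token location in the response are based on typical Grid Management API conventions and should be confirmed against the interactive API documentation on your Admin Node before use:

    import requests

    ADMIN_NODE = "https://admin.example.com"  # hypothetical Admin Node address

    # Sign in to the Grid Management API (endpoint path is an assumption).
    auth = requests.post(
        f"{ADMIN_NODE}/api/v2/authorize",
        json={"username": "root", "password": "grid-password", "cookie": False},
        verify=False,  # replace with your CA bundle in production
    )
    token = auth.json()["data"]  # token location is an assumption

    # Create an S3 tenant account (payload field names are assumptions).
    account = requests.post(
        f"{ADMIN_NODE}/api/v2/grid/accounts",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "name": "Marketing",                   # display name
            "capabilities": ["s3"],                # client protocol
            "policy": {
                "useAccountIdentitySource": True,  # tenant uses own identity source
                "allowPlatformServices": False,
                "quotaObjectBytes": 500 * 1000**3, # 500 GB logical quota
            },
            "password": "tenant-root-password",    # tenant root user password
        },
        verify=False,
    )
    print(account.json())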

Configuring S3 tenants
After an S3 tenant account is created, tenant users can access the Tenant Manager to perform tasks
such as the following:

• Setting up identity federation (unless the identity source is shared with the grid), or creating local
groups and users

• Managing S3 access keys

• Creating and managing S3 buckets

• Using platform services (if enabled)

• Monitoring storage usage

Attention: S3 tenant users can create and manage S3 buckets with the Tenant Manager, but they
must have S3 access keys and use the S3 REST API to ingest and manage objects.

Configuring Swift tenants


After a Swift tenant account is created, the tenant's root user can access the Tenant Manager to
perform tasks such as the following:

• Setting up identity federation (unless the identity source is shared with the grid), or creating local
groups and users

• Monitoring storage usage

Attention: The tenant's root user can sign in to the Tenant Manager. However, the tenant's root user
does not have permission to use the Swift REST API. To authenticate to the Swift REST API to
create containers and ingest objects, the user must belong to a group with the Administrator
permission. However, administrator users cannot sign in to the Tenant Manager.

Steps
1. Creating a tenant account on page 93
2. Changing the password for a tenant account's root user on page 96
3. Editing a tenant account on page 97
4. Deleting tenant accounts on page 99
5. Managing platform services on page 100

Related tasks
Managing platform services on page 100

Related information
Implementing S3 client applications
Implementing Swift client applications
Using tenant accounts

Creating a tenant account


You must create at least one tenant account to control access to the storage in your StorageGRID
Webscale system.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Tenants.
The Tenant Accounts page appears.

2. Click Create.
Step 1 - Create Tenant Account appears.

3. In the Display Name text box, enter the display name for this tenant account.
When the tenant account is created, it receives a unique, numeric Account ID; for this reason,
display names are not required to be unique.

4. Select the client protocol that will be used by this tenant account, either S3 or Swift.

5. Uncheck the Uses Own Identity Source checkbox if this tenant account will use the identity
source that was configured for the Grid Manager. See “Configuring identity federation” for more
information.
If this checkbox is selected (default), you must configure a unique identity source for this tenant if
you want to use identity federation for tenant groups and users. See the instructions for using
tenant accounts.

6. Uncheck the Allow Platform Services checkbox if you do not want this tenant to use platform
services for S3 buckets.
If platform services are enabled, a tenant can use features, such as CloudMirror replication, that
access external services. You might want to disable the use of these features to limit the amount
of network bandwidth or other resources a tenant consumes. See “Managing platform services”
for more information.

7. Optionally, enter the maximum number of gigabytes, terabytes, or petabytes that you want to
make available for this tenant's objects in the Storage Quota text box. Then, select the units from
the drop-down list.
Leave this field blank if you want this tenant to have an unlimited quota.
Note: A tenant's storage quota represents a logical amount (object size), not a physical amount
(size on disk). ILM copies and erasure coding do not contribute to the amount of quota used;
for example, a 1 GB object counts as 1 GB against the quota even if ILM stores two replicated
copies (2 GB on disk). If the quota is exceeded, the tenant account cannot create new objects.

Note: You can monitor tenant storage usage from the Dashboard in the Tenant Manager or with
the Tenant Management API. Note that a tenant's storage usage values might become out of
date if nodes are isolated from other nodes in the grid. The totals will be updated when network
connectivity is restored.

8. In the Tenant Root User Password section, enter a password for the tenant account's root user.

9. Click Save.
The tenant account is created, and Step 2 - Configure Tenant Account appears.

10. Decide whether to configure the tenant account now or later, as follows:

• If you are ready to configure the new tenant account, go to step 11.

• If you or someone else will configure the tenant account later, go to step 13.

11. Click the Sign in as root button.


A green check mark appears on the button, indicating that you are now signed in to the tenant
account as the root user.

12. Select one or more of the following links to configure the tenant account in the Tenant Manager.

a. If you created an S3 tenant and you want to create and manage the S3 buckets for this
account, select Buckets.
The Buckets page for the Tenant Manager opens on a new tab. To complete this page, see the
instructions for using tenant accounts.

b. If you want to set up an identity source for the tenant, select Identity Federation.
Note: This link appears only if you left the Uses Own Identity Source checkbox selected
in step 5.

The Identity Federation page for the Tenant Manager opens on a new tab. To complete this
page, see the instructions for using tenant accounts.

c. If you want to configure the groups who can access the tenant, select Groups.
The Groups page for the Tenant Manager opens on a new tab. To complete this page, see the
instructions for using tenant accounts.

d. If you want to configure local users who can access the tenant, select Users.
If you are using federated groups, you do not need to configure users.
The Users page for the Tenant Manager opens on a new tab. To complete this page, see the
instructions for using tenant accounts.

13. Click Finish.


The dialog closes. To access the Tenant Manager later, select Tenants from the menu, click the
Sign in link, and sign in. Or, provide the URL for the Sign in link and the root user password to
the tenant account’s administrator.

Related concepts
Controlling system access with administration user accounts and groups on page 249

Related tasks
Configuring identity federation on page 249
Managing platform services on page 100

Related information
Using tenant accounts

Changing the password for a tenant account's root user


You might need to change the password for a tenant account's root user if the root user is locked out
of the account.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Tenants.
The Tenant Accounts page appears.

2. Select the tenant account you want to edit.


If your system includes more than 20 items, you can specify how many rows are shown on each
page at one time. You can then use your browser's find feature to search for a specific item in the
currently displayed rows.

3. Select Change Root Password.

4. Enter the new password for the tenant account.

5. Click Save.

Related concepts
Controlling system access with administration user accounts and groups on page 249

Editing a tenant account


You can edit a tenant account to change the display name, change the identity source setting, allow or
disallow platform services, or enter a storage quota.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Tenants.
The Tenant Accounts page appears.

2. Select the tenant account you want to edit.


If your system includes more than 20 items, you can specify how many rows are shown on each
page at one time. You can then use your browser's find feature to search for a specific item in the
currently displayed rows.

3. Select Edit Account.

4. Change the values for the fields as required.

a. Change the display name for this tenant account.

b. Change the setting of the Uses Own Identity Source checkbox to determine whether the
tenant account will use its own identity source or the identity source that was configured for
the Grid Manager.
If the tenant has already enabled its own identity source, you cannot unselect the Uses Own
Identity Source checkbox. A tenant must disable its identity source before it can use the
identity source that was configured for the Grid Manager.

c. Change the setting of the Allow Platform Services checkbox to determine whether the tenant
account can use platform services for their S3 buckets.
Attention: If you disable platform services for a tenant who is already using them, the
services that they have configured for their S3 buckets will stop working. No error message
is sent to the tenant. For example, if the tenant has configured CloudMirror replication for
an S3 bucket, they can still store objects in the bucket, but copies of those objects will no
longer be made in the external S3 bucket that they have configured as an endpoint.

d. For Storage Quota, change the maximum number of gigabytes, terabytes, or petabytes
available for this tenant's objects, or leave the field blank if you want this tenant to have an
unlimited quota.
A tenant's storage quota represents a logical amount (object size), not a physical amount (size
on disk).
Note: You can monitor tenant storage usage from the Dashboard in the Tenant Manager or
with the Tenant Management API. Note that a tenant's storage usage values might become
out of date if nodes are isolated from other nodes in the grid. The totals will be updated
when network connectivity is restored.

5. Click Save.

Related concepts
Controlling system access with administration user accounts and groups on page 249

Related tasks
Managing platform services on page 100

Deleting tenant accounts


You can delete a tenant account if you want to permanently remove the tenant's access to the system.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You must have removed all buckets (S3), containers (Swift), and objects associated with the
tenant account.

Steps

1. Select Tenants.

2. Select the tenant account you want to delete.


If your system includes more than 20 items, you can specify how many rows are shown on each
page at one time. You can then use your browser's find feature to search for a specific item in the
currently displayed rows.

3. Click Remove.

4. Click OK.

Related concepts
Controlling system access with administration user accounts and groups on page 249

Managing platform services


If you enable platform services for S3 tenant accounts, you must configure your grid such that
tenants can access the external resources necessary to use these services.

What platform services are


Platform services include CloudMirror replication, event notifications, and the search integration
service.
These services allow tenants to use the following functionality with their S3 buckets:

• CloudMirror replication: The StorageGRID Webscale CloudMirror replication service is used
to mirror specific objects from a StorageGRID Webscale bucket to a specified external
destination.

• Notifications: Per-bucket event notifications are used to send notifications about specific actions
performed on objects to a specified external Simple Notification Service (SNS).

• Search integration service: The search integration service is used to send S3 object metadata to
a specified Elasticsearch index where the metadata can be searched or analyzed using the external
service.

Platform services give tenants the ability to use external storage resources, notification services, and
search or analysis services with their data. Because the target location for platform services is
typically external to your StorageGRID Webscale deployment, you must decide if you want to permit
tenants to use these services. If you do, you must enable the use of platform services when you create
or edit tenant accounts. You must also configure your network such that the platform services
messages that tenants generate can reach their destinations.

Recommendations for using platform services


Before using platform services, you must be aware of the following recommendations:

• NetApp recommends that you use no more than 100 active tenants with S3 requests requiring
CloudMirror replication, notifications, and search integration. Having more than 100 active
tenants can result in slower S3 client performance.

• If an S3 bucket in the StorageGRID Webscale system has both versioning and CloudMirror
replication enabled, NetApp recommends that the destination endpoint also have S3 bucket
versioning enabled. This allows CloudMirror replication to generate similar object versions on the
endpoint.

Related information
Using tenant accounts

Networking and ports for platform services


If you allow an S3 tenant to use platform services, you must configure networking for the grid to
ensure that platform services messages can be delivered to their destinations.
You can enable platform services for an S3 tenant account when you create or update the tenant
account. If platform services are enabled, the tenant can create endpoints that serve as a destination
for CloudMirror replication, event notifications, or search integration messages from its S3 buckets.
These platform services messages are sent from Storage Nodes that run the ADC service to the
destination endpoints.
For example, tenants might configure the following types of destination endpoints:

• A locally-hosted Elasticsearch cluster

• A local application that supports receiving Simple Notification Service (SNS) messages

• A locally-hosted S3 bucket on the same or another instance of StorageGRID Webscale

• An external endpoint, such as an endpoint on Amazon Web Services.

To ensure that platform services messages can be delivered, you must configure the network or
networks containing the ADC Storage Nodes. You must ensure that the following ports can be used
to send platform services messages to the destination endpoints.
By default, platform services messages are sent on the following ports:

• 80: For endpoint URIs that begin with http

• 443: For endpoint URIs that begin with https

Tenants can specify a different port when they create or edit an endpoint.
Note: If a StorageGRID Webscale deployment is used as the destination for CloudMirror
replication, replication messages are received by an API Gateway Node on port 8082. Ensure that
this port is accessible through your enterprise network.

If you use a non-transparent proxy server, you must also configure platform services settings to allow
messages to be sent to external endpoints, such as an endpoint on the internet.
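Before enabling platform services for tenants, it can be useful to confirm that the networks containing the ADC Storage Nodes can actually reach the destination ports listed above. A minimal sketch using a plain TCP connection test (the host names and ports are illustrative):

    import socket

    endpoints = [
        ("elasticsearch.example.com", 9200),  # hypothetical search endpoint
        ("s3.example.com", 443),              # endpoint URI beginning with https
        ("gateway.example.com", 8082),        # CloudMirror to another StorageGRID
    ]

    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as err:
            print(f"{host}:{port} unreachable: {err}")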

Related information
Using tenant accounts

Configuring platform services


If you have enabled platform services and you are using a non-transparent proxy, you must configure
the proxy to allow platform services messages to be sent to external endpoints, such as an endpoint
on the internet. You can configure the settings for a single proxy.

Before you begin

• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.

• You must be signed in to the Grid Manager using a supported browser.

Steps

1. Select Configuration > Platform Services.


The Platform Services Settings page appears.

2. Select the protocol for the non-transparent proxy.

3. Enter the hostname or IP address of the proxy server.

4. Optionally, enter the port used to connect to the proxy server.


You can leave this field blank if you use the default port for the protocol: 80 for HTTP or 1080 for
SOCKS5.

5. Click Save.
After the proxy is saved, tenants can configure and test their endpoints.
Note: Proxy changes can take up to 10 minutes to take effect.

6. If you need to reset the proxy, click Clear.

Per-site delivery of platform services messages


All platform services operations are performed on a per-site basis.
That is, if a tenant uses a client to perform an S3 API Create operation on an object by connecting to
an API Gateway Node at Data Center Site 1, the notification about that action is triggered and sent
from Data Center Site 1.

If the client subsequently performs an S3 API Delete operation on that same object from Data Center
Site 2, the notification about the delete action is triggered and sent from Data Center Site 2.

Make sure that the networking at each site is configured such that platform services messages can be
delivered to their destinations.

Troubleshooting platform services


If a tenant has issues configuring or using platform services, you might be able to use the Grid
Manager to identify the cause of their issue.

Issues with endpoint creation


Before a tenant can use platform services, they must create one or more endpoints using the Tenant
Manager. Each endpoint represents an external destination for one platform service, such as a
StorageGRID Webscale S3 bucket, an Amazon Web Services bucket, a Simple Notification Service
topic, or an Elasticsearch cluster hosted locally or on AWS. Each endpoint includes both the location
of the external resource (expressed as a URI) and the credentials needed to access that resource.
When a tenant creates an endpoint, the StorageGRID Webscale system validates that the endpoint
exists and that it can be reached using the credentials that were specified. If endpoint validation fails,
an error message explains why endpoint validation failed. The tenant user should resolve the issue,
then try creating the endpoint again.
Note: Endpoint creation will fail if platform services are not enabled for the tenant account.

Issues with endpoint validation


Tenant users can test an existing endpoint by selecting the endpoint, and clicking Test. A success
message is displayed if the endpoint can be reached using the specified credentials.
If endpoint validation fails, an error message is displayed. To correct the error, the tenant user should
edit the endpoint and click Save to validate the changes.

Client operations fail


Some platform services issues might cause client operations on the S3 bucket to fail. For example, S3
client operations will fail if the internal Replicated State Machine (RSM) service stops, or if there are
too many platform services messages queued for delivery.
To check the status of services:

1. Select Support > Grid Topology.

2. Select site > Storage Node > SSM > Services.

Recoverable and unrecoverable endpoint errors


After endpoints have been created, platform service request errors can occur for various reasons.
Some errors are recoverable with user intervention. For example, recoverable errors might occur for
the following reasons:

• The user's credentials have been deleted or have expired.

• The destination bucket does not exist.

• The notification cannot be delivered.

If StorageGRID Webscale encounters a recoverable error, the platform request will be retried until it
succeeds.
Other errors are unrecoverable. For example, an unrecoverable error occurs if the endpoint is deleted.
If StorageGRID Webscale encounters an unrecoverable endpoint error, the Total Events (SMTT)
alarm is triggered in the Grid Manager. To view the Total Events alarm:

1. Select Nodes.

2. Select site > grid node > Events.

3. View Last Event at the top of the table.


Event messages are also listed in /var/local/log/bycasterr.log.

4. Follow the guidance provided in the SMTT alarm contents to correct the issue.

5. Click Reset event counts.

6. Notify the tenant of the objects whose platform services messages have not been delivered.

7. Instruct the tenant to trigger the failed replication or notification by updating the object's metadata
or tags.
The tenant can resubmit the existing values to avoid making unwanted changes.

Platform services messages cannot be delivered


If the destination encounters an issue that prevents it from accepting platform services messages, the
client operation on the bucket succeeds, but the platform services message is not delivered. For
example, this error might happen if credentials are updated on the destination such that StorageGRID
Webscale can no longer authenticate to the destination service.
If platform services messages cannot be delivered because of an unrecoverable error, the Total Events
(SMTT) alarm is triggered in the Grid Manager.

Slower performance for platform service requests


StorageGRID Webscale might throttle incoming S3 requests for a bucket if the rate at which the
requests are being sent exceeds the rate at which the destination endpoint can receive the requests.

Throttling only occurs when there is a backlog of requests waiting to be sent to the destination
endpoint.
The only visible effect is that the incoming S3 requests will take longer to execute. If you start to
detect significantly slower performance, you should reduce the ingest rate or use an endpoint with
higher capacity. If the backlog of requests continues to grow, client S3 operations (such as PUT
requests) will eventually fail.
CloudMirror requests are more likely to be affected by the performance of the destination endpoint
because these requests typically involve more data transfer than search integration or event
notification requests.

Managing objects through information lifecycle management

You manage objects by configuring information lifecycle management (ILM) rules and policies,
which determine how the StorageGRID Webscale system creates and distributes copies of object data
and how it manages those copies over time.
Designing and implementing an ILM policy requires careful planning and an understanding of ILM.
You must define the logic for how objects are to be filtered, copied, and distributed throughout the
system, taking into account the topology of your StorageGRID Webscale system, object protection
requirements, and available storage types.

What an information lifecycle management policy is


An information lifecycle management (ILM) policy is a set of prioritized ILM rules that determines
how the StorageGRID Webscale system manages object data over time.

What proposed, active, and historical policies are


Every StorageGRID Webscale system must have one active ILM policy. A StorageGRID Webscale
system might also have one proposed ILM policy and any number of historical policies.
When you first create an ILM policy, you create a proposed policy by selecting one or more ILM
rules and arranging them in a specific order. After you have simulated the proposed policy to confirm
its behavior, you activate it to create the active policy. Activating the proposed policy causes the
previously active policy to become a historical policy. Historical ILM policies are those ILM policies
that are no longer active. Historical ILM policies cannot be deleted.

How ILM policies evaluate objects


When an object is ingested into the StorageGRID Webscale system, its metadata is evaluated against
the first ILM rule in the active ILM policy. If the object metadata matches the filters in the first rule,
the content placement instructions for that rule distribute object data to the specified storage
locations. If the metadata does not match the filters in the first rule, the object is evaluated against
each subsequent ILM rule in the active ILM policy, until a match is made.
One ILM rule must be set as the default ILM rule for the policy. If none of the other ILM rules in the
policy matches the object metadata, the placement instructions specified by the default rule are
applied. When the StorageGRID Webscale system is first installed, the stock ILM rule, Make 2
Copies, is the default ILM rule.
This figure illustrates how each object is evaluated by the ILM rules in the active ILM policy.
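The first-match evaluation described above can be summarized in a short conceptual sketch (not the actual ILM engine; the Rule type and the example filters are invented for illustration, and the default rule is assumed to be ordered last because it matches every object):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        matches: Callable[[dict], bool]  # filter over object metadata
        placement: str                   # placement instructions, simplified

    def evaluate(metadata, policy):
        """Check rules in priority order; the first match wins."""
        for rule in policy:
            if rule.matches(metadata):
                return rule.placement
        raise AssertionError("the policy must end with a default rule")

    policy = [
        Rule("Large objects", lambda m: m.get("size", 0) > 10**9,
             "erasure-code 6+3 across sites"),
        Rule("Make 2 Copies (default)", lambda m: True,
             "two replicated copies"),
    ]
    print(evaluate({"size": 5 * 10**9}, policy))  # erasure-code 6+3 across sites
    print(evaluate({"size": 1024}, policy))       # two replicated copies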

ILM policy example


The following figure illustrates an ILM policy that includes ILM rules that specify the following:

1. When an object is ingested, one copy is placed on disk (Storage Node) at Data Center 1 (DC1),
one copy is placed on disk at Data Center 2 (DC2), and one copy is placed on archival media
(Archive Node) at DC2.

2. At the end of one year, the copy on disk at DC2 is deleted.



What dual commit is


Dual commit is the system's default functionality when an object is ingested. Dual commit is
designed to prevent the loss of object data if an object's initial storage location fails before the object
can be evaluated against the active ILM policy.
As soon as an object is ingested, a second copy of that object is created and distributed to a different
Storage Node. When the object is matched by an ILM rule in the active policy, StorageGRID
Webscale determines if the initial, dual-commit copies satisfy the placement instructions in the rule.
If they do not, the object is added to the ILM evaluation queue. When the object is re-evaluated, new
object copies might need to be made in different locations, and the initial dual-commit copies might
need to be deleted.
If the request to create the dual-commit copies fails (for example, a network issue prevents the
second initial copy from being made), the StorageGRID Webscale system does not retry and ingest
fails.
Dual commit is enabled by default. If ILM rules are configured to store only one instance of
replicated object data, you can specify a single-commit ingest operation to avoid unnecessarily
creating and then deleting copies generated by the dual-commit ingest. For configuration
information, see the instructions for implementing S3 or Swift client applications.
Note: You cannot specify a single-commit ingest operation (that is, you cannot use
REDUCED_REDUNDANCY) for S3 buckets that have compliance enabled. This is to ensure that
compliance requirements are satisfied (two copies of each object exist) before the object is
evaluated against the active ILM policy.
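For S3 clients, the single-commit ingest mentioned above is requested per object through the REDUCED_REDUNDANCY storage class. A minimal sketch using boto3 (the endpoint URL, bucket, and credentials are placeholders; see the instructions for implementing S3 client applications for the authoritative configuration details):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://gateway.example.com:8082",  # hypothetical gateway
        aws_access_key_id="tenant-access-key",
        aws_secret_access_key="tenant-secret-key",
    )

    # REDUCED_REDUNDANCY requests single commit: one initial copy is made
    # instead of the default dual-commit pair.
    s3.put_object(
        Bucket="my-bucket",
        Key="example-object",
        Body=b"object data",
        StorageClass="REDUCED_REDUNDANCY",
    )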

Related tasks
Managing S3 buckets and objects for compliance on page 162

Related information
Implementing S3 client applications
Implementing Swift client applications

What an information lifecycle management rule is


An information lifecycle management (ILM) rule determines how the StorageGRID Webscale system
stores object data over time. You configure ILM rules and then add them to an ILM policy.
ILM rules determine:

• Where an object’s data is stored and the type of storage used (storage grades and storage pools)

• The number and type of copies made (replication or erasure coding)

• How specific objects are managed (object filtering)

• How the object’s data is managed over time, where it is stored, and how it is protected from loss
(placement instructions)

Object metadata is not managed by ILM rules. Instead, object metadata is stored in a Cassandra
database in what is known as a metadata store. Three copies of object metadata are automatically
maintained at each site to protect the data from loss. The copies are load balanced across all Storage
Nodes.

What replication and erasure coding are


StorageGRID Webscale provides two mechanisms for protecting object data from loss: replication
and erasure coding.

What replication is
Replication is one of two mechanisms used by StorageGRID Webscale to store object data. When
StorageGRID Webscale matches objects to an ILM rule that is configured to create replicated copies,
the system creates exact copies of object data and stores the copies on Storage Nodes or Archive
Nodes.
When you configure an ILM rule to create replicated copies, you specify how many copies should be
created, where those copies should be placed, and how long the copies should be stored at each
location.
In the following example, the ILM rule specifies that two replicated copies of each object be placed
in a storage pool that contains three Storage Nodes.

When StorageGRID Webscale matches objects to this rule, it creates two copies of the object, placing
each copy on a different Storage Node in the storage pool. The two copies might be placed on any
two of the three available Storage Nodes. In this case, the rule placed object copies on Storage Nodes
2 and 3. Because there are two copies, the object can still be retrieved if any single node in the
storage pool fails.

Related concepts
Using multiple storage pools for cross-site replication on page 116

What erasure coding is


Erasure coding is the second method used by StorageGRID Webscale to store object data. When
StorageGRID Webscale matches objects to an ILM rule that is configured to create erasure-coded
copies, it slices object data into data fragments, computes additional parity fragments, and stores each
fragment on a different Storage Node. When an object is accessed, it is reassembled using the stored
fragments. If a data or a parity fragment becomes corrupt or lost, the erasure-coding algorithm can
recreate that fragment using a subset of the remaining data and parity fragments.
The figure illustrates the use of an erasure-coding algorithm on an object’s data. In this example, the
ILM rule uses a 6+3 erasure coding scheme. Each object is sliced into six equal data fragments, and
three parity fragments are computed from the object data. Each of the nine fragments is stored on a
different node across multiple sites to provide data protection for node failures or site loss.

In the example, the object can be retrieved using any six of the nine fragments. Up to three fragments
can be lost without loss of the object data. If an entire data center site is lost, the object can still be
retrieved or repaired, as long as all of the other fragments remain accessible.

Related concepts
What erasure coding schemes are on page 110

Related tasks
Configuring an Erasure Coding profile on page 119

What erasure coding schemes are


When you configure the Erasure Coding profile for an ILM rule, you select an available erasure
coding scheme. Erasure coding schemes control how many data fragments and how many parity
fragments are created for each object. The erasure coding schemes that are available depend on how
many Storage Nodes and sites make up the storage pool you plan to use.
The StorageGRID Webscale system uses the Reed-Solomon erasure-coding algorithm. The algorithm
slices an object into k data fragments and computes m parity fragments. The k + m = n fragments are
spread across n Storage Nodes to provide data protection. An object can sustain up to m lost or
corrupt fragments. k fragments are needed to retrieve or repair an object.
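
The arithmetic behind these schemes is straightforward. The following illustrative Python sketch
(not StorageGRID code) computes the fragment counts, fault tolerance, and parity overhead for a
given k+m scheme:

# Illustrative sketch of the k + m arithmetic described above (not StorageGRID code).
def ec_scheme_summary(k, m):
    return {
        'total_fragments': k + m,   # n fragments, each stored on a different Storage Node
        'fault_tolerance': m,       # fragments that can be lost or corrupt
        'fragments_needed': k,      # fragments required to retrieve or repair an object
        'storage_overhead_pct': 100 * m / k,  # parity fragments / data fragments
    }

# A 6+3 scheme: 9 fragments, tolerates 3 lost fragments, 50% storage overhead,
# so a 10 MB object consumes 15 MB of disk space.
print(ec_scheme_summary(6, 3))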
You can use erasure coding with a single-site deployment or with a multiple-site deployment. Erasure
coding is well suited for single-site deployments that require efficient data protection with only a
single erasure-coded copy rather than multiple replicated copies.
The table lists the erasure coding schemes currently supported by StorageGRID Webscale for
deployments with three, four, or five sites. It specifies the recommended number of sites and Storage
Nodes for each scheme. The supported erasure coding schemes are designed to provide site loss
protection. Up to one entire site can be lost, and the object can still be accessible.

Erasure coding    Number of        Recommended number   Total recommended   Site loss
scheme (k + m)    deployed sites   of Storage Nodes     number of           protection?
                                   at each site         Storage Nodes
2+1               3                3                    9                   Yes
4+2               3                3                    9                   Yes
6+3               3                4                    12                  Yes
9+3               4                4                    16                  Yes
6+2               4                3                    12                  Yes
8+2               5                3*                   15                  Yes

* At minimum, each site requires three Storage Nodes.

Additional erasure coding schemes are available. Contact your account manager.

The following example shows the 6+3 erasure coding scheme, which splits each object into six data
fragments and adds three parity fragments. This erasure coding scheme requires a minimum of nine
Storage Nodes, with three Storage Nodes at each of three different sites. An object can be retrieved as
long as any six (k) of the nine fragments (data or parity) remain available.

If more than three (m) Storage Nodes are lost, the object is not retrievable.

Advantages, disadvantages, and requirements for erasure coding


Before deciding whether to use replication or erasure coding to protect object data from loss, you
should understand the advantages, disadvantages, and the requirements for erasure coding.

Advantages of erasure coding


When compared to replication, erasure coding offers improved reliability, availability, and storage
efficiency.

• Reliability: Reliability is gauged in terms of fault tolerance—that is, the number of simultaneous
failures that can be sustained without loss of data. With replication, multiple identical copies are
stored on different nodes and across sites. With erasure coding, an object is encoded into data and
parity fragments and distributed across many nodes and sites. This dispersal provides both site
and node failure protection. When compared to replication, erasure coding provides improved
reliability at comparable storage costs.

• Availability: Availability can be defined as the ability to retrieve objects if Storage Nodes fail or
become inaccessible. When compared to replication, erasure coding provides increased
availability at comparable storage costs.

• Storage efficiency: For similar levels of availability and reliability, objects protected through
erasure coding consume less disk space than the same objects would if protected through
replication. For example, a 10 MB object that is replicated to two sites consumes 20 MB of disk
space (two copies), while an object that is erasure coded across three sites with a 6+3 erasure
coding scheme only consumes 15 MB of disk space.
Note: Disk space for erasure coded objects is calculated as the object size plus the storage
overhead. The storage overhead percentage is the number of parity fragments divided by the
number of data fragments.

Disadvantages of erasure coding


When compared to replication, erasure coding has the following disadvantages:

• Increased number of Storage Nodes and sites required. For example, if you use an erasure coding
scheme of 6+3, you must have at least three Storage Nodes at three different sites. In contrast, if
you simply replicate object data, you require only one Storage Node for each copy.
• Increased retrieval latencies when you use erasure coding across geographically distributed sites.
The object fragments for an object that is erasure coded and distributed across remote sites take
longer to retrieve over WAN connections than an object that is replicated and available locally
(the same site to which the client connects).

• Higher WAN network traffic for retrievals and repairs when you use erasure coding across
geographically distributed sites, especially for frequently retrieved objects or for repairs
performed over WAN connections.

• Higher usage of compute resources.

Requirements for erasure coding


Erasure coding is best suited for the following requirements:

• Objects larger than 1 MB in size.


Attention: Due to the overhead of managing the number of fragments associated with an
erasure coded copy, do not use erasure coding for objects smaller than 200 KB.

• Long-term or cold storage for infrequently retrieved content.



• High data availability and reliability.

• Protection against complete site and node failures.

• Storage efficiency.

• Single-site deployments that require efficient data protection with only a single erasure-coded
copy rather than multiple replicated copies.

Configuring ILM rules


Before you can create and activate the ILM policy for your StorageGRID Webscale system, you must
define object storage locations, determine the types of copies you want, optionally configure S3
regions, and create one or more rules.

About this task


When configuring ILM rules:

• Consider the StorageGRID Webscale system’s topology and storage configurations.

• Consider what types of object copies you want to make (replicated or erasure coded) and the
number of copies of each object that are required.

• Determine what types of object metadata are used in the applications that connect to the
StorageGRID Webscale system. ILM rules filter objects based on their metadata.

• Consider where you want object copies to be placed over time.

Steps
1. Creating and assigning storage grades on page 113
2. Configuring storage pools on page 115
3. Configuring an Erasure Coding profile on page 119
4. Configuring regions (optional and S3 only) on page 122
5. Creating an ILM rule on page 123

Creating and assigning storage grades


A storage grade identifies the type of storage used by a Storage Node to store object data, for
example, flash or spinning disk. If you use more than one type of storage, you can optionally create a
storage grade to identify each type. Creating storage grades makes it easier to select the Storage Node
when configuring storage pools.

Before you begin


If storage grade is not a concern (for example, your StorageGRID Webscale system only uses
spinning disks), skip this procedure and instead use the system-generated storage grade “All Disks”
when configuring a storage pool.

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


When creating storage grades, do not create more storage grades than necessary. For example, do not
create one storage grade for each Storage Node. Instead, assign each storage grade to two or more
nodes. Storage grades assigned to only one node can cause ILM backlogs if that node becomes
unavailable.
Note: You cannot configure storage grades for Archive Nodes.

Steps

1. Select ILM > Storage Grades.

2. Create a storage grade:

a. For each storage grade you need to define, click Insert to add a row and enter a label for
the storage grade.
The Default storage grade cannot be modified. It is reserved for new LDR services added
during a StorageGRID Webscale system expansion.

b. To edit an existing storage grade, click Edit and modify the label as required.
Note: You cannot delete storage grades.

c. Click Apply Changes.


These storage grades are now available for assignment to LDR services.

3. Assign a storage grade to an LDR service:

a. For each Storage Node's LDR service, click Edit and select a storage grade from the list.

Attention: Assign a storage grade to a given Storage Node only once. A Storage Node
recovered from failure maintains the previously assigned storage grade. Do not change this
assignment once the ILM policy is activated. If the assignment is changed, data is stored
based on the new storage grade.

b. Click Apply Changes.

Configuring storage pools


A storage pool is a logical grouping of Storage Nodes or Archive Nodes. You configure storage pools
to determine where the StorageGRID Webscale system stores object data and the type of storage
used.
Storage pools have two attributes:

• Storage grade: For Storage Nodes, the relative performance of backing storage.

• Site: The data center where objects will be stored.

Storage pools are used in ILM rules to determine where object data is stored. When you configure
ILM rules for replication, you select one or more storage pools that include either Storage Nodes or
Archive Nodes. When you create Erasure Coding profiles, you select a storage pool that includes
Storage Nodes.

Steps
1. Guidelines for creating storage pools on page 115
2. Using multiple storage pools for cross-site replication on page 116
3. Configuring storage pools on page 118

Guidelines for creating storage pools


When configuring and using storage pools, follow these guidelines.

Guidelines for all storage pools


• Keep storage pool configurations as simple as possible. Do not create more storage pools than
necessary.

• Create storage pools with as many nodes as possible. Each storage pool should contain two or
more nodes. A storage pool with insufficient nodes can cause ILM backlogs if that node becomes
unavailable.

• Avoid creating or using storage pools that overlap (contain one or more of the same nodes). If
storage pools overlap, more than one copy of object data might be saved on the same node.

Guidelines for storage pools used for archived copies


• You cannot create a storage pool that includes both Storage Nodes and Archive Nodes. Archived
copies require a storage pool that includes only Archive Nodes.

• When using a storage pool that includes Archive Nodes, you should also maintain at least one
replicated or erasure-coded copy on a storage pool that includes Storage Nodes.
• If the global Compliance setting is enabled and you are creating a compliant ILM rule, you
cannot use a storage pool that includes Archive Nodes. See “Managing S3 buckets and objects for
compliance.”

• If an Archive Node's Target Type is Cloud Tiering - Simple Storage Service (S3), the Archive
Node must be in its own storage pool. See “Configuring connection settings for S3 API” for
information.

Guidelines for storage pools used for replicated copies


• If your StorageGRID Webscale system includes more than one data center site, create a storage
pool for each site. You can use multiple storage pools to protect replicated copies if a site fails.

Guidelines for storage pools used for erasure-coded copies


• You cannot use Archive Nodes for erasure-coded data.

• Note that the number of Storage Nodes and sites contained in the storage pool determine which
erasure coding schemes are available.

• If possible, a storage pool should include more than the minimum number of Storage Nodes
required for the erasure coding scheme you select. For example, if you use a 6+3 erasure coding
scheme, you must have at least nine Storage Nodes. However, having one additional Storage
Node per site is recommended.

• Distribute Storage Nodes across sites as evenly as possible. For example, to support a 4+2 erasure
coding scheme, configure a storage pool that includes three Storage Nodes at each of three sites.

Related concepts
What replication is on page 109
What erasure coding is on page 109
What erasure coding schemes are on page 110
Using multiple storage pools for cross-site replication on page 116

Related tasks
Managing S3 buckets and objects for compliance on page 162
Configuring connection settings for S3 API on page 205

Using multiple storage pools for cross-site replication


If your StorageGRID Webscale deployment includes more than one site, you can enable site-loss
protection by creating a storage pool for each site and specifying both storage pools in the rule's
placement instructions. For example, if you configure an ILM rule to make two replicated copies and
specify storage pools at two sites, one copy of each object will be placed at each site. If you
configure a rule to make two copies and specify three storage pools, the copies are distributed to keep
disk usage among the storage pools balanced, while ensuring that the two copies are stored at
different sites.
The following example illustrates what can happen if an ILM rule places replicated object copies to a
single storage pool containing Storage Nodes from two sites. Because the system uses any available
nodes in the storage pool when it places the replicated copies, it might place all copies of some
objects within only one of the sites. In this example, the system stored two copies of object AAA on
Storage Nodes at Site 1, and two copies of object CCC on Storage Nodes at Site 2. Only object BBB
is protected if one of the sites fails or becomes inaccessible.

In contrast, this example illustrates how objects are stored when you use multiple storage pools. In
the example, the ILM rule specifies that two replicated copies of each object be created, and that the
copies be distributed to two storage pools. Each storage pool contains all Storage Nodes at one site.
Because a copy of each object is stored at each site, object data is protected from site failure or
inaccessibility.

When using multiple storage pools, keep the following rules in mind:

• If you are creating n copies, you must add n or more storage pools. For example, if a rule is
configured to make three copies, you must specify three or more storage pools.

• If the number of copies equals the number of storage pools, one copy of the object is stored in
each storage pool.

• If the number of copies is less than the number of storage pools, the system distributes the copies
to keep disk usage among the pools balanced and to ensure that two or more copies are not stored
in the same storage pool.

• If the storage pools overlap (contain the same Storage Nodes), all copies of the object might be
saved at only one site. You must ensure that the selected storage pools do not contain the same
Storage Nodes.

Configuring storage pools


A storage pool is a logical grouping of Storage Nodes or Archive Nodes. You configure storage pools
to determine where object data is stored. You select storage pools that use Storage Nodes when you
create Erasure Coding profiles. You select storage pools that use Storage Nodes or Archive Nodes
when you configure ILM rules for replication.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.


• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select ILM > Storage Pools.

2. Create a storage pool.

a. Click Insert at the end of the row for the last storage pool.

b. Enter a representative name for the storage pool.


Use a name that will be easy to identify when you configure Erasure Coding profiles and ILM
rules.

c. From the Storage Grade drop-down list, set the type of storage to which object data will be
copied if an ILM rule uses this storage pool.
The values All Disks and Archive Nodes are system-generated.

d. From the Site drop-down list, set the site to which object data will be copied if an ILM rule
uses this storage pool.
The value All Sites is system-generated.
When you select a Site, the number of grid nodes and storage capacity information (Installed,
Used, and Available) are automatically updated. Make sure that storage pools have sufficient
storage and Storage Nodes to support planned ILM rules and the types of copies that will be
made.

e. To add another storage grade/site combination to the storage pool, click Insert next to
Site.
You cannot include both LDR and ARC services in the same storage pool. A storage pool
includes either disks or archive media, but not both.

f. To remove a storage grade/site combination, click Delete next to Site.

3. To delete a storage pool, click Delete next to the storage pool name. You cannot delete a
storage pool that is used in a saved ILM rule.

4. Click Apply Changes.


Note: Changes made to a storage pool that is currently in use by an ILM policy do not take
effect until the ILM policy is reactivated.

Related concepts
Guidelines for creating storage pools on page 115

Configuring an Erasure Coding profile


You create an Erasure Coding profile by associating a storage pool with an erasure coding scheme,
such as 6+3. Then, when you configure the placement instructions for an ILM rule, you can select the
Erasure Coding profile. If an object matches the rule, data and parity fragments are created and
distributed to the storage locations in the storage pool according to the erasure coding scheme.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You must have created a storage pool that includes exactly one site or a storage pool that includes
three or more sites.

About this task


To create an Erasure Coding profile, you associate a storage pool containing Storage Nodes with an
erasure coding scheme. This association determines the number of data and parity fragments created
and where the system distributes these fragments. The storage pools used in Erasure Coding profiles
must include exactly one site or three or more sites. If you want to provide site redundancy, the
storage pool must have at least three sites. See “What erasure coding schemes are” for more
information.
Note: You must select a storage pool that contains Storage Nodes. You cannot use Archive Nodes
for erasure-coded data.

After you save the Erasure Coding profile, you can change its name, but you cannot select a different
storage pool or erasure coding scheme. You also cannot delete the profile.

Steps

1. Select ILM > Erasure Coding.


The Erasure Coding Profiles page appears.

2. Click Create.
The Create EC Profile dialog box appears. By default, the Storage Pool field shows All Storage
Nodes and lists any available erasure coding schemes, based on the total number of Storage
Nodes and sites available in your StorageGRID Webscale system.

3. Enter a name for the Erasure Coding profile.


This name is automatically appended to the storage pool name when you configure ILM rules for
erasure coding. Enter a short, but meaningful name.

4. Select a storage pool and erasure coding scheme.


When you select a storage pool, the list of available erasure coding schemes is updated to reflect
the number of Storage Nodes and sites in that pool. The following information is listed for each
available scheme:

• Erasure Code: The name of the erasure coding scheme in the following format: data
fragments + parity fragments.

• Storage Overhead (%): The additional storage required for parity fragments relative to the
object's data size. Storage Overhead = Total number of parity fragments / Total number of data
fragments.

• Storage Node Redundancy: The number of Storage Nodes that can be lost while still
maintaining the ability to retrieve object data.

• Site Redundancy: Whether the selected erasure code allows the object data to be retrieved if
a site is lost.
To support site redundancy, the selected storage pool must include multiple sites, each with
enough Storage Nodes to allow any site to be lost. For example, to support site redundancy
using a 6+3 erasure coding scheme, the selected storage pool must include at least three sites
with at least three Storage Nodes at each site.

Messages are displayed in these cases:

• The storage pool you selected does not provide site redundancy. The following message is
expected when the selected storage pool includes only one site. You can use this Erasure
Coding profile in ILM rules to protect against node failures.

• The storage pool you selected does not satisfy the requirements for any erasure coding
scheme. For example, the following message is expected when the selected storage pool
includes only two sites. If you want to use erasure coding to protect object data, you must
select a storage pool with exactly one site or a storage pool with three or more sites.

5. If more than one erasure coding scheme is listed, select the one you want to use.
When deciding which erasure coding scheme to use, you should balance fault tolerance (achieved
by having more parity segments) against the network traffic requirements for repairs (more
fragments equals more network traffic). For example, when deciding between a 4+2 scheme and
6+3 scheme, select the 6+3 scheme if additional parity and fault tolerance are required. Select the
4+2 scheme if network resources are constrained to reduce network usage during node repairs.

6. Click Save.

Related concepts
What erasure coding schemes are on page 110
Configuring storage pools on page 115
What erasure coding is on page 109

Related tasks
Creating an ILM rule on page 123

Configuring regions (optional and S3 only)


ILM rules can filter objects based on the regions where S3 buckets are created, allowing you to store
objects from different regions in different storage locations. If you want to use an S3 bucket region as
a filter in a rule, you must first create the regions that can be used by the buckets in your system.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


When creating an S3 bucket, you can specify that the bucket be created in a specific region.
Specifying a region allows the bucket to be geographically close to its users, which can help optimize
latency, minimize costs, and address regulatory requirements.
When you create an ILM rule, you might want to use the region associated with an S3 bucket as an
advanced filter. For example, you can design a rule that applies only to objects in S3 buckets created
in the us-west-2 region. You can then specify that copies of those objects be placed on Storage Nodes
at a data center site within that region to optimize latency.
When configuring regions, follow these guidelines:

• By default, all buckets are considered to belong to the us-east-1 region.

• You must create the regions using the Grid Manager before you can specify a non-default region
when creating buckets using the Tenant Manager or Tenant Management API or with the
LocationConstraint request element for S3 PUT Bucket API requests (see the sketch after this
list). An error occurs if a PUT Bucket request uses a region that has not been defined in
StorageGRID Webscale.

• You must use the exact region name when you create the S3 bucket. Region names are case
sensitive and must contain between 2 and 32 characters. Valid characters are numbers, letters, and
hyphens.
Note: EU is not considered to be an alias for eu-west-1. If you want to use the EU or eu-west-1
region, you must use the exact name.

• You cannot delete or modify a region if it is currently used within the active ILM policy or the
proposed ILM policy.

• If the region used as the advanced filter in an ILM rule is invalid, it is still possible to add that rule
to the proposed policy. However, an error occurs if you attempt to save or activate the proposed
policy. (An invalid region can result if you use a region as an advanced filter in an ILM rule but
you later delete that region, or if you use the Grid Management API to create a rule and specify a
region that you have not defined.)

• If you delete a region after using it to create an S3 bucket, you will need to re-add the region if
you ever want to use the Location Constraint advanced filter to find objects in that bucket.
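
For example, the following minimal sketch uses the AWS SDK for Python (boto3) to create a
bucket with a LocationConstraint. The endpoint URL and bucket name are placeholders, and the
us-west-2 region is assumed to have already been defined on the ILM > Regions page.

# Minimal sketch: creating an S3 bucket in a non-default region.
# The endpoint and bucket name are placeholders; the region must already
# be defined in StorageGRID Webscale or the PUT Bucket request fails.
import boto3

s3 = boto3.client('s3', endpoint_url='https://gateway.example.com:8082')  # hypothetical endpoint

s3.create_bucket(
    Bucket='my-bucket',
    CreateBucketConfiguration={'LocationConstraint': 'us-west-2'},
)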

Steps

1. Select ILM > Regions.

The Regions page appears, with the currently defined regions listed. Region 1 shows the default
region, us-east-1, which cannot be modified or removed.

2. To add a region:

a. Click the insert icon to the right of the last entry.


b. Enter the name of a region that you want to use when creating S3 buckets.
You must use this exact region name as the LocationConstraint request element when you
create the corresponding S3 bucket.

3. To remove an unused region, click the delete icon.


An error message appears if you attempt to remove a region that is currently used in the active
policy or the proposed policy.

4. When you are done making changes, click Save.


You can now select these regions from the Location Constraint list on the Advanced Filtering
page of the ILM rule wizard.

Related concepts
Using advanced filters in ILM rules on page 128

Creating an ILM rule


ILM rules allow you to manage the placement of object data over time. To create an ILM rule, you
use the Create ILM Rule wizard.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• If you plan to use last access time metadata, Last Access Time updates must be enabled by bucket
for S3 or by container for Swift.

• If you are creating erasure-coded copies, you have configured an Erasure Coding profile.

About this task


When StorageGRID Webscale evaluates objects against an ILM rule, it looks first at the rule's basic
and advanced filtering criteria. If an object matches the filtering criteria, the object is copied and
placed according to the rule's placement instructions. Placement instructions determine where, when,
and how object data is stored. If the rule includes more than one placement instruction, when a set
time expires, the content placement instructions for the next time period are applied to objects at the
next ILM evaluation time.

Steps

1. Select ILM > Rules.


The ILM Rules page appears, with the stock rule, Make 2 Copies, selected.

Note: If the global Compliance setting has been enabled for the StorageGRID Webscale
system, the ILM Rules page indicates which ILM rules are compliant. The summary table
includes a Compliant column, and the details for the selected rule include a Compliance
Compatible field. See “Managing S3 buckets and objects for compliance” for more
information.

2. Click Create.
Step 1 of the Create ILM Rule wizard appears.

3. Complete Step 1 of the Create ILM Rule wizard.

a. Enter a unique name for the rule in the Name field.


You must enter between 1 and 64 characters.

b. Optionally, enter a short description for the rule in the Description field.
You should describe the rule's purpose or function so you can recognize the rule later.

c. From the Tenant Account drop-down list, optionally select the S3 or Swift tenant account to
which this rule applies. If this rule applies to all tenants, select Ignore (default).

d. Use the Bucket Name field to specify the S3 buckets or Swift containers to which this rule
applies.
If matches all is selected (default), the rule applies to all S3 buckets or Swift containers.

e. Optionally, click Advanced filtering, and specify additional filtering criteria.


If you do not configure advanced filtering, the rule applies to all objects that match the
currently configured criteria (tenant account and bucket).
See “Using advanced filters in ILM rules” for information about the types of metadata,
operators, and metadata values you can specify.

4. Click Next.
Step 2 of the wizard appears.

5. For Reference Time, select the time used to calculate the start time for a placement instruction.

Option                       Description
Ingest Time                  The time when the object was ingested.
Last Access Time             The time when the object was last retrieved (read or viewed).
                             Note: To use this option, updates to Last Access Time must be
                             enabled for the S3 bucket or Swift container.
Noncurrent Time              The time an object version became noncurrent because a new
                             version was ingested and replaced it as the current version.
                             Note: The Noncurrent Time applies only to S3 objects in
                             versioning-enabled buckets.
                             You can use this option to reduce the storage impact of versioned
                             objects by filtering for noncurrent object versions updated with a
                             new current version or delete marker.
User Defined Creation Time   A time specified in user-defined metadata.

Note: If you want to create a compliant rule, you must select Ingest Time. See “Managing S3
buckets and objects for compliance.”

6. In the Placements section, select a starting time and a duration for the first time period.
For example, you might want to specify where to store objects for the first year (“day 0 for 365
days”). At least one instruction must start at day 0.

7. If you want to create replicated copies:

a. Select the number of copies you want to make.


Note: In general, you should not configure an ILM rule to create only one replicated copy.
If the only replicated copy is lost or corrupted, data will be lost.

b. Select replicated from the copies as drop-down list.

c. Optionally, click Add Pool and select one or more storage pools.
If you are specifying more than one storage pool, keep these rules in mind:

• If you are specifying more than one storage pool and creating n copies, you must add n or
more pools. For example, if a rule is configured to make three copies, you must specify
three or more storage pools.

• If the number of copies equals the number of storage pools, one copy of the object is
stored in each storage pool.

• If the number of copies is less than the number of storage pools, the system distributes the
copies to keep disk usage among the pools balanced, while ensuring that no storage pool
receives more than one copy of the object.

• If the storage pools overlap (contain the same Storage Nodes), all copies of the object
might be saved at only one site. For this reason, do not specify the default storage pool
(All Storage Nodes) and another storage pool.

d. If you are using only a single storage pool, specify a temporary storage pool.

Specifying a temporary storage pool is optional, but recommended. If the preferred storage
pool is unavailable, a copy is made in the temporary storage pool. As soon as the preferred
storage pool becomes available, a copy is made in the preferred storage pool, and the copy in
the temporary storage pool is deleted.
Attention: Failing to specify a temporary storage pool puts object data at risk if the
preferred pool is unavailable.

8. If you want to create an erasure-coded copy:

a. Select erasure coded from the copies as drop-down list.


The number of copies changes to 1.

b. Select the storage location.


The storage locations for an erasure-coded copy include the name of the storage pool,
followed by the name of the Erasure Coding profile.

9. Optionally, click the plus icons to add placement instructions that specify different time periods or
create additional copies at different locations.

10. Click Refresh to update the Retention Diagram and to confirm your placement instructions.
Each line in the diagram represents a placement instruction and shows where and when object
copies will be placed. The type of copy is represented by one of the following icons:

• One or more replicated copies

• Erasure coded copy

In this example, replicated copies will be saved to three storage pools (DC1, DC2, and DC3) for
one year. Then, an erasure-coded copy will be saved forever, using the Erasure Coding profile
associated with the All Storage Nodes storage pool.

Attention: When adding a rule that makes an erasure-coded copy to the ILM policy, you must
ensure that the policy has at least one rule that filters by Object Size. Due to the overhead of
managing the number of fragments associated with an erasure coded copy, do not erasure code
objects smaller than 200 KB.

11. Click Save.


The ILM rule is saved. The rule does not become active until it is added to an ILM policy and that
policy is activated.

Related concepts
Using advanced filters in ILM rules on page 128
Configuring storage pools on page 115
Creating, simulating, and activating an ILM policy on page 133

Related tasks
Using Ingest Time or Last Access Time in ILM rules on page 132
Configuring an Erasure Coding profile on page 119
Managing S3 buckets and objects for compliance on page 162

Using advanced filters in ILM rules


Advanced filtering allows you to create ILM rules that apply only to specific objects based on their
metadata. When you set up advanced filtering for a rule, you select the type of metadata you want to
match, select an operator, and specify a metadata value. When objects are evaluated, the ILM rule is
applied only to those objects that have metadata matching the advanced filter.
The table shows the types of metadata you can specify in advanced filters, the operators you can use
for each type of metadata, and the metadata values expected.

Ingest Time (microseconds)
• Supported operators: equals, does not equal, less than, less than or equals, greater than,
  greater than or equals
• Metadata value: Time and date the object was ingested, in microseconds since Unix Epoch.
  See “Using Ingest Time or Last Access Time in ILM rules” for more information on how to
  calculate this value.
• Object type: S3 and Swift

Key
• Supported operators: equals, does not equal, contains, does not contain, starts with,
  does not start with, ends with, does not end with
• Metadata value: All or part of a unique S3 or Swift object key. For example, you might want
  to match objects that end with .txt or start with test-object/.
• Object type: S3 and Swift

Last Access Time (microseconds)
• Supported operators: equals, does not equal, less than, less than or equals, greater than,
  greater than or equals, exists, does not exist
• Metadata value: Time and date the object was last retrieved (read or viewed), in
  microseconds since Unix Epoch. See “Using Ingest Time or Last Access Time in ILM rules” for
  more information on how to calculate this value.
  Note: If you plan to use last access time as an advanced filter, Last Access Time updates
  must be enabled for the S3 bucket or Swift container.
• Object type: S3 and Swift

Location Constraint (S3 only)
• Supported operators: equals, does not equal
• Metadata value: The region where an S3 bucket was created. Use ILM > Regions to define the
  regions that are shown.
  Note: A value of us-east-1 will match objects in buckets created in the us-east-1 region as
  well as objects in buckets that have no region specified. See “Configuring regions” for more
  information.
• Object type: S3 only

Object Size (MB)
• Supported operators: equals, not equals, less than, less than or equals, greater than,
  greater than or equals
• Metadata value: The object's size in MB. To filter on object sizes smaller than 1 MB, type
  in a decimal value. For example, type 0.2 (200 KB) as the advanced filter for a rule that
  ensures that erasure coding is not used for objects 200 KB or smaller.
• Object type: S3 and Swift

User Metadata
• Supported operators: contains, does not contain, starts with, does not start with, ends
  with, does not end with, equals, does not equal, exists, does not exist
• Metadata value: Key-value pair. For example, to filter on objects that have user metadata of
  color=blue, specify color for User Metadata Name, equals for the operator, and blue for
  User Metadata Value.
  Note: User-metadata names are not case sensitive; user-metadata values are case sensitive.
• Object type: S3 and Swift

Object Tag (S3 only)
• Supported operators: contains, does not contain, starts with, does not start with, ends
  with, does not end with, equals, does not equal, exists, does not exist
• Metadata value: Key-value pair. For example, to filter on objects that have an object tag of
  Image=True, specify Image for Object Tag Name, equals for the operator, and True for
  Object Tag Value.
  Note: Object tag names and object tag values are case sensitive. You must enter these items
  exactly as they were defined for the object.
• Object type: S3 only

Specifying multiple metadata types and values


When you define advanced filtering, you can specify multiple types of metadata and multiple
metadata values. For example, if you want a rule to match objects between 10 MB and 100 MB in
size, you would select the Object Size metadata type and specify two metadata values.

• The first metadata value specifies objects greater than or equal to 10 MB



• The second metadata value specifies objects less than or equal to 100 MB

Using multiple entries allows you to have precise control over which objects are matched. In the
following example, the rule applies to objects that have Brand A or Brand B as the value of the
camera_type user metadata. However, the rule only applies to those Brand B objects that are smaller
than 10 MB.
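
As a point of reference, the following minimal sketch shows how an S3 client might ingest an
object whose user metadata and object tag would match advanced filters like those described
above. It uses the AWS SDK for Python (boto3); the endpoint, bucket, and key are placeholders.

# Minimal sketch: ingesting an object with user metadata and an object tag
# that advanced filters could match. Endpoint, bucket, and key are placeholders.
import boto3

s3 = boto3.client('s3', endpoint_url='https://gateway.example.com:8082')  # hypothetical endpoint

s3.put_object(
    Bucket='photos',
    Key='test-object/example.png',
    Body=b'...image bytes...',
    Metadata={'camera_type': 'Brand A'},  # matched by a User Metadata filter
    Tagging='Image=True',                 # matched by an Object Tag filter (S3 only)
)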

Related tasks
Using Ingest Time or Last Access Time in ILM rules on page 132
Configuring regions (optional and S3 only) on page 122

Related information
Implementing S3 client applications
Implementing Swift client applications

Using Ingest Time or Last Access Time in ILM rules


You can use advanced filtering if you want an ILM rule to apply only to objects that were ingested or
last accessed on a specific date. You can also use Ingest Time or Last Access Time as the reference
time in an ILM rule. For example, you might want to leave objects that have been viewed in the past
month on local Storage Nodes, while moving objects that have not been viewed as recently to an off-
site location.

About this task


When using Ingest Time or Last Access Time as an advanced filter, you must convert the desired
time and date to microseconds since Unix Epoch.
When using Last Access Time as an advanced filter or as a reference time, you must enable last
access time updates for S3 buckets.
Note: Last access time updates are always enabled for Swift containers, but are disabled by default
for S3 buckets.

The table summarizes the behavior applied to all objects in the bucket when last access time is
disabled or enabled.

Request to retrieve an object, its access control list, or its metadata
• If last access time is disabled (default): last access time is not updated, and the object is
  not added to the ILM evaluation queue.
• If last access time is enabled: last access time is updated, and the object is added to the
  ILM evaluation queue.

Request to update an object’s metadata
• If last access time is disabled (default): last access time is updated, and the object is
  added to the ILM evaluation queue.
• If last access time is enabled: last access time is updated, and the object is added to the
  ILM evaluation queue.

Request to copy an object from one bucket to another
• If last access time is disabled (default): for the source copy, last access time is not
  updated and the object is not added to the ILM evaluation queue; for the destination copy,
  last access time is updated and the object is added to the ILM evaluation queue.
• If last access time is enabled: for both the source and destination copies, last access time
  is updated and the objects are added to the ILM evaluation queue.

Request to complete a multipart upload
• Whether last access time is disabled or enabled: last access time is updated for the
  assembled object, and the assembled object is added to the ILM evaluation queue.

Steps

1. If you are using Ingest Time or Last Access Time as advanced filters, determine the UTC date and
time you want to use in the filter.
You might need to convert from your local time zone to UTC.

2. Convert the UTC date and time to microseconds since Unix Epoch.

For example, use date from a Linux command prompt.

# date -d '2015-03-14 00:00:00 UTC' +%s000000

1426291200000000
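
The same conversion can also be done in Python; this is a sketch, and any method that yields
microseconds since Unix Epoch works.

# Convert a UTC date and time to microseconds since Unix Epoch.
from datetime import datetime, timezone

dt = datetime(2015, 3, 14, 0, 0, 0, tzinfo=timezone.utc)
print(int(dt.timestamp() * 1_000_000))  # 1426291200000000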

3. If you are using Last Access Time as an advanced filter or as a reference time, enable last access
time updates on each S3 bucket specified in that rule.
You can use the Tenant Manager or the Tenant API to enable updates to last access time for S3
buckets. See the instructions for using tenant accounts.
Attention: Be aware that enabling last access time updates can reduce performance, especially
in systems with small objects. The performance impact occurs because StorageGRID Webscale
must perform these additional steps every time objects are retrieved:

• Update the objects with new timestamps

• Add the objects to the ILM queue, so they can be reevaluated against current ILM rules and
policy

Related information
Implementing S3 client applications
Using tenant accounts

Creating, simulating, and activating an ILM policy


When you create an ILM policy, you start by selecting and arranging the ILM rules. Then, you verify
the behavior of your proposed policy by simulating it against previously ingested objects. When you
are satisfied that the proposed policy is functioning as intended, you can activate it to create the
active policy.
When creating an ILM policy:

• Consider all of the different types of objects that might be ingested into your grid. Make sure the
policy includes rules to match and place these objects as required. If no rules match an object, the
policy's default rule controls where that object is placed and for how long it is retained.
• Make sure that the rules in the policy are in the correct order. After an object has been matched by
a rule, none of the following rules in the policy are applied to that object.

• Keep the ILM policy as simple as possible. This avoids potentially dangerous situations where
object data is not protected as intended when changes are made to the StorageGRID Webscale
system over time.

Caution: An ILM policy that has been incorrectly configured can result in unrecoverable data loss.
Before activating an ILM policy, carefully review the ILM policy and its ILM rules, and then
simulate the ILM policy. Always confirm that the ILM policy will work as intended.

Creating a policy consists of these main tasks:

Steps
1. Creating a proposed ILM policy on page 134
2. Simulating an ILM policy on page 137
3. Activating the ILM policy on page 145
4. Verifying an ILM policy with object metadata lookup on page 146

Related concepts
What an information lifecycle management policy is on page 106

Creating a proposed ILM policy


You can create a proposed ILM policy from scratch, or you can clone the current active policy if you
want to start with the same set of rules.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You have created the rules you want to add to the proposed policy. Note that you can save a
proposed policy, create additional rules, and then edit the policy to add the new rules.

About this task


Typical reasons for creating a proposed ILM policy include:

• You made changes to a storage pool.

• You added an Archive Node to the StorageGRID Webscale grid.

• New storage retention requirements were defined. For example, there was a change in regulatory
requirements.

The proposed ILM policy must include at least one ILM rule.

Steps

1. Select ILM > Policies.


The ILM Policies page appears. The Baseline 2 Copies Policy is the system's default policy. This
policy includes the stock ILM rule (Make 2 Copies), and it applies if no other policy has been
activated.
From this page, you can review the proposed, active, and historical policies; create, edit, or
remove a proposed policy; clone the active policy; or view the details for any policy.

Note: If the global Compliance setting is enabled, the ILM Policies page indicates which ILM
rules are compliant. See “Managing S3 buckets and objects for compliance.”

2. Determine how you want to create the proposed ILM policy.


Option: Create a new proposed policy that has no rules already selected
  a. If a proposed ILM policy currently exists, select that policy, and click Remove.
     You cannot create a new proposed policy if a proposed policy already exists.
  b. Click Create Proposed Policy.

Option: Create a proposed policy based on the active policy
  a. If a proposed ILM policy currently exists, select that policy, and click Remove.
     You cannot clone the active policy if a proposed policy already exists.
  b. Select the active policy from the table.
  c. Click Clone.

Option: Edit the existing proposed policy
  a. Select the proposed policy from the table.
  b. Click Edit.

The Configure ILM Policy dialog box appears.


If you are creating a new proposed policy, all fields are blank and no rules are selected.

If you are cloning the active policy, the Name field shows the name of the active policy, appended
by a version number (“v2” in the example). The rules used in the active policy are selected and
shown in their current order.

3. Enter a unique name for the proposed policy in the Name field.
You must enter between 1 and 64 characters. If you are cloning the active policy, you can use the
current name with the appended version number or you can enter a new name.

4. Enter the reason you are creating a new proposed policy in the Reason for change field.
You must enter between 1 and 128 characters.

5. To add rules to the policy, click Select Rules.



The Select Rules for Policy dialog box appears, with all defined rules listed. If you are cloning the
active policy, any rules that are currently in use are selected.

6. Select and unselect the check boxes to choose the rules you want to add to the proposed policy.

You can click the rule name or the more details icon to view the settings for each rule. Click
Close when you are done viewing rule details.

7. When you are done selecting rules for the proposed policy, click Apply.

8. Drag the rows to reorder the rules in the proposed policy.


Caution: You must confirm that the ILM rules are in the correct order. When the policy is
applied to an object, object metadata is evaluated by the rules in the order listed, starting at the
top.

9. Select the radio button to specify which rule you want to be the default rule for this policy.
Every ILM policy must contain one default ILM rule. The placement instructions for the default
rule are applied to any objects that are not matched by the other rules in the policy.

Note: If the global Compliance setting is enabled, the default rule must be a compliant rule.
See “Managing S3 buckets and objects for compliance.”

10. As required, click the delete icon to delete any rules that you do not want in the policy, or click
Select Rules to add more rules.

11. When you are done, click Save.


The ILM Policy page updates, and the policy you saved is shown as Proposed. Proposed policies
do not have start and end dates. The Simulate and Activate buttons are now enabled.

Related concepts
What an information lifecycle management policy is on page 106

Related tasks
Managing S3 buckets and objects for compliance on page 162

Simulating an ILM policy


You should simulate a proposed policy on test objects before activating the policy and applying it to
your production data. The simulation window provides a standalone environment that is safe for
testing policies before they are activated and applied to data in the production environment.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You must know the S3 bucket/object-key or the Swift container/object-name for each object you
want to test, and you must have already ingested those objects.

About this task


You must carefully select the objects you want the proposed policy to test. To simulate a policy
thoroughly, you should test at least one object for each filter in each rule.
For example, if a policy includes one rule to match objects in bucket A and another rule to match
objects in bucket B, you must select at least one object from bucket A and one object from bucket B
to test the policy thoroughly. If the policy includes a default rule to place all other objects, you must
test at least one object from another bucket.
When simulating a policy, the following considerations apply:

• After you make changes to a policy, save the proposed policy. Then, simulate the behavior of the
saved proposed policy.

• When you simulate a policy, the ILM rules in the policy filter the test objects, so you can see
which rule was applied to each object. However, no object copies are made and no objects are
placed. Running a simulation does not modify your data, rules, or the policy in any way.
• The Simulation page retains the objects you tested until you close, navigate away from, or refresh
the ILM Policies page.

• Simulation returns the name of the matched rule. To determine which storage pool or Erasure
Coding profile is in effect, you can view the Retention Diagram by clicking the rule name or the
more details icon.

• If S3 Versioning is enabled, the policy is only simulated against the current version of the object.

Steps

1. Select and arrange the rules, and save the proposed policy.
The Demo policy in this example has three rules:

• The first rule, X-men, applies only to objects in a specific tenant account and uses an
advanced filter to match series=x-men user metadata.

• The second rule, PNGs, matches keys ending in .png.

• The last rule is the stock rule, Make 2 Copies, and it is selected as the default. This rule
applies to any objects that do not match the other rules.

2. Click Simulate.
The Simulate ILM Policy dialog box appears.

3. In the Object field, enter the S3 bucket/object-key or the Swift container/object-name for a test
object, and click Simulate.
Note: A message appears if you specify an object that has not been ingested.

4. Under Simulation Results, confirm that each object was matched by the correct rule.
In the example, the Havok.png and Warpath.jpg objects were correctly matched by the X-men
rule. The Fullsteam.png object, which does not include series=x-men user metadata, was
not matched by the X-men rule but was correctly matched by the PNGs rule. None of the test
objects was matched by the Make 2 Copies rule.

Examples for simulating ILM policies


These examples show how you can verify ILM rules by simulating the ILM policy before activating
it.

Choices
• Example 1: Verifying rules when simulating a proposed ILM policy on page 139
• Example 2: Reordering rules when simulating a proposed ILM policy on page 140
• Example 3: Correcting a rule when simulating a proposed ILM policy on page 142

Example 1: Verifying rules when simulating a proposed ILM policy


This example shows how to verify rules when simulating a proposed policy.

About this task


In this example, the “xmen photo files” policy is being simulated against the ingested objects in a
bucket called “photos.” The policy includes three rules, as follows:

• The first rule, X-men, applies only to objects in a specific tenant account and filters for
series=x-men user metadata. This rule is also marked as the default rule for the policy.

• The second rule, PNGs, filters for keys ending with .png.

• The last rule, JPGs, filters for keys ending with .jpg.

Steps

1. After adding the rules and saving the policy, click Simulate.
The Simulate ILM Policy dialog box appears.

2. In the Object field, enter the S3 bucket/object-key or the Swift container/object-name for a test
object, and click Simulate.
The Simulation Results appear, showing which rule in the policy matched each object you tested.

3. Confirm that each object was matched by the correct rule.


In this example:

a. The Fullsteam.png object did not match the X-men rule, but did match the PNGs rule.

b. The Havok.png and the Warpath.jpg objects both matched the X-men rule, which was
evaluated first.
Note that even if these two files had not matched the X-men rule, they would have matched
one of the subsequent rules: either PNGs or JPGs.

Example 2: Reordering rules when simulating a proposed ILM policy


This example shows how you can reorder rules to change the results when simulating a policy.

About this task


In this example, the “Demo” policy is being simulated. This policy, which is intended to find objects
that have series=x-men user metadata, includes three rules, as follows:

• The first rule, PNGs, filters for key names that end in .png.

• The second rule, X-men, applies only to objects in a specific tenant account and filters for
series=x-men user metadata.

• The last rule is the default rule, Make 2 Copies, which will match any objects that do not match
the first two rules.

Steps

1. After adding the rules and saving the policy, click Simulate.

2. In the Object field, enter the S3 bucket/object-key or the Swift container/object-name for a test
object, and click Simulate.
The Simulation Results appear, showing that the Havok.png object was matched by the PNGs
rule.

However, the rule that the Havok.png object was meant to test was the X-men rule.

3. To resolve the issue, reorder the rules.

a. Click Finish to close the Simulate ILM Policy page.

b. Click Edit to edit the policy.

c. Drag the X-men rule to the top of the list.



d. Click Save.

4. Click Simulate.
The objects you previously tested are re-evaluated against the updated policy, and the new
simulation results are shown. In the example, the Rule Matched column shows that the
Havok.png object now matches the X-men metadata rule, as expected. The Previous Match
column shows that the PNGs rule matched the object in the previous simulation.

Note: If you stay on the Configure Policies page, you can re-simulate a policy after making
changes without needing to re-enter the names of the test objects.

Example 3: Correcting a rule when simulating a proposed ILM policy


This example shows how to simulate a policy, correct a rule in the policy, and continue the
simulation.

About this task


In this example, the “Demo” policy is being simulated. This policy is intended to find objects that
have series=x-men user metadata. However, unexpected results occurred when simulating this
policy against the Beast.jpg object. Instead of matching the X-men metadata rule, the object
matched the default, Make 2 Copies, rule.

When a test object is not matched by the expected rule in the policy, you must examine each rule in
the policy and correct any errors.

Steps

1. For each rule in the policy, view the rule settings by clicking the rule name or the more details
icon on any dialog box where the rule is displayed.

2. Review the rule's tenant account, reference time, and filtering criteria.
In this example, the metadata for the X-men rule includes an error. The metadata value was
entered as “x-men1” instead of “x-men.”

3. To resolve the error, correct the rule, as follows:

• If the rule is part of the proposed policy, you can either clone the rule or remove the rule from
the policy and then edit it.

• If the rule is part of the active policy, you must clone the rule. You cannot edit or remove a
rule from the active policy.

Option: Cloning the rule

a. Select ILM > Rules.

b. Select the incorrect rule, and click Clone.

c. Change the incorrect information, and click Save.

d. Select ILM > Policies.

e. Select the proposed policy, and click Edit.

f. Click Select Rules.

g. Select the check box for the new rule, unselect the check box for the original rule, and click
Apply.

h. Click Save.

Option: Editing the rule

a. Select the proposed policy, and click Edit.

b. Click the delete icon to remove the incorrect rule, and click Save.

c. Select ILM > Rules.

d. Select the incorrect rule, and click Edit.

e. Change the incorrect information, and click Save.

f. Select ILM > Policies.

g. Select the proposed policy, and click Edit.

h. Select the corrected rule, click Apply, and click Save.

4. Perform the simulation again.


Note: Because you navigated away from the ILM Policies page to edit the rule, the objects you
previously entered for simulation are no longer displayed. You must re-enter the names of the
objects.

In this example, the corrected X-men rule now matches the Beast.jpg object based on the
series=x-men user metadata, as expected.

Activating the ILM policy


After you add ILM rules to a proposed ILM policy, simulate the policy, and confirm it behaves as you
expect, you are ready to activate the proposed policy.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You have saved and simulated the proposed ILM policy.


Caution: Errors in an ILM policy can cause unrecoverable data loss. Carefully review and
simulate the policy before activating it to confirm that it will work as intended. When a new
ILM policy goes into effect, the StorageGRID Webscale system immediately uses it to manage
all objects in the grid, including existing objects and newly ingested objects.

About this task


When you activate an ILM policy, the system distributes the new policy to all nodes. However, the
new active policy might not actually take effect until all grid nodes are available to receive the new
policy. In some cases, the system waits to implement a new active policy to ensure that grid objects
are not accidentally removed.

• If you make policy changes that increase data redundancy or durability, those changes are
implemented immediately. For example, if you activate a new policy that uses a Make 3 Copies
rule instead of a Make 2 Copies rule, that policy will be implemented right away because it
increases data redundancy.

• If you make policy changes that could decrease data redundancy or durability, those changes will
not be implemented until all grid nodes are available. For example, if you activate a new policy
that uses a Make 2 Copies rule instead of a Make 3 Copies rule, the new policy will be marked as
“Active,” but it will not take effect until all nodes are online and available.

Steps

1. When you are ready to activate a proposed policy, select the policy on the ILM Policies page and
click Activate.
A warning message is displayed, prompting you to confirm that you want to activate the proposed
policy.

2. Click OK.

Result
When a new ILM policy has been activated:

• The policy is shown with a Policy State of Active in the table on the ILM Policies page. The Start
Date entry indicates the date and time the policy was activated.

• The previously active policy is shown with a Policy State of Historical. The Start Date and End
Date entries indicate when the policy became active and when it was no longer in effect.

Verifying an ILM policy with object metadata lookup


After you have activated an ILM policy, you should ingest representative test objects into the
StorageGRID Webscale system. You should then perform an object metadata lookup to confirm that
copies are being made as intended and placed in the correct locations.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.

• You must have an object identifier, which can be one of:

◦ UUID: The object's Universally Unique Identifier. Enter the UUID in all uppercase.

◦ CBID: The object's unique identifier within StorageGRID Webscale. You can obtain an
object's CBID from the audit log. Enter the CBID in all uppercase.

◦ S3 bucket and object key: When an object is ingested through the S3 interface, the client
application uses a bucket and object key combination to store and identify the object.

◦ Swift container and object name: When an object is ingested through the Swift interface, the
client application uses a container and object name combination to store and identify the
object.

Steps

1. Ingest the object.

2. Select ILM > Object Metadata Lookup.

3. Type the object's identifier in the Identifier field.


You can enter a UUID, CBID, S3 bucket/object-key, or Swift container/object-name.

4. Click Look Up.


The object metadata lookup results appear. This page lists the following types of information:

• System metadata, including the object ID (UUID), the object name, the name of the container,
the tenant account name or ID, the logical size of the object, the date and time the object was
first created, and the date and time the object was last modified.

• Any custom user metadata key-value pairs associated with the object.
• For S3 objects, any object tag key-value pairs associated with the object.

• For replicated object copies, the current storage location of each copy, including the name of
the grid node and the full path to the disk location of the object.

• For erasure-coded object copies, the current storage location of each fragment, including the
name of the grid node and the type of fragment (data or parity).

• For segmented objects, a list of object segments including segment identifiers and data sizes.
For objects with more than 100 segments, only the first 100 segments are shown.

• All object metadata in the unprocessed, internal storage format.


The following example shows the object metadata lookup results for an S3 test object that is
stored as two replicated copies.

5. Confirm that the object is stored in the correct location or locations and that it is the correct type
of copy.
Note: If the Audit option is enabled, you can also monitor the audit log for the ORLM Object
Rules Met message. The ORLM audit message can provide you with more information about
the status of the ILM evaluation process, but it cannot give you information about the
correctness of the object data’s placement or the completeness of the ILM policy. You must
evaluate this yourself. For details, see the information about understanding audit messages.
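
You can also cross-check an S3 test object's user metadata and tags from the client side. This is a
minimal sketch using Python and boto3, reusing the placeholder endpoint, credentials, and test
object from the earlier examples; it supplements, but does not replace, the Grid Manager lookup.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid.example.com:8082",  # placeholder
    aws_access_key_id="TENANT_ACCESS_KEY",
    aws_secret_access_key="TENANT_SECRET_KEY",
)

# HEAD returns the system and user metadata for the object.
head = s3.head_object(Bucket="photos", Key="Havok.png")
print(head["ContentLength"], head["LastModified"])
print(head["Metadata"])  # user metadata, for example {'series': 'x-men'}

# Object tags are retrieved with a separate call.
tags = s3.get_object_tagging(Bucket="photos", Key="Havok.png")
print(tags["TagSet"])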

Related concepts
Configuring audit client access on page 234

Related information
Understanding audit messages
Implementing S3 client applications

Implementing Swift client applications

Working with ILM rules and ILM policies


Once you have created ILM rules and an ILM policy, you can continue to work with them, modifying
their configuration as your storage requirements change.

Choices
• Deleting an ILM rule on page 149
• Editing an ILM rule on page 149
• Cloning an ILM rule on page 150
• Viewing the ILM policy activity queue on page 151

Deleting an ILM rule


To keep the list of current ILM rules manageable, delete any ILM rules that you are not likely to use.
You cannot delete the stock ILM rule (Make 2 Copies), ILM rules listed in the active policy, or ILM
rules currently listed in the proposed policy.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select ILM > Rules.

2. Select the ILM rule you want to delete, and click Remove.

3. Click OK to confirm that you want to delete the ILM rule.


The ILM rule is deleted.

Related concepts
Creating, simulating, and activating an ILM policy on page 133

Editing an ILM rule


You might need to edit an ILM rule to change a filter or placement instruction. You cannot edit the
stock ILM rule (Make 2 Copies), ILM rules listed in the active policy, ILM rules listed in the
proposed policy, or ILM rules created before StorageGRID Webscale version 10.3.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select ILM > Rules.



The ILM Rules page appears. This page shows all available rules and indicates which rules are
being used in the active policy or the proposed policy.

2. Select a rule that is not being used, and click Edit.


The Edit ILM Rule wizard opens.

3. Complete the pages of the Edit ILM Rule wizard, following the steps for creating an ILM rule
and using advanced filters, as necessary.
When editing an ILM rule, you cannot change its name.

4. Click Save.

Related concepts
Using advanced filters in ILM rules on page 128

Related tasks
Creating an ILM rule on page 123

Cloning an ILM rule


You cannot edit a rule if it is being used in the proposed ILM policy or the active ILM policy. Instead,
you can clone a rule and make any required changes to the cloned copy. Then, if required, you can
remove the original rule from the proposed policy and replace it with the modified version. You
cannot clone an ILM rule if it was created using StorageGRID Webscale version 10.2 or earlier.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.



• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select ILM > Rules.


The ILM Rules page appears.

2. Select the ILM rule you want to clone, and click Clone.
The Create ILM Rule wizard opens.

3. Update the cloned rule by following the steps for editing an ILM rule and using advanced filters.
When cloning an ILM rule, you must enter a new name.

4. Click Save.
The new ILM rule is created.

Related concepts
Using advanced filters in ILM rules on page 128

Related tasks
Editing an ILM rule on page 149

Viewing the ILM policy activity queue


You can view the number of objects that are in the queue to be evaluated against the ILM policy at
any time. You might want to monitor the ILM processing queue to determine system performance. A
large queue might indicate that the system cannot keep up with the ingest rate, that the load from
the client applications is too great, or that some abnormal condition exists.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Dashboard.

2. Monitor the Information Lifecycle Management (ILM) section.


You can click the question mark to see a description of the items in this section.

Example ILM rules and policies


You can use the following examples as starting points for defining your own ILM rules and policy.

• Example 1: ILM rules and policy for object storage


• Example 2: ILM rules and policy for EC object size filtering on page 156
• Example 3: ILM rules and policy for better protection for image files on page 158

Example 1: ILM rules and policy for object storage


You can use the following example rules and policy as a starting point when defining an ILM policy
to meet your object protection and retention requirements.
Caution: The following ILM rules and policy are only examples. There are many ways to
configure ILM rules. Carefully analyze your ILM rules before adding them to an ILM policy to
confirm that they will work as intended to protect content from loss.

ILM rule 1 for example 1: Copy object data to two data centers
This example ILM rule copies object data to storage pools in two data centers.

Storage Pools: Two storage pools, each at a different data center, named Storage Pool DC1 and
Storage Pool DC2.
Rule Name: Two Copies Two Data Centers
Reference Time: Ingest Time
Content Placement: On Day 0, keep two replicated copies forever—one in Storage Pool DC1 and one
in Storage Pool DC2.

ILM rule 2 for example 1: Erasure Coding profile with bucket matching
This example ILM rule uses an Erasure Coding profile and an S3 bucket to determine where and how
long the object is stored.

Erasure Coding Profile: One storage pool across three data centers, using a 4+2 erasure-coding
scheme.
Rule Name: EC for S3 Bucket FinanceRecords
Reference Time: Ingest Time
Content Placement: For objects in the S3 bucket FinanceRecords, create one erasure-coded copy in
the pool specified by the Erasure Coding profile. Keep this copy forever.

ILM rule 3 for example 1: Store object to DC1 and Archive


This example ILM rule creates two copies. One copy is stored in Data Center 1 for one year, and the
second copy is stored in an Archive Node forever.

Storage Pools: A disk storage pool and an archive storage pool.
Rule Name: Archive
Reference Time: Ingest Time
Content Placement:
• On Day 0, keep a replicated copy in Storage Pool DC1 for 365 days; temporary copies in Storage
Pool DC2.
• On Day 0, keep a replicated copy in Storage Pool Archive forever; temporary copies in All Storage
Nodes.

ILM policy for example 1


The StorageGRID Webscale system allows you to design sophisticated and complex ILM policies;
however, in practice, most ILM policies are simple.
A typical ILM policy for a multi-site topology might include ILM rules such as the following:

• At ingest, use 4+2 Erasure Coding to store all objects belonging to the S3 bucket FinanceRecords
across three data centers.

• If an object does not match the first ILM rule, use the policy's default ILM rule, Two Copies Two
Data Centers, to store a copy of that object in two data centers, DC1 and DC2.

Example 2: ILM rules and policy for EC object size filtering


You can use the following example rules and policy as starting points to define an ILM policy that
filters by object size to meet recommended EC requirements.
Caution: The following ILM rules and policy are only examples. There are many ways to
configure ILM rules. Carefully analyze your ILM rules before adding them to an ILM policy to
confirm that they will work as intended to protect content from loss.

ILM rule 1 for example 2: Use EC for all objects larger than 200 KB
This example ILM rule erasure codes all objects larger than 200 KB (0.20 MB).

Rule Name: EC only objects > 200 KB
Reference Time: Ingest Time
Advanced Filtering for Object Size: Object Size (MB) greater than 0.20
Content Placement: Create 1 erasure-coded copy in all Storage Nodes.

The placement instructions specify that one erasure coded copy be created in all Storage Nodes.

ILM rule 2 for example 2: Make 2 copies


This example ILM rule creates two replicated copies and does not filter by object size. This rule is
the second rule in the policy. Because ILM rule 1 for example 2 filters out all objects larger than 200
KB, ILM rule 2 for example 2 only applies to objects that are 200 KB or smaller.

Rule Name: Make 2 Copies
Reference Time: Ingest Time
Advanced Filtering for Object Size: None
Content Placement: Create 2 replicated copies in all Storage Nodes.

ILM policy for example 2: Use EC for objects larger than 200 KB
In this example policy, objects larger than 200 KB are erasure coded, and all other objects (those
200 KB or smaller) are replicated using the default catch-all Make 2 Copies rule.
This example ILM policy includes the following ILM rules:

• Erasure code all objects larger than 200 KB.

• If an object does not match the first ILM rule, use the default ILM rule to create two replicated
copies of that object. Because objects larger than 200 KB have been filtered out by rule 1, rule 2
only applies to objects that are 200 KB or smaller.
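
To make the size boundary concrete, the following sketch shows which of the two example rules
would match an object of a given size. It assumes the decimal interpretation of 200 KB
(0.20 MB = 200,000 bytes) implied by the Object Size (MB) filter; the function is illustrative only
and is not part of any StorageGRID interface.

def matching_rule(object_size_bytes):
    """Return the example-2 rule that would match an object of this size."""
    # Rule 1 uses the advanced filter: Object Size (MB) greater than 0.20.
    if object_size_bytes > 200_000:  # 0.20 MB, decimal
        return "EC only objects > 200 KB"
    # Anything not matched by rule 1 falls through to the default rule.
    return "Make 2 Copies"

print(matching_rule(250_000))  # EC only objects > 200 KB
print(matching_rule(150_000))  # Make 2 Copies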

Example 3: ILM rules and policy for better protection for image files
You can use the following example rules and policy to ensure that images larger than 200 KB are
erasure coded and that three copies are made of smaller images.
Caution: The following ILM rules and policy are only examples. There are many ways to
configure ILM rules. Carefully analyze your ILM rules before adding them to an ILM policy to
confirm that they will work as intended to protect content from loss.

ILM rule 1 for example 3: Use EC for image files larger than 200 KB
This example ILM rule uses advanced filtering to erasure code all image files larger than 200 KB.

Rule Name: EC image files > 200 KB
Reference Time: Ingest Time
Advanced Filtering for User Metadata: User Metadata type equals image files
Advanced Filtering for Object Size: Object Size (MB) greater than 0.2
Content Placement: Create 1 erasure-coded copy in all Storage Nodes.

Because this rule is configured as the first rule in the policy, the erasure coding placement
instructions will only apply to images that are larger than 200 KB.

ILM rule 2 for example 3: Replicate 3 copies for all remaining image files
This example ILM rule uses advanced filtering to specify that image files be replicated.

Rule Name: 3 copies for image files
Reference Time: Ingest Time
Advanced Filtering for User Metadata: User Metadata type equals image files
Content Placement: Create 3 replicated copies in all Storage Nodes.

Because the first rule in the policy has already matched image files larger than 200 KB, these
placement instructions only apply to image files 200 KB or smaller.

ILM policy for example 3: Better protection for image files


In this example, the ILM policy uses three ILM rules to erasure code image files larger than 200 KB
(0.2 MB), create three replicated copies of image files 200 KB or smaller, and make two replicated
copies of any non-image files.
This example ILM policy includes rules that perform the following:

• Erasure code all image files larger than 200 KB.

• Create three copies of any remaining image files (that is, images that are 200 KB or smaller).

• Apply the stock rule, Make 2 Copies, as the default to any remaining objects (that is, all non-
image files).

Managing S3 buckets and objects for compliance


In some cases, an S3 tenant account might need to comply with regulations that require object data to
be preserved for a specified amount of time. If your StorageGRID Webscale system includes such
tenants, you can enable the global Compliance setting and create compliant ILM rules to manage the
objects in compliant S3 buckets.

How StorageGRID Webscale protects compliant data


When StorageGRID Webscale is properly configured and when compliant S3 buckets, ILM rules,
and ILM policies have been correctly applied, StorageGRID Webscale provides functionality that
prevents objects in S3 buckets from being overwritten, deleted, or altered until the specified retention
period has expired.
StorageGRID Webscale meets the relevant storage requirements of these regulations:

• US SEC (Securities and Exchange Commission) regulation 17 CFR § 240.17a-4(f), which regulates
exchange members, brokers, or dealers.

• US CFTC (Commodity Futures Trading Commission) regulation 17 CFR § 1.31(b)-(c), which
regulates commodity futures trading.

Compliance and retention


When the global Compliance setting is enabled for the StorageGRID Webscale system, S3 tenant
users can create compliant buckets for object data, such as legal and financial records, that needs to
be preserved for a certain amount of time. When creating a compliant bucket, users can specify the
retention period for bucket objects and select whether object data will be automatically deleted when
the retention period expires.
Each object's retention period starts when the object is ingested into the bucket. During the retention
period, the object can be retrieved, but it cannot be modified or deleted. As required, tenant users can
increase a bucket's retention period, place the bucket under a legal hold (meaning that objects cannot
be deleted when their retention period expires), remove a legal hold, or change the auto-delete
setting.

Compliance and the storage of duplicate data


StorageGRID Webscale ensures that duplicate copies of each compliant object are stored on the grid.
When the global Compliance setting is enabled, the active and any proposed ILM policies must use a
compliant ILM rule as their default rule. Compliant rules create at least two replicated object copies
or one erasure-coded copy on Storage Nodes. These copies must exist from day 0 until the retention
period expires and the objects are deleted.

Compliance and security features


StorageGRID Webscale protects compliant objects with the following platform security features:

• Internal public key infrastructure and node certificates are used to authenticate and encrypt
internode communication. Internode communication is secured by TLS.

• Rules for firewalls and iptables are automatically configured to control incoming and outgoing
network traffic, as well as closing unused ports.

• The base operating system of StorageGRID Webscale appliances and virtual nodes is hardened;
unrelated software packages are removed.

• Root login over SSH is disabled on all grid nodes. SSH access between nodes uses certificate
authentication.

• Separate networks are available for Client, Admin, and internal Grid traffic.

Compliance workflow
The workflow diagram shows the high-level steps for enabling the global Compliance setting, for
creating and managing compliant ILM rules and ILM policies, and for creating and managing S3
buckets that are compliant. As a grid administrator, you must coordinate closely with the tenant
administrator to ensure that the objects in compliant buckets are protected in a manner that satisfies
regulatory requirements.
As the workflow diagram shows, a grid administrator must enable the global Compliance setting for
the entire StorageGRID Webscale system before an S3 tenant can create compliant buckets. The grid
administrator must also ensure that the default rule in the grid's active ILM policy satisfies the data-
protection requirements of objects in compliant buckets.
Then, once the global Compliance setting has been enabled, tenants can create and manage compliant
buckets using the Tenant Manager, the Tenant Management API, or the S3 REST API.

Related information
Using tenant accounts
Implementing S3 client applications

Considerations for compliance


Make sure you review the considerations for using the global Compliance setting as well as the
restrictions StorageGRID Webscale places on compliant buckets, compliant objects, and compliant
ILM rules and policies.

Considerations for using the global Compliance setting


• You must enable the global Compliance setting before any S3 tenant can create a compliant
bucket.

• After you enable the global Compliance setting, you cannot disable this setting.

• Enabling the global Compliance setting allows all S3 tenant accounts to use the Tenant Manager,
the Tenant Management API, or the S3 REST API to create and manage compliant buckets. Users
with the appropriate permissions can create compliant buckets, set and increase the retention
period for objects in the bucket, specify how objects can be deleted at the end of their retention
period, and optionally place all objects in the bucket under a legal hold or lift a legal hold.
For example, this tenant user is creating a compliant bucket named bank-records in the default
us-east-1 region. Objects in this bucket will be retained for 6 years and then deleted
automatically. This bucket is not currently under a legal hold.

• When the global Compliance setting is enabled, you cannot create a new proposed ILM policy or
activate an existing proposed ILM policy unless the default rule in the policy satisfies the
requirements of S3 compliant buckets. The ILM Rules and ILM Policies pages indicate which
ILM rules are compliant.
In the following example, the ILM Rules page lists two rules that are compatible with compliant
buckets.

Restrictions for using compliant buckets


• If S3 tenants need to create compliant buckets, they must enable compliance and specify
compliance settings when they create the bucket. After a bucket has been saved, compliance
cannot be disabled for the bucket.

• The retention period for the bucket specifies the minimum amount of time each object in that
bucket must be preserved (stored) within StorageGRID Webscale.

• Tenant users can edit bucket settings to increase the retention period, but they can never decrease
this value.

• If a tenant account is notified of a pending legal action or regulatory investigation, users can
preserve relevant information by placing a legal hold on the bucket. When a bucket is under a
legal hold, no object in that bucket can be deleted even if its retention period has ended. As soon
as the legal hold is lifted, objects in the bucket can be deleted when their retention periods end.

• Objects can be added to a compliant bucket at any time, regardless of the bucket's compliance
settings.

• Objects can be retrieved from a compliant bucket at any time, regardless of the bucket's
compliance settings.

• Versioning is not supported for compliant buckets.

Restrictions for objects in compliant buckets


Each object that is saved in a compliant bucket goes through three stages:

1. Object ingest
• When an object is ingested, the system generates metadata for the object that includes a
unique object identifier (UUID) and the ingest date and time. The object inherits the
compliance settings from the bucket.

• After an object is ingested into a compliant bucket, its data, S3 user-defined metadata, or S3
object tags cannot be modified, even after the retention period expires.

• StorageGRID Webscale maintains three copies of all object metadata at each site to provide
redundancy and protect object metadata from loss. Metadata is stored independently of object
data.

2. Retention period
• The retention period for an object starts when the object is ingested into the bucket.

• Each time the object is accessed or looked up, the compliance settings for the bucket are also
looked up. The system uses the object's ingest date and time and the bucket's retention period
setting to calculate when the object's retention period will expire (see the sketch after this list).

• During an object's retention period, multiple copies of the object are stored by StorageGRID
Webscale. The exact number and type of copies and the storage locations are determined by
the compliant rules in the active ILM policy.
Note: As required, you might need to add new ILM rules to manage the objects in a
particular bucket.

• During an object's retention period, or when legal hold is enabled for the bucket, the object
cannot be deleted.

3. Object deletion
• When an object's retention period ends, all copies of the object can be deleted, unless legal
hold is enabled for the bucket.

• When an object’s retention period ends, a bucket-level compliance setting allows tenant users
to control how objects are deleted: by users when required or automatically by the system.

• If the bucket setting is to delete objects automatically, all copies of the object are removed by
the scanning ILM process in StorageGRID Webscale. When an object’s retention period ends,
the object is scheduled for deletion. You can look for the IDEL (ILM Initiated Delete)
message in the audit log to determine when ILM has started the process of auto-deleting an
object because its retention period has expired (assuming the auto-delete setting is enabled
and legal hold is off).
Note: The actual amount of time needed to delete all object copies can vary, depending on
the number of objects in the grid and how busy the grid processes are.
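
The expiry calculation referenced in stage 2 is simple date arithmetic. The following is a minimal
sketch; the ingest timestamp is invented for illustration, and the 2,190-day (6-year) retention
period matches the bank-records example later in this section.

from datetime import datetime, timedelta, timezone

# The grid records the ingest time when the object enters the bucket.
ingest_time = datetime(2018, 10, 1, 12, 0, 0, tzinfo=timezone.utc)

# A 6-year retention period, expressed in days as in the bucket settings.
retention_days = 6 * 365  # 2,190 days

retention_expiry = ingest_time + timedelta(days=retention_days)
print(retention_expiry)  # 2024-09-29 12:00:00+00:00, the earliest deletion time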

Restrictions for compliant ILM rules


If you want to enable the global Compliance setting, you must ensure that the default rule in your
active ILM policy is compliant. A compliant rule satisfies the requirements of compliant S3 buckets:

• It must create at least two replicated object copies or one erasure-coded copy.

• These copies must exist on Storage Nodes for the entire duration of each line in the placement
instructions.

• Object copies cannot be saved on Archive Nodes.

• At least one line of the placement instructions must start at day 0, using Ingest Time as the
reference time.

• At least one line of the placement instructions must be “forever.” The actual meaning of “forever”
is determined by the compliance settings for each bucket.

For example, this rule satisfies the requirements of compliant S3 buckets. It stores three replicated
object copies from Ingest Time (day 0) to “forever.” The objects will be stored on Storage Nodes at
three data centers.

Note: The Make 2 Copies stock rule is compliant. You can use it as the default rule in a compliant
policy.

When you configure the placement instructions for a compliant rule, you must consider where the
object copies will be stored. For example, if your deployment includes more than one site, you can
enable site-loss protection for compliant objects by creating a storage pool for each site and
specifying both storage pools in the rule's placement instructions. See “Using multiple storage pools
for cross-site replication.”

Restrictions for active and proposed ILM policies


When the global Compliance setting is enabled, active and proposed ILM policies can include both
compliant and non-compliant rules.

• The default rule in the active or any proposed ILM policy must be compliant.

• Non-compliant rules only apply to objects in non-compliant buckets.

• Compliant rules can apply to objects in any compliant or non-compliant bucket.

As illustrated in “Example: Using compliant ILM rules in an ILM policy,” a compliant ILM policy
might include these three rules:

1. A compliant rule that creates erasure-coded copies of the objects in a specific compliant S3
bucket. The EC copies are stored on Storage Nodes from day 0 to forever.

2. A non-compliant rule that creates two replicated object copies on Storage Nodes for a year and
then moves one object copy to Archive Nodes and stores that copy forever. This rule only applies
to non-compliant buckets because it stores only one object copy forever and it uses Archive
Nodes.

3. A default, compliant rule that creates two replicated object copies on Storage Nodes from day 0 to
forever. This rule applies to any object in any compliant or non-compliant bucket that was not
filtered out by the first two rules.

Related concepts
Using multiple storage pools for cross-site replication on page 116
Example: Using compliant ILM rules in an ILM policy on page 169

Related information
Using tenant accounts
Implementing S3 client applications
Understanding audit messages

Enabling compliance
Enabling the global Compliance setting allows all S3 tenant accounts to create and manage compliant
buckets. If S3 tenant accounts need to comply with regulatory requirements when saving object data,
you can enable compliance for your entire StorageGRID Webscale system.

Before you begin

• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.

• You must be signed in to the Grid Manager using a supported browser.

• You must have reviewed the compliance workflow, and you must understand the considerations
for compliance.

About this task


Users with the appropriate permissions can create compliant buckets, set and increase the retention
period for bucket objects, specify how objects can be deleted at the end of their retention period, and
optionally place all objects in the bucket under a legal hold or lift a legal hold.
Attention: If you enable this setting, you will not be able to disable it in the future.

Steps

1. Select Configuration > Compliance.


The Global Compliance Settings page appears.

2. Select Enable Compliance.

3. Click Apply.
A confirmation dialog box appears.

4. If you are sure you want to enable compliance for the grid, click OK.
When you click OK:

• If the default rule in the active ILM policy is compliant, compliance is now enabled for the
entire grid and cannot be disabled.

• If the default rule is not compliant, an error appears, indicating that you must create and
activate a new ILM policy that includes a compliant rule as its default rule. Click OK, and
create a new proposed policy, simulate it, and activate it.

Related concepts
Creating, simulating, and activating an ILM policy on page 133

Related tasks
Creating an ILM rule on page 123

Example: Using compliant ILM rules in an ILM policy


You can use the S3 bucket, ILM rules, and ILM policy in this example as a starting point when
defining an ILM policy to meet the object protection and retention requirements for objects in
compliant S3 buckets.
Attention: The following S3 bucket, ILM rules, and ILM policy are examples. There are many
ways to configure compliant ILM rules. Carefully analyze your ILM rules before adding them to
an ILM policy to confirm that they will work as intended to protect content from loss.

S3 bucket for compliance example: bank-records


In this example, an S3 tenant account named Bank of ABC has created a compliant bucket, bank-
records, to store critical bank records. To comply with regulatory requirements, objects in this
bucket must be retained for six years. At the end of that time, objects must be deleted.

Tenant Account Name: Bank of ABC
Bucket Name: bank-records
Bucket Region: us-east-1 (default)
Retention Period: 6 years (2,190 days)
After Retention Period: Objects will be deleted automatically
Legal Hold: Not in effect

ILM rule 1 for compliance example: Erasure Coding profile with bucket
matching
This example ILM rule applies only to the S3 tenant account named Bank of ABC. It matches any
object in the bank-records bucket and then places an erasure-coded copy of the object on Storage
Nodes at three data center sites using a 6+3 Erasure Coding profile. This rule satisfies the
requirements of compliant S3 buckets: erasure-coded copies are kept on Storage Nodes from day 0 to
forever, using Ingest Time as the reference time.

Rule Name: Compliant Rule: EC objects in bank-records bucket - Bank of ABC
Tenant Account: Bank of ABC
Bucket Name: bank-records
Advanced Filtering: Not specified
Reference Time: Ingest Time
Placements: From day 0, store forever
Erasure Coding Profile: Create one erasure-coded copy on Storage Nodes at three data center sites,
using a 6+3 erasure-coding scheme.

ILM rule 2 for compliance example: Non-compliant rule


This example ILM rule stores two replicated object copies on Storage Nodes for one year. After that,
it moves one object copy to Archive Nodes and stores this copy forever. Because this rule uses
Archive Nodes and saves only one object copy from day 365 to forever, it is not compliant and will
not apply to the objects in compliant S3 buckets.

Rule Name: Non-Compliant Rule: One Copy on Archive Nodes
Tenant Accounts: Not specified
Bucket Name: Not specified, but the rule will only apply to non-compliant buckets
Reference Time: Ingest Time
Placements:
• On Day 0, keep two replicated copies on Storage Nodes in Data Center 1 and Data Center 2 for
365 days.
• On Day 365, keep one replicated copy on Archive Nodes forever; temporary copies in Data Center 1.

ILM rule 3 for compliance example: Default rule


This example ILM rule copies object data to storage pools in two data centers. This rule is designed
to be the default rule in the ILM policy. It satisfies the requirements of compliant S3 buckets: two
object copies are kept on Storage Nodes from day 0 to forever, using Ingest Time as the reference
time.

Rule Name: Compliant Rule: Two Copies Two Data Centers
Tenant Account: Not specified
Bucket Name: Not specified
Advanced Filtering: Not specified
Reference Time: Ingest Time
Placements: From Day 0 to forever, keep two replicated copies—one on Storage Nodes in Data Center 1
and one on Storage Nodes in Data Center 2.

ILM policy for compliance example


To create an ILM policy that will effectively protect objects in compliant S3 buckets as well as
objects in non-compliant buckets, you must select ILM rules that satisfy the storage requirements for
both types of objects. Then, you must simulate and activate the proposed policy.

Adding rules to the policy


In this example, the active ILM policy includes three ILM rules, in the following order:

1. A compliant rule that creates erasure-coded copies of the objects in a specific compliant S3
bucket. The EC copies are stored on Storage Nodes from day 0 to forever.

2. A non-compliant rule that creates two replicated object copies on Storage Nodes for a year and
then moves one object copy to Archive Nodes and stores that copy forever. This rule only applies
to non-compliant buckets because it stores only one object copy forever and it uses Archive
Nodes.

3. A compliant rule that creates two replicated object copies on Storage Nodes from day 0 to
forever.

Selecting a default rule


If the global Compliance setting is enabled, the default rule in the active or any proposed ILM policy
must be compliant. In the example, the default rule is the second compliant rule. This rule applies to
any object in any compliant or non-compliant bucket that was not matched by the first two rules.
Note: When the global Compliance setting is enabled, you might see an error message when you
initially select the rules for a proposed ILM policy. The message indicates you must select a
compliant ILM rule to be the default rule. Select the Default radio button for the compliant rule
that you want to be the default, and drag that rule to the appropriate position in the list.

Simulating the proposed policy


After you have added rules in your proposed policy, arranged them, and chosen a default compliant
rule, you should simulate the policy by testing objects from both compliant and non-compliant
buckets. For example, if you simulated the example policy, you would expect test objects to be
evaluated as follows:

• A test object in the bucket bank-records for the Bank of ABC tenant would be matched by the
EC objects compliant rule.

• A test object in any non-compliant bucket for any tenant account would be matched by the non-
compliant rule.

• A test object in a compliant bucket named customer-records for Bank of ABC or any other
tenant would be matched by the default rule. This is because the bucket name does not match
bank-records and the non-compliant rule does not apply to objects in compliant buckets.

Activating the policy


When you are completely satisfied that the new policy protects object data as expected, you can
activate it.

Related concepts
Creating, simulating, and activating an ILM policy on page 133

Resolving consistency errors when updating a bucket's compliance configuration
If a data center site or multiple Storage Nodes at a site become unavailable, you might need to help
S3 tenant users apply changes to a bucket's compliance configuration.

About this task


Tenant users who have created compliant buckets can configure the bucket's compliance settings
from the Tenant Manager, Tenant Management API, or the S3 REST API. For example, a tenant user
might need to increase the retention period or put a bucket under a legal hold.
When a tenant user updates the compliance settings for an S3 bucket, StorageGRID Webscale
attempts to immediately update the bucket's metadata across the grid. If the system is unable to
update the bucket's metadata because a data center site or multiple Storage Nodes are unavailable, it
displays an error message. Specifically:

• Tenant Manager users see an error message indicating that the bucket's compliance settings could not be updated.

• Tenant Management API users and S3 API users receive a response code of 503 Service
Unavailable with similar message text.

To resolve this error, follow these steps.

Steps

1. Attempt to make all Storage Nodes or sites available again as soon as possible.

2. If you are unable to make enough of the Storage Nodes at each site available, contact technical
support, who can help you recover nodes and ensure that compliance changes are consistently
applied across the grid.

3. Once the underlying issue has been resolved, remind the tenant user to retry their compliance
configuration changes.

Related information
Using tenant accounts
Implementing S3 client applications
Recovery and maintenance

Managing disk storage


Storage Nodes provide disk storage capacity and services.

What a Storage Node is


A Storage Node includes the services and processes required to store, move, verify, and retrieve
object data and metadata on disk.

What the LDR service is


Hosted by a Storage Node, the Local Distribution Router (LDR) service handles content transport for
the StorageGRID Webscale system. Content transport encompasses many tasks including data
storage, routing, and request handling. The LDR service does the majority of the StorageGRID
Webscale system’s hard work by handling data transfer loads and data traffic functions.
The LDR service handles the following tasks:

• Queries

• Information Lifecycle Management (ILM) activity

• Object deleting

• Object data storage

• Object data transfers from another LDR service (Storage Node)

• Data storage management

• Protocol interfaces (S3 and Swift)



Queries
LDR queries include queries for object location during retrieve and archive operations. You can
identify the average time that it takes to run a query, the total number of successful queries, and the
total number of queries that failed because of a timeout issue.
You can review query information to monitor the health of the metadata store, which impacts the
system’s ingest and retrieval performance. For example, if the latency for an average query is slow
and the number of failed queries due to timeouts is high, the metadata store might be encountering a
higher load or performing another operation.
You can also view the total number of queries that failed because of consistency failures. Consistency
level failures result from an insufficient number of available metadata stores at the time a query is
performed through the specific LDR service.

ILM Activity
Information Lifecycle Management (ILM) metrics allow you to monitor the rate at which objects are
evaluated for ILM implementation. You can view some of these metrics on the Dashboard.

Object stores
The underlying data storage of an LDR service is divided into a fixed number of object stores (also
known as storage volumes). Each object store is a separate mount point.

The object stores in a Storage Node are identified by a hexadecimal number from 0000 to 000F,
which is known as the volume ID. By default, 3 TB of space is reserved in the first object store
(volume 0) for object metadata in a Cassandra database; any remaining space on that volume is used
for object data. All other object stores are used exclusively for object data, which includes replicated
copies and erasure coded fragments.
To ensure even space usage for replicated copies, object data for a given object is stored to one object
store based on available storage space. When one or more object stores fill to capacity, the remaining
object stores continue to store objects until there is no more room on the Storage Node.
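
A schematic sketch of the placement behavior just described, with invented names and data; the
actual selection logic inside the LDR service is internal and more involved than this.

def choose_object_store(object_stores, object_size):
    """Pick one object store (volume) for a replicated copy.

    `object_stores` maps volume IDs ("0000" through "000F") to free bytes.
    Schematic only: one volume per copy, chosen by available space, and
    full volumes are skipped until the whole Storage Node is full.
    """
    candidates = {vol: free for vol, free in object_stores.items()
                  if free >= object_size}
    if not candidates:
        return None  # no room left anywhere on this Storage Node
    # Prefer the volume with the most available space to keep usage even.
    return max(candidates, key=candidates.get)

volumes = {"0000": 50_000_000, "0001": 900_000_000, "0002": 0}
print(choose_object_store(volumes, 10_000_000))  # "0001"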

What the DDS service is


Hosted by a Storage Node, the Distributed Data Store (DDS) service interfaces with the Cassandra
database to manage the object metadata stored in the StorageGRID Webscale system.
The DDS service also manages the mapping of S3 and Swift objects to the unique “content handles”
(UUIDs) that the StorageGRID Webscale system assigns to each ingested object.

Object counts
The DDS service lists the total number of objects ingested into the StorageGRID Webscale system as
well as the total number of objects ingested through each of the system’s supported interfaces (S3 or
Swift).

Because object metadata synchronization occurs over time, object count attributes (see DDS > Data
Store > Overview > Main) can differ between DDS services. Eventually, all metadata stores will
synchronize and counts should become the same.

Queries
You can identify the average time that it takes to run a query against the metadata store through the
specific DDS service, the total number of successful queries, and the total number of queries that
failed because of a timeout issue.
You might want to review query information to monitor the health of the metadata store, Cassandra,
which impacts the system’s ingest and retrieval performance. For example, if the latency for an
average query is slow and the number of failed queries due to timeouts is high, the metadata store
might be encountering a higher load or performing another operation.
You can also view the total number of queries that failed because of consistency failures. Consistency
level failures result from an insufficient number of available metadata stores at the time a query is
performed through the specific DDS service.

Consistency guarantees and controls


StorageGRID Webscale guarantees read-after-write consistency for newly created objects. Any GET
operation following a successfully completed PUT operation will be able to read the newly written
data. Overwrites of existing objects, metadata updates, and deletes remain eventually consistent.
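
You can exercise the read-after-write guarantee for new objects with any S3 client. A minimal
sketch with Python and boto3, reusing the placeholder endpoint and credentials from the earlier
examples:

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid.example.com:8082",  # placeholder
    aws_access_key_id="TENANT_ACCESS_KEY",
    aws_secret_access_key="TENANT_SECRET_KEY",
)

# A GET that follows a successful PUT of a new key returns the new data.
s3.put_object(Bucket="photos", Key="new-object.txt", Body=b"hello")
body = s3.get_object(Bucket="photos", Key="new-object.txt")["Body"].read()
assert body == b"hello"

# Overwrites, metadata updates, and deletes are only eventually
# consistent, so an immediate re-read after an overwrite might still
# return the previous data.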

Metadata protection
Object metadata is information related to or a description of an object; for example, object
modification time, or storage location. StorageGRID Webscale stores object metadata in a Cassandra
database, which interfaces with the DDS service.
To ensure redundancy and thus protection against loss, three copies of object metadata are
maintained. The copies are load balanced across all Storage Nodes at each site. This replication is
non-configurable and performed automatically.

What the nodetool repair operation is


Periodically, the StorageGRID Webscale system runs the nodetool repair operation on Storage Nodes,
checking for and repairing metadata replication inconsistencies that can occur over time.
Nodetool repair runs every 12 to 14 days at random times on different Storage Nodes, so that it does
not run on every Storage Node at the same time. The nodetool repair operation is a seamless activity
that occurs in the background of normal system operations.

What the CMS service is


The Content Management System (CMS) service manages objects to ensure that the StorageGRID
Webscale system’s information lifecycle management (ILM) policy is satisfied.
The CMS service carries out the operations of the active ILM policy’s ILM rules, determining how
object data is protected over time. For information about the role of the CMS service when content is
ingested and copied, see “Storage Node overview” and “Object management”.

Related tasks
Configuring ILM rules on page 113

What the ADC service is


The Administrative Domain Controller (ADC) service authenticates grid nodes and their connections
with each other. The ADC service is hosted on each of the first three Storage Nodes at a site.
The ADC service maintains topology information including the location and availability of services.
When a grid node requires information from another grid node or an action to be performed by
another grid node, it contacts an ADC service to find the best grid node to process its request. In
addition, the ADC service retains a copy of the StorageGRID Webscale deployment’s configuration
bundles, allowing any grid node to retrieve current configuration information.
To facilitate distributed and islanded operations, each ADC service synchronizes certificates,
configuration bundles, and information about services and topology with the other ADC services in
the StorageGRID Webscale system.
In general, all grid nodes maintain a connection to at least one ADC service. This ensures that grid
nodes are always accessing the latest information. When grid nodes connect, they cache other grid
nodes’ certificates, enabling systems to continue functioning with known grid nodes even when an
ADC service is unavailable. New grid nodes can only establish connections by using an ADC
service.
The connection of each grid node lets the ADC service gather topology information. This grid node
information includes the CPU load, available disk space (if it has storage), supported services, and
the grid node’s site ID. Other services ask the ADC service for topology information through
topology queries. The ADC service responds to each query with the latest information received from
the StorageGRID Webscale system.

Managing Storage Nodes


Managing Storage Nodes entails monitoring the amount of usable space on each node, using
watermark settings, and applying Storage Node configuration settings.

What watermarks are


You use watermark settings to globally manage a Storage Node’s usable storage space. Watermark
settings trigger alarms that assist you in monitoring available storage and determining when to add
Storage Nodes.
A Storage Node becomes read-only when all of its object stores reach the Storage Volume Hard
Read-Only Watermark. If available space falls below this configured watermark amount, a Notice
alarm is triggered for the Storage Status (SSTS) attribute. The alarm notifies you to manage storage
proactively so that you add capacity only when necessary.
You can view the StorageGRID Webscale system’s current watermark values at any time. Go to
Configuration > Storage Options > Overview.

Watermark related attributes


Storage Volume Hard Read-Only Watermark
Code name: VROM. Default setting: 5 GB.
Indicates when a Storage Node transitions to hard read-only mode. Hard read-only mode means that
the Storage Node is read-only and no longer accepts write requests. The Storage Volume Hard
Read-Only Watermark value is calculated against the Total Space value for the Storage Node, but
measured against the Total Usable Space value for the Storage Node. When the value of Total
Usable Space falls below the value of Storage Volume Hard Read-Only Watermark, the Storage
Node transitions to hard read-only mode.
The Storage Volume Hard Read-Only Watermark value must be less than the value for the Storage
Volume Soft Read-Only Watermark.

Storage Volume Soft Read-Only Watermark
Code name: VHWM. Default setting: 10 GB.
Indicates when a Storage Node transitions to soft read-only mode. Soft read-only mode means that
the Storage Node advertises read-only services to the rest of the StorageGRID Webscale system,
but fulfills all pending write requests. The Storage Volume Soft Read-Only Watermark value is
calculated against the Total Space value for the Storage Node, but measured against the Total
Usable Space value for the Storage Node. When the value of Total Usable Space falls below the
value of Storage Volume Soft Read-Only Watermark, the Storage Node transitions to soft
read-only mode:

• The Storage State – Current (SSCR) changes to Read-Only. If Storage State – Desired is set to
Online, Storage Status (SSTS) changes to Insufficient Free Space and a Notice alarm is triggered.

• An alarm for Total Usable Space (Percent) (SAVP) can be triggered, depending on the
relationship between the watermark setting (in bytes) and the alarm settings (in percent).

The Storage Node is writable again if Total Usable Space (STAS) becomes greater than the Storage
Volume Soft Read-Only Watermark.

Storage Volume Read-Write Watermark
Code name: VLWM. Default setting: 30 GB.
Indicates when a Storage Node that has transitioned to read-only mode is allowed to become
read-write again, specifically when one of the constituent volumes determines that its available
space exceeds this setting. The Storage Volume Read-Write Watermark value must be greater than
the value for the Storage Volume Soft Read-Only Watermark.
The value for a volume's available space is located on the Support > Grid Topology > storage
node > LDR > Storage > Overview page.

Metadata Reserved Space
Code name: CAWM. Default setting: 3 TB.
The amount of space reserved on storage volume 0 for metadata storage. Metadata Reserved Space
is subdivided into space for object metadata (Metadata Allowed Space, or CEMS) and space for
essential database operations, such as compaction and repair.
Note: If the storage capacity of volume 0 is less than 500 GB (non-production use only), 10% of
the storage volume's capacity is reserved for metadata.
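The interaction between Total Usable Space and the two read-only watermarks can be summarized
with a small sketch. This is illustrative pseudologic only, not StorageGRID code; it uses the default
watermark values, treats GB as 10**9 bytes for simplicity, and omits the separate Storage Volume
Read-Write Watermark hysteresis that governs when a read-only node becomes writable again.

```python
# Illustrative only: simplified watermark comparisons for a Storage Node.
# Default values from the table above; GB treated as 10**9 bytes.
HARD_RO_WATERMARK = 5 * 10**9    # Storage Volume Hard Read-Only Watermark (VROM)
SOFT_RO_WATERMARK = 10 * 10**9   # Storage Volume Soft Read-Only Watermark (VHWM)

def storage_mode(total_usable_space: int) -> str:
    """Return the mode implied by comparing Total Usable Space (STAS)
    against the watermarks. Real transitions also involve the Storage
    Volume Read-Write Watermark (VLWM), which is omitted here."""
    if total_usable_space < HARD_RO_WATERMARK:
        return "hard read-only"   # write requests are no longer accepted
    if total_usable_space < SOFT_RO_WATERMARK:
        return "soft read-only"   # advertises read-only, finishes pending writes
    return "read-write"

print(storage_mode(7 * 10**9))   # -> soft read-only
```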

Related concepts
Managing full Storage Nodes on page 189

Storage Node configuration settings


Depending on your requirements, there are several configuration settings for Storage Nodes that you
can apply.
This table summarizes Storage Node configuration settings.

LDR

• HTTP State (HSTE): Set the LDR service to one of:

  ◦ Offline: No operations are allowed, and any client application that attempts to open an HTTP
  session to the LDR service receives an error message. Active sessions are gracefully closed.

  ◦ Online: Operation continues normally.

• Auto-Start HTTP (HTAS): Enable the HTTP component when the LDR service is restarted. If not
selected, the HTTP interface remains Offline until explicitly enabled. If Auto-Start HTTP is
selected, the state of the system on restart depends on the state of the LDR > Storage component.
If the LDR > Storage component is Read-only on restart, the HTTP interface is also Read-only. If
the LDR > Storage component is Online, then HTTP is also Online. Otherwise, the HTTP
interface remains in the Offline state.

LDR > Data Store

• Reset Lost Objects Count (RCOR): Reset to zero the counter for the number of lost objects on
this service.

LDR > Storage

• Storage State – Desired (SSDS): A user-configurable setting for the desired state of the storage
component. The LDR service reads this value and attempts to match the status indicated by this
attribute. The value is persistent across restarts. For example, you can use this setting to force
storage to become read-only even when there is ample available storage space. This can be useful
for troubleshooting. The attribute can take one of the following values:

  ◦ Offline: When the desired state is Offline, the LDR service takes the LDR > Storage
  component offline.

  ◦ Read-only: When the desired state is Read-only, the LDR service moves the storage state to
  read-only and stops accepting new content. Note that content might continue to be saved to
  the Storage Node for a short time until open sessions are closed.

  ◦ Online: Leave the value at Online during normal system operations. The Storage State –
  Current of the storage component will be dynamically set by the service based on the
  condition of the LDR service, such as the amount of available object storage space. If space is
  low, the component becomes Read-only.

• Health Check Timeout (SHCT): The time limit in seconds within which a health check test must
complete in order for a storage volume to be considered healthy. Only change this value when
directed to do so by Support.

LDR > Verification

• Reset Missing Objects Count (VCMI): Resets the count of Missing Objects Detected (OMIS).
Use only after foreground verification completes. Missing replicated object data is restored
automatically by the StorageGRID Webscale system.

• Verify (FVOV): Select object stores on which to perform foreground verification.

• Verification Rate (VPRI): Set the priority rate at which background verification takes place. See
information on configuring the background verification rate.

• Reset Corrupt Objects Count (VCCR): Reset the counter for corrupt replicated object data found
during background verification. This option can be used to clear the Corrupt Objects Detected
(OCOR) alarm condition. For details, see the information about troubleshooting.

LDR > Erasure Coding

• Reset Writes Failure Count (RSWF): Reset to zero the counter for write failures of erasure coded
object data to the Storage Node.

• Reset Reads Failure Count (RSRF): Reset to zero the counter for read failures of erasure coded
object data from the Storage Node.

• Reset Deletes Failure Count (RSDF): Reset to zero the counter for delete failures of erasure
coded object data from the Storage Node.

• Reset Corrupt Copies Detected Count (RSCC): Reset to zero the counter for the number of
corrupt copies of erasure coded object data on the Storage Node.

• Reset Corrupt Fragments Detected Count (RSCD): Reset to zero the counter for corrupt
fragments of erasure coded object data on the Storage Node.

• Reset Missing Fragments Detected Count (RSMD): Reset to zero the counter for missing
fragments of erasure coded object data on the Storage Node. Use only after foreground
verification completes.

LDR > Replication

• Reset Inbound Replication Failure Count (RICR): Reset to zero the counter for inbound
replication failures. This can be used to clear the RIRF (Inbound Replications – Failed) alarm.

• Reset Outbound Replication Failure Count (ROCR): Reset to zero the counter for outbound
replication failures. This can be used to clear the RORF (Outbound Replications – Failed) alarm.

• Disable Inbound Replication (DSIR): Select to disable inbound replication as part of a
maintenance or testing procedure. Leave unchecked during normal operation. When inbound
replication is disabled, objects can be retrieved from the Storage Node for copying to other
locations in the StorageGRID Webscale system, but objects cannot be copied to this Storage
Node from other locations: the LDR service is read-only.

• Disable Outbound Replication (DSOR): Select to disable outbound replication (including content
requests for HTTP retrievals) as part of a maintenance or testing procedure. Leave unchecked
during normal operation. When outbound replication is disabled, objects can be copied to this
Storage Node, but objects cannot be retrieved from the Storage Node to be copied to other
locations in the StorageGRID Webscale system. The LDR service is write-only.

LDR > HTTP

• Reset HTTP Counts (LHAC): Reset to zero the counter for all HTTP transactions.

Related tasks
Configuring the background verification rate on page 196

Related information
Troubleshooting StorageGRID Webscale

Managing full Storage Nodes


As Storage Nodes reach capacity, you must expand the StorageGRID Webscale system by adding
new storage. There are two options available when considering how to increase storage capacity:
adding Storage Nodes and adding storage volumes.

Adding Storage Nodes


You can increase storage capacity by adding Storage Nodes. When adding storage, carefully
consider the currently active ILM rules and capacity requirements. See the instructions for
expanding a StorageGRID Webscale grid.

Adding storage volumes


Each Storage Node supports a maximum of 16 storage volumes. If a Storage Node includes fewer
than 16 storage volumes, you can increase its capacity by adding storage volumes up to the
maximum of 16.

Related information
Expanding a StorageGRID Webscale grid

Setting Grid Options


Grid Options allow you to manage the settings for stored objects, including stored object encryption,
stored object hashing, and stored object compression. You can also set the Prevent Client Modify and
the HTTP options.

Choices
• Configuring stored object encryption on page 189
• Configuring stored object hashing on page 190
• Configuring stored object compression on page 191
• Enabling Prevent Client Modify on page 192
• Enabling HTTP for client communications on page 193

Configuring stored object encryption


Stored object encryption enables the encryption of stored object data so that, if an object store is
compromised, data cannot be retrieved in a readable form. By default, objects are not encrypted.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Objects can be encrypted using either the AES-128 or the AES-256 encryption algorithm. Stored
object encryption enables the encryption of all object data ingested through S3 or Swift. If the
option is disabled, currently encrypted objects remain encrypted. For S3 objects, the Stored Object
Encryption setting can be overridden by the x-amz-server-side-encryption header. If you use the
x-amz-server-side-encryption header, you must specify the AES-256 encryption algorithm in the
request.
Note: If you change this setting, it will take about one minute for the new setting to be applied.
The configured value is cached for performance and scaling.
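For S3 client applications, a request that sets this header might look like the following minimal
sketch. The endpoint, credentials, bucket, and key are hypothetical, and the example assumes the
boto3 library, which sends the x-amz-server-side-encryption header through its
ServerSideEncryption parameter:

```python
import boto3

# Hypothetical StorageGRID S3 endpoint and credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid.example.com:8082",
    aws_access_key_id="EXAMPLE_ACCESS_KEY",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

# Overrides the Stored Object Encryption setting for this object;
# AES-256 is the only algorithm accepted with this header.
s3.put_object(
    Bucket="my-bucket",
    Key="report.pdf",
    Body=b"example object data",
    ServerSideEncryption="AES256",
)
```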

Steps

1. Select Configuration > Grid Options.

2. From the Grid Options menu, select Configuration.

3. Change Stored Object Encryption to Disabled, AES-256, or AES-128.

4. Click Apply Changes.

Configuring stored object hashing


The Stored Object Hashing option specifies the hashing algorithm used by the LDR service to hash
data when new content is stored. These hashes are verified during retrieval and verification to protect
the integrity of data.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


By default, object data is hashed using the SHA‐1 algorithm. Object data can also be hashed using
the SHA‐256 algorithm.

Note: If you change this setting, it will take about one minute for the new setting to be applied.
The configured value is cached for performance and scaling.
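For reference, the output of the two algorithms can be compared with Python's standard hashlib
module. This is illustrative only; the LDR service computes and verifies these hashes internally:

```python
import hashlib

data = b"example object data"

# Digest of the same content under each supported algorithm.
print("SHA-1:  ", hashlib.sha1(data).hexdigest())
print("SHA-256:", hashlib.sha256(data).hexdigest())
```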

Steps

1. Select Configuration > Grid Options.

2. From the Grid Options menu, select Configuration.

3. Change Stored Object Hashing to SHA-256 or SHA-1.

4. Click Apply Changes.

Configuring stored object compression


You can use the Stored Object Compression grid option to reduce the size of objects stored in
StorageGRID Webscale, so that objects consume less storage.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


The Stored Object Compression grid option is disabled by default. If you enable this option,
StorageGRID Webscale attempts to compress each object when saving it, using lossless compression.
Note: If you change this setting, it will take about one minute for the new setting to be applied.
The configured value is cached for performance and scaling.

Before enabling this setting, be aware of the following:



• Applications that save objects to StorageGRID Webscale might compress objects before saving
them. If a client application has already compressed an object before saving it to StorageGRID
Webscale, enabling Stored Object Compression will not further reduce an object’s size.

• If the Stored Object Compression grid option is enabled, S3 and Swift client applications should
avoid performing GET Object operations that specify a range of bytes be returned. These “range
read” operations are inefficient because StorageGRID Webscale must effectively uncompress the
objects to access the requested bytes. GET Object operations that request a small range of bytes
from a very large object are especially inefficient; for example, it is inefficient to read a 10 MB
range from a 50 GB compressed object.
If ranges are read from compressed objects, client requests can time out.
Note: If you need to compress objects and your client application must use range reads,
increase the read timeout for the application, as in the sketch that follows.
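A range read with an increased read timeout might look like the following minimal sketch,
assuming the boto3 library; the endpoint, bucket, key, and timeout value are hypothetical:

```python
import boto3
from botocore.config import Config

# Raise the read timeout because StorageGRID must effectively
# uncompress the object to serve the requested byte range.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storagegrid.example.com:8082",
    config=Config(read_timeout=300),
)

# Request only the first 10 MB of a large compressed object.
response = s3.get_object(
    Bucket="my-bucket",
    Key="large-object.bin",
    Range="bytes=0-10485759",
)
chunk = response["Body"].read()
```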

Steps

1. Select Configuration > Grid Options.

2. From the Grid Options menu, select Configuration.

3. Change Stored Object Compression to Enabled.

4. Click Apply Changes.

Enabling Prevent Client Modify


You can set the Prevent Client Modify option to Enabled to override the permissions defined for
HTTP profiles and to deny specific HTTP client operations.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


When the Prevent Client Modify option is enabled, the following requests are denied:

• S3 REST API
◦ Delete Bucket requests

◦ Any requests to modify an existing object's data, user-defined metadata, or S3 object tagging
Note: This setting does not apply to buckets with versioning enabled. Versioning already
prevents modifications to object data, user-defined metadata, and object tagging.

• Swift REST API


◦ Delete Container requests
◦ Requests to modify any existing object. For example, the following operations are denied: Put
Overwrite, Delete, Metadata Update, and so on.

Prevent Client Modify is a system-wide setting; the sketch below shows a request that would be denied.
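For example, with the option enabled, an S3 Delete Bucket request such as the following is denied.
This is a minimal sketch with hypothetical names, assuming the boto3 library; the exact error
returned is not shown here:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", endpoint_url="https://storagegrid.example.com:8082")

try:
    # Denied while Prevent Client Modify is enabled.
    s3.delete_bucket(Bucket="my-bucket")
except ClientError as err:
    print("Request denied:", err)
```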

Steps

1. Select Configuration > Grid Options.

2. From the Grid Options menu, select Configuration.

3. Change Prevent Client Modify to Enabled.

4. Click Apply Changes.

Enabling HTTP for client communications


By default, client applications use HTTPS for all connections with the Storage Nodes and API
Gateway Nodes. However, you can set the HTTP option to Enabled if you want to use HTTP
communications between S3 and Swift clients and StorageGRID Webscale in addition to HTTPS
communications. For example, you might use HTTP when testing a non-production grid.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


When the HTTP option is enabled, either HTTP or HTTPS can be used for communications between
S3 and Swift clients and StorageGRID Webscale. HTTP communications require the use of different
ports between the S3/Swift clients and the API Gateway Nodes and Storage Nodes. See the
instructions for implementing S3 and Swift client applications for more information.
Attention: HTTP mode is intended for use in testing and debugging environments. Be careful
when enabling HTTP for a production grid since requests will be sent unencrypted.
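A client connection then differs only in the endpoint scheme and port, as in this minimal sketch
with hypothetical addresses and ports (see the S3 and Swift implementation instructions for the
actual port numbers):

```python
import boto3

# HTTPS: the default for production use.
https_client = boto3.client("s3", endpoint_url="https://gateway.example.com:8082")

# HTTP: testing and debugging only; requests are sent unencrypted.
http_client = boto3.client("s3", endpoint_url="http://gateway.example.com:8084")
```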

Steps

1. Select Configuration > Grid Options.

2. From the Grid Options menu, select Configuration.

3. Change HTTP to Enabled.

4. Click Apply Changes.

Related information
Implementing S3 client applications
Implementing Swift client applications

What object segmentation is


Object segmentation is the process of splitting an object into a collection of smaller fixed-size
objects to optimize storage and resource usage for large objects. S3 multi-part upload also
creates segmented objects, with an object representing each part.
When an object is ingested into the StorageGRID Webscale system, the LDR service splits the object
into segments, and creates a segment container that lists the header information of all segments as
content.

If your StorageGRID Webscale system includes an Archive Node whose Target Type is Cloud
Tiering – Simple Storage Service and the targeted archival storage system is Amazon Web Services
(AWS), the Maximum Segment Size must be less than or equal to 4.5 GiB (4,831,838,208 bytes).
This upper limit ensures that the AWS PUT limit of 5 GB is not exceeded. Requests to AWS
that exceed this value fail.
On retrieval of a segment container, the LDR service assembles the original object from its segments
and returns the object to the client.
The container and segments are not necessarily stored on the same Storage Node; they can be
stored on any Storage Node.
Each segment is treated by the StorageGRID Webscale system independently and contributes to the
count of attributes such as Managed Objects and Stored Objects. For example, if an object stored to
the StorageGRID Webscale system is split into two segments, the value of Managed Objects
increases by three after the ingest is complete, as follows:
segment container + segment 1 + segment 2 = three stored objects
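A minimal sketch of this accounting, assuming a simple ceiling split of the object into fixed-size
segments:

```python
import math

def stored_object_count(object_size: int, max_segment_size: int) -> int:
    """Segment container plus one stored object per segment."""
    if object_size <= max_segment_size:
        return 1  # small objects are not segmented
    segments = math.ceil(object_size / max_segment_size)
    return 1 + segments

# A 2.5 GB object with a 1 GB maximum segment size yields
# 3 segments + 1 segment container = 4 stored objects.
print(stored_object_count(2_500_000_000, 1_000_000_000))  # -> 4
```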
You can improve performance when handling large objects by ensuring that:

• Each Gateway and Storage Node has sufficient network bandwidth for the throughput required.
For example, configure separate Grid and Client Networks on 10 Gbps Ethernet interfaces.

• Enough Gateway and Storage Nodes are deployed for the throughput required.

• Each Storage Node has sufficient disk IO performance for the throughput required.

Verifying object integrity


The StorageGRID Webscale system verifies the integrity of object data on Storage Nodes, checking
for both corrupt and missing objects.
There are two verification processes: background verification and foreground verification. They work
together to ensure data integrity. Background verification runs automatically, and continuously
checks the correctness of object data. Foreground verification can be triggered by a user, to more
quickly verify the existence (although not the correctness) of objects.

Related concepts
What background verification is on page 196
What foreground verification is on page 198

What background verification is


The background verification process automatically and continuously checks Storage Nodes for
corrupt copies of object data, and automatically attempts to repair any issues that it finds.
Background verification checks the integrity of both replicated object data and erasure coded object
data.
If background verification detects that a copy of replicated object data is corrupt, that corrupt copy is
removed from its location and quarantined elsewhere on the Storage Node. The Storage Node's LDR
service then sends a request to the DDS service to create a new uncorrupted copy. The DDS service
fulfills this request by running an existing copy through an ILM evaluation, which will determine that
the current ILM policy is no longer being met for this object because the corrupt object no longer
exists at the expected location. A new copy is generated and placed to satisfy the system's active ILM
policy, which may not be in the same location that the corrupt copy was stored. Corrupt object data is
quarantined rather than deleted from the system, so that it can still be accessed. For more information
on accessing quarantined object data, contact technical support.
If background verification detects that a fragment of erasure coded object data is corrupt, the
StorageGRID Webscale system automatically attempts to rebuild the missing fragment in place on
the same Storage Node using the remaining fragments of that copy. If the corrupted fragment cannot
be rebuilt, the Corrupt Copies Detected (ECOR) attribute is incremented by one, and an attempt is
made to retrieve another copy of the object. If retrieval is successful, an ILM evaluation is performed
to create a replacement copy of the erasure coded object data.
If background verification cannot replace a corrupted object because it cannot locate another copy,
the LOST (Lost Objects) alarm is triggered.
Background verification runs at a continuous rate that is designed not to interfere with ordinary
system activities. Background verification cannot be stopped. However, you can increase the
background verification rate to more quickly verify the contents of a Storage Node if you suspect a
problem.
Background verification does not run on Archive Nodes.

Configuring the background verification rate


You can change the rate at which background verification checks replicated object data on a Storage
Node if you have concerns about data integrity.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.



• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


You can change the Verification Rate for background verification on a Storage Node:

• Adaptive: Default setting. The task is designed to verify at a maximum of 4 MB/s or 10 objects/s
(whichever is exceeded first).

• High: Storage verification proceeds quickly, at a rate that can slow ordinary system activities.
Use the High verification rate only when you suspect that a hardware or software fault might have
corrupted object data. After the High priority background verification completes, the Verification
Rate automatically resets to Adaptive.

Steps

1. Select Support > Grid Topology.

2. Select Storage Node > LDR > Verification.

3. Click Configuration > Main.

4. Under Background Verification, select Verification Rate > High or Verification Rate >
Adaptive.

Note: Setting the Verification Rate to High triggers a Notice level alarm for VPRI (Verification
Rate).

5. Click Apply Changes.

6. Monitor the results of background verification:



a. Go to LDR > Verification > Overview > Main and monitor the attribute Corrupt Objects
Detected (OCOR).
If background verification finds corrupt replicated object data, the attribute Corrupt Objects
Detected is incremented. The LDR service recovers by quarantining the corrupt object data
and sending a message to the DDS service to create a new copy of the object data. The new
copy can be made anywhere in the StorageGRID Webscale system that satisfies the active
ILM policy.

b. Go to LDR > Erasure Coding > Overview > Main and monitor the attribute Corrupt
Fragments Detected (ECCD).
If background verification finds corrupt fragments of erasure coded object data, the attribute
Corrupt Fragments Detected is incremented. The LDR service recovers by rebuilding the
corrupt fragment in place on the same Storage Node.

7. If corrupt replicated object data is found, contact technical support to clear the quarantined copies
from the StorageGRID Webscale system and determine the root cause of the corruption.

What foreground verification is


Foreground verification is a user-initiated process that checks if all expected object data exists on a
Storage Node. Foreground verification is used to verify the integrity of a storage device.
Foreground verification is a faster alternative to background verification that checks the existence,
but not the integrity, of object data on a Storage Node. If foreground verification finds that many
objects are missing, this might indicate an issue with all or part of a storage device associated with
the Storage Node. If foreground verification finds issues when verifying erasure coded objects, you
must perform a recovery procedure for any affected storage volumes.
Foreground verification checks both replicated object data and erasure coded object data. If
foreground verification finds a missing copy of object data, it automatically attempts to replace it.
If a copy of replicated object data is found to be missing, the StorageGRID Webscale system
automatically attempts to replace it from copies stored elsewhere in the system. The Storage Node
runs an existing copy through an ILM evaluation, which will determine that the current ILM policy is
no longer being met for this object because the missing object no longer exists at the expected
location. A new copy is generated and placed to satisfy the system's active ILM policy. This new
copy may not be placed in the same location that the missing copy was stored.
If a fragment of a copy of an erasure coded object is found to be missing, the StorageGRID Webscale
system automatically attempts to rebuild the missing fragment in place on the same Storage Node
using the remaining fragments. If the missing fragment cannot be rebuilt, the Corrupt Copies
Detected (ECOR) attribute is incremented by one. ILM then attempts to find another copy of the
object which it can use to generate a new erasure coded copy.
If no other copies of a missing replicated object or a corrupted erasure coded object can be found in
the grid, the LOST (Lost Objects) alarm is triggered.
If foreground verification identifies an issue with erasure coding on a storage volume, the foreground
verification grid task pauses with an error message that identifies the volume that is affected.

Running foreground verification


Foreground verification enables you to verify the existence of data on a Storage Node. Missing object
data might indicate that an issue exists with the underlying storage device.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You have ensured that the following grid tasks are not running:

◦ Grid Expansion: Add Server (GEXP), when adding a Storage Node

◦ Storage Node Decommissioning (LDCM) on the same Storage Node

If these grid tasks are running, wait for them to complete or release their lock, or abort them as
appropriate.

• You have ensured that the storage is online. (Select Support > Grid Topology. Then, select
Storage Node > LDR > Storage > Overview > Main. Ensure that Storage State - Current is
Online.)
• You have ensured that the following recovery procedures are not running on the same Storage
Node:

◦ Recovery of a failed storage volume

◦ Recovery of a Storage Node with a failed system drive

Foreground verification does not provide useful information while recovery procedures are in
progress.

About this task


Foreground verification checks for both missing replicated object data and missing erasure coded
object data:

• If foreground verification finds large amounts of missing object data, there is likely an issue with
the Storage Node's storage that needs to be investigated and addressed.

• If foreground verification finds a serious storage error associated with erasure coded data, it will
notify you. You must perform storage volume recovery to repair the error.

You can configure foreground verification to check all of a Storage Node's object stores or only
specific object stores.
If foreground verification finds missing object data, the StorageGRID Webscale system attempts to
replace it. If a replacement copy cannot be made, the LOST (Lost Objects) alarm might be triggered.
Foreground verification generates an LDR Foreground Verification grid task that, depending on the
number of objects stored on a Storage Node, can take days or weeks to complete. It is possible to
select multiple Storage Nodes at the same time; however, these grid tasks are not run simultaneously.
Instead, they are queued and run one after the other until completion. When foreground verification is
in progress on a Storage Node, you cannot start another foreground verification task on that same
Storage Node even though the option to verify additional volumes might appear to be available for
the Storage Node.
If a Storage Node other than the one where foreground verification is being run goes offline, the grid
task continues to run until the % Complete attribute reaches 99.99 percent. The % Complete
attribute then falls back to 50 percent and waits for the Storage Node to return to online status. When
the Storage Node's state returns to online, the LDR Foreground Verification grid task continues until
it completes.

Steps

1. Select Storage Node > LDR > Verification.

2. Click Configuration > Main.

3. Under Foreground Verification, select the check box for each storage volume ID you want to
verify.

4. Click Apply Changes.


Wait until the page auto-refreshes and reloads before you leave the page. Once refreshed, object
stores become unavailable for selection on that Storage Node.
An LDR Foreground Verification grid task is generated and runs until it completes, pauses, or is
aborted.

5. Monitor missing objects or missing fragments:

a. Select Storage Node > LDR > Verification.

b. On the Overview tab under Verification Results, note the value of Missing Objects
Detected.
If the count for the attribute Missing Objects Detected is large (for example, hundreds of
missing objects), there is likely an issue with the Storage Node's storage. In this case, cancel
foreground verification by aborting the Foreground Verification grid task, resolve the storage
issue, and then rerun foreground verification for the Storage Node.

c. Select Storage Node > LDR > Erasure Coding.

d. On the Overview tab under Verification Results, note the value of Missing Fragments
Detected.
If the count for the attribute Missing Fragments Detected is large (for example, hundreds of
missing fragments), there is likely an issue with the Storage Node's storage. In this case,
cancel foreground verification by aborting the Foreground Verification grid task, resolve the
storage issue, and then rerun foreground verification for the Storage Node.

If foreground verification does not detect a significant number of missing replicated object copies
or a significant number of missing fragments, then the storage is operating normally.

6. Monitor the completion of the foreground verification grid task:



a. Select Support > Grid Topology. Then select site > Admin Node > CMN > Grid Task >
Overview > Main.

b. Verify that the foreground verification grid task is progressing without errors.
Note: A notice-level alarm is triggered on grid task status (SCAS) if the foreground
verification grid task pauses.

c. If the grid task pauses with a critical storage error, recover the affected volume and
then run foreground verification on the remaining volumes to check for additional errors.
Attention: If the foreground verification grid task pauses with the message
Encountered a critical storage error in volume volID,
you must perform the procedure for recovering a failed storage volume. See the recovery
and maintenance instructions.

After you finish


If you still have concerns about data integrity, go to LDR > Verification > Configuration > Main
and increase the background Verification Rate. Background verification checks the correctness of all
stored object data and repairs any issues that it finds. Finding and repairing potential issues as
quickly as possible reduces the risk of data loss.

Related information
Recovery and maintenance

How load balancing works


To balance ingest and retrieval workloads, you can optionally deploy the StorageGRID Webscale
system with API Gateway Nodes or integrate a third-party load balancer.

API Gateway Node


The API Gateway Node provides load balancing functionality to the StorageGRID Webscale system
and distributes the workload when multiple client applications perform ingest and retrieval
operations.
The Connection Load Balancer (CLB) service directs incoming requests to the optimal LDR service,
based on availability and system load. When the optimal LDR service is chosen, the CLB service
establishes an outgoing connection and forwards the traffic to the chosen grid node.
HTTP connections between client applications and the StorageGRID Webscale system pass through
the CLB service, which acts as a proxy, unless the client application is configured to connect
directly to an LDR service. The CLB service operates as a connection pipeline between the client
application and an LDR service.

Managing archival storage


Optionally, each of your StorageGRID Webscale system's data center sites can be deployed with an
Archive Node, which allows you to connect to a targeted external archival storage system.

What an Archive Node is


The Archive Node provides an interface through which you can target an external archival storage
system for the long term storage of object data. The Archive Node also monitors this connection and
the transfer of object data between the StorageGRID Webscale system and the targeted external
archival storage system.

Object data that cannot be deleted, but is not regularly accessed, can at any time be moved off of a
Storage Node's spinning disks and onto external archival storage such as the cloud or tape. This
archiving of object data is accomplished through the configuration of a data center site's Archive
Node and then the configuration of ILM rules where this Archive Node is selected as the "target" for
content placement instructions. The Archive Node does not manage archived object data itself; this is
achieved by the external archive device.
Note: Object metadata is not archived, but remains on Storage Nodes.

What the ARC service is


The Archive Node's Archive (ARC) service provides the management interface you can use to
configure connections to external archival storage such as the cloud through the S3 API or tape
through TSM middleware.
It is the ARC service that interacts with an external archival storage system, sending object data for
near-line storage and performing retrievals when a client application requests an archived object.
When a client application requests an archived object, a Storage Node requests the object data from
the ARC service. The ARC service makes a request to the external archival storage system, which
retrieves the requested object data and sends it to the ARC service. The ARC service verifies the
object data and forwards it to the Storage Node, which in turn returns the object to the requesting
client application.
Requests for object data archived to tape through TSM middleware are managed for efficiency of
retrievals. Requests can be ordered so that objects stored in sequential order on tape are requested in
that same sequential order. Requests are then queued for submission to the storage device. Depending
upon the archival device, multiple requests for objects on different volumes can be processed
simultaneously.

About supported archive targets


When you configure the Archive Node to connect with an external archive, you must select the target
type.
The StorageGRID Webscale system supports the archiving of object data to the cloud through an S3
interface or to tape through TSM middleware.

Archiving to the cloud through the S3 API


You can configure an Archive Node to target any external archival storage system that is capable of
interfacing with the StorageGRID Webscale system through the S3 API.
The Archive Node's ARC service can be configured to connect directly to Amazon Web Services
(AWS) or to any other system that can interface to the StorageGRID Webscale system through the S3
API; for example, another instance of the StorageGRID Webscale system.

Archiving to tape through TSM middleware


You can configure an Archive Node to target a Tivoli Storage Manager (TSM) server which provides
a logical interface for storing and retrieving object data to random or sequential access storage
devices, including tape libraries.
The Archive Node's ARC service acts as a client to the TSM server, using Tivoli Storage Manager
as middleware for communicating with the archival storage system.
Tivoli Storage Manager Management Classes
Management classes defined by the TSM middleware outline how the TSM's backup and archive
operations function, and can be used to specify rules for content that are applied by the TSM server.
Such rules operate independently of the StorageGRID Webscale system’s ILM policy, and must be
consistent with the StorageGRID Webscale system’s requirement that objects are stored permanently
and are always available for retrieval by the Archive Node. After object data is sent to a TSM server
by the Archive Node, the TSM lifecycle and retention rules are applied while the object data is stored
to tape managed by the TSM server.
The TSM management class is used by the TSM server to apply rules for data location or retention
after objects are sent to the TSM server by the Archive Node. For example, objects identified as
database backups (temporary content that can be overwritten with newer data) could be treated
differently than application data (fixed content that must be retained indefinitely).

Managing connections to archival storage


You can configure an Archive Node to connect to an external archival storage system through either
the S3 API or TSM middleware.
Once the type of archival target is configured for an Archive Node, the target type cannot be changed.

Configuring connection settings for S3 API


There are a number of settings you must configure before the Archive Node can communicate with
an external archival storage system that connects to the StorageGRID Webscale system through the
S3 API.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You need to create a bucket on the target archival storage system:

◦ The bucket must be dedicated to a single Archive Node. It cannot be used by other Archive
Nodes or other applications.

◦ The bucket must have the appropriate region selected for your location.

◦ The bucket should be configured with versioning suspended.

• Object Segmentation must be enabled and the Maximum Segment Size must be less than or equal
to 4.5 GiB (4,831,838,208 bytes). S3 API requests that exceed this value will fail if Simple
Storage Service (S3) is used as the external archival storage system.

About this task


Until these settings are configured, the ARC service remains in a Major alarm state as it is unable to
communicate with the external archival storage system.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Target.

3. Click Configuration > Main.



4. Select Cloud Tiering - Simple Storage Service (S3) from the Target Type drop-down list.
Note: Configuration settings are unavailable until you select a Target Type.

5. Configure the cloud tiering (S3) account through which the Archive Node will connect to the
target external S3 capable archival storage system.
Most of the fields on this page are self-explanatory. The following describes fields for which you
might need guidance.

• Region: Only available if Use AWS is selected. The region you select must match the bucket's
region.

• Endpoint and Use AWS: For Amazon Web Services (AWS), select Use AWS. Endpoint is then
automatically populated with an endpoint URL based on the Bucket Name and Region
attributes; for example, https://bucket.region.amazonaws.com (see the sketch at the end of this task).
For a non-AWS target, enter the URL of the system hosting the bucket, including the port
number. For example, https://system.com:1080

• End Point Authentication: Enabled by default. Clear to disable endpoint SSL certificate and
host name verification for the targeted external archival storage system. Only clear the
checkbox if the network to the external archival storage system is trusted. If another instance
of a StorageGRID Webscale system is the target archival storage device and the system is
configured with publicly signed certificates, you do not need to clear the checkbox.

• Storage Class: Select Standard, the default value, for regular storage, or Reduced Redundancy,
which provides lower cost storage with less reliability for objects that can be easily recreated.
If the targeted archival storage system is another instance of the StorageGRID Webscale
system, Storage Class controls the target system's dual-commit behavior.

6. Click Apply Changes.


The specified configuration settings are validated and applied to your StorageGRID Webscale
system. Once configured, the target cannot be changed.
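For AWS targets, the automatic Endpoint population described above follows the pattern
https://bucket.region.amazonaws.com, as in this small sketch with hypothetical values:

```python
def aws_endpoint(bucket_name: str, region: str) -> str:
    # Pattern from the Endpoint field description above.
    return f"https://{bucket_name}.{region}.amazonaws.com"

print(aws_endpoint("my-archive-bucket", "us-east-1"))
# -> https://my-archive-bucket.us-east-1.amazonaws.com
```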

Modifying connection settings for S3 API


After the Archive Node is configured to connect to an external archival storage system through the S3
API, you can modify some settings should the connection change.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


If you change the Cloud Tiering (S3) account, you must ensure that the user access credentials have
read/write access to the bucket, including all objects that were previously ingested by the Archive
Node to the bucket.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Target.

3. Click Configuration > Main.

4. Modify account information, as necessary.


If you change the storage class, new object data is stored with the new storage class. Existing
objects continue to be stored under the storage class set when they were ingested.
Note: The Bucket Name, Region, and Endpoint (Use AWS) values cannot be changed.

5. Click Apply Changes.



Modifying the Cloud Tiering Service state


You can control the Archive Node's ability to read from and write to the targeted external archival
storage system that connects through the S3 API by changing the state of the Cloud Tiering Service.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• The Archive Node must be configured.

About this task


You can effectively take the Archive Node offline by changing the Cloud Tiering Service State to
Read-Write Disabled.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC.

3. Click Configuration > Main.

4. Select a Cloud Tiering Service State.

5. Click Apply Changes.

Configuring connections to Tivoli Storage Manager middleware


Before the Archive Node can communicate with Tivoli Storage Manager (TSM) middleware, you
must configure a number of settings.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Until these settings are configured, the ARC service remains in a Major alarm state as it is unable to
communicate with the Tivoli Storage Manager.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Target.

3. Click Configuration > Main.

4. Select Tivoli Storage Manager (TSM) from the Target Type drop-down list.

5. By default, the Tivoli Storage Manager State is set to Online, which means that the Archive
Node is able to retrieve object data from the TSM middleware server. Select Offline to prevent
retrievals from the TSM middleware server.

6. Complete the following information:

• Server IP or Hostname: Specify the IP address or Fully Qualified Domain Name (FQDN) of
the TSM middleware server used by the ARC service. The default IP address is 127.0.0.1.

• Server Port: Specify the port number on the TSM middleware server that the ARC service will
connect to. The default is 1500.

• Node Name: Specify the name of the Archive Node. You must enter the name (arc‐user) that
you registered on the TSM middleware server.

• User Name: Specify the user name the ARC service uses to log in to the TSM server. Enter
the default user name (arc‐user) or the administrative user you specified for the Archive Node.

• Password: Specify the password used by the ARC service to log in to the TSM server.

• Management Class: Specify the default management class to use if a management class is not
specified when the object is saved to the StorageGRID Webscale system, or if the specified
management class is not defined on the TSM middleware server.
If the specified management class does not exist on the TSM server, the object cannot be
saved to the TSM archive. The object remains in the queue on the StorageGRID Webscale
system and the CMS > Content > Overview > Objects with ILM Evaluation Pending
count is incremented.

• Number of Sessions: Specify the number of tape drives on the TSM middleware server that
are dedicated to the Archive Node. The Archive Node concurrently creates a maximum of one
session per mount point plus a small number of additional sessions (less than five).
You need to change this value to be the same as the value set for MAXNUMMP (maximum
number of mount points) when the Archive Node was registered or updated. (In the register
command, the default value of MAXNUMMP used is 1, if no value is set.)
You must also change the value of MAXSESSIONS for the TSM server to a number that is at
least as large as the Number of Sessions set for the ARC service. The default value of
MAXSESSIONS on the TSM server is 25.
• Maximum Retrieve Sessions: Specify the maximum number of sessions that the ARC service
can open to the TSM middleware server for retrieve operations. In most cases, the appropriate
value is Number of Sessions minus Maximum Store Sessions; for example, if Number of Sessions
is 4 and Maximum Store Sessions is 1, set Maximum Retrieve Sessions to 3. If you need to share
one tape drive for storage and retrieval, specify a value equal to the Number of Sessions.

• Maximum Store Sessions: Specify the maximum number of concurrent sessions that the ARC
service can open to the TSM middleware server for archive operations.
This value should be set to one except when the targeted archival storage system is full and
only retrievals can be performed. Set this value to zero to use all sessions for retrievals.

7. Click Apply Changes.

Managing Archive Nodes


You can configure an Archive Node to optimize Tivoli Storage Manager performance, take an
Archive Node offline when a TSM server is nearing capacity or unavailable, and configure
replication and retrieve settings. You can also set Custom alarms for the Archive Node.

Choices
• Optimizing Archive Node for TSM middleware sessions on page 210
• Managing an Archive Node when TSM server reaches capacity on page 211
• Configuring Archive Node replication on page 213
• Configuring retrieve settings on page 214
• Configuring the archive store on page 215
• Setting Custom alarms for the Archive Node on page 216

Optimizing Archive Node for TSM middleware sessions


You can optimize the performance of an Archive Node that connects to Tivoli Storage Manager
(TSM) middleware by configuring the Archive Node's sessions.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Typically, the number of concurrent sessions that the Archive Node has open to the TSM middleware
server is set to the number of tape drives the TSM server has dedicated to the Archive Node. One
tape drive is allocated for storage while the rest are allocated for retrieval. However, in situations
where a Storage Node is being rebuilt from Archive Node copies or the Archive Node is operating in
Read-only mode, you can optimize TSM server performance by setting the maximum number of
retrieve sessions to be the same as number of concurrent sessions. The result is that all drives can be
used concurrently for retrieval, and, at most, one of these drives can also be used for storage if
applicable.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Target.

3. Click Configuration > Main.

4. Change Maximum Retrieve Sessions to be the same as Number of Sessions.

5. Click Apply Changes.

Managing an Archive Node when TSM server reaches capacity


The TSM server has no way to notify the Archive Node when either the TSM database or the archival
media storage managed by the TSM server is nearing capacity. The Archive Node continues to accept
object data for transfer to the TSM server after the TSM server stops accepting new content. This
content cannot be written to media managed by the TSM server. An alarm is triggered if this
happens. This situation can be avoided through proactive monitoring of the TSM server.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


To prevent the ARC service from sending further content to the TSM server, you can take the
Archive Node offline by taking its ARC > Store component offline. This procedure can also be
useful in preventing alarms when the TSM server is unavailable for maintenance.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Store.

3. Click Configuration > Main.

4. Change Archive Store State to Offline.

5. Select Archive Store Disabled on Startup.

6. Click Apply Changes.

Setting Archive Node to read-only if TSM middleware reaches capacity


If the targeted TSM middleware server reaches capacity, the Archive Node can be optimized to only
perform retrievals.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Target.

3. Click Configuration > Main.

4. Change Maximum Retrieve Sessions to be the same as the number of concurrent sessions listed in
Number of Sessions.

5. Change Maximum Store Sessions to 0.


Note: Changing Maximum Store Sessions to 0 is not necessary if the Archive Node is Read-
only. Store sessions will not be created.

6. Click Apply Changes.



Configuring Archive Node replication


You can configure the replication settings for an Archive Node and disable inbound and outbound
replication, or reset the failure counts being tracked for the associated alarms.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Replication.

3. Click Configuration > Main.

4. Modify the following settings, as necessary:

• Reset Inbound Replication Failure Count: Select to reset the counter for inbound replication
failures. This can be used to clear the RIRF (Inbound Replications – Failed) alarm.

• Reset Outbound Replication Failure Count: Select to reset the counter for outbound
replication failures. This can be used to clear the RORF (Outbound Replications – Failed)
alarm.

• Disable Inbound Replication: Select to disable inbound replication as part of a maintenance or


testing procedure. Leave cleared during normal operation.
When inbound replication is disabled, object data can be retrieved from the ARC service for
replication to other locations in the StorageGRID Webscale system, but objects cannot be
replicated to this ARC service from other system locations. The ARC service is read‐only.

• Disable Outbound Replication: Select the checkbox to disable outbound replication (including
content requests for HTTP retrievals) as part of a maintenance or testing procedure. Leave
unchecked during normal operation.

When outbound replication is disabled, object data can be copied to this ARC service to
satisfy ILM rules, but object data cannot be retrieved from the ARC service to be copied to
other locations in the StorageGRID Webscale system. The ARC service is write‐only.

5. Click Apply Changes.

Configuring retrieve settings


You can configure the retrieve settings for an Archive Node to set the state to Online or Offline, or
reset the failure counts being tracked for the associated alarms.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Retrieve.

3. Click Configuration > Main.

4. Modify the following settings, as necessary:

• Archive Retrieve State: Set the component state to either:

◦ Online: The grid node is available to retrieve object data from the archival media device.

◦ Offline: The grid node is not available to retrieve object data.

• Reset Request Failures Count: Select the checkbox to reset the counter for request failures.
This can be used to clear the ARRF (Request Failures) alarm.
• Reset Verification Failure Count: Select the checkbox to reset the counter for verification
failures on retrieved object data. This can be used to clear the ARRV (Verification Failures)
alarm.

5. Click Apply Changes.



Configuring the archive store


You can configure store settings for an Archive Node.

About this task


Store settings differ based on the configured target type for the Archive Node.

Related tasks
Managing an Archive Node when TSM server reaches capacity on page 211

Configuring the archive store for TSM middleware connection


If your Archive Node connects to a TSM middleware server, you can configure an Archive Node's
archive store state to Online or Offline. You can also disable the archive store when the Archive Node
first starts up, or reset the failure count being tracked for the associated alarm.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Store.

3. Click Configuration > Main.

4. Modify the following settings, as necessary:

• Archive Store State: Set the component state to either:

◦ Online: The Archive Node is available to process object data for storage to the archival
storage system.

◦ Offline: The Archive Node is not available to process object data for storage to the archival
storage system.

• Archive Store Disabled on Startup: When selected, the Archive Store component remains in
the Read-only state when restarted. Used to persistently disable storage to the targeted
archival storage system. Useful when the targeted archival storage system is unable to
accept content.

• Reset Store Failure Count: Reset the counter for store failures. This can be used to clear the
ARVF (Store Failures) alarm.

5. Click Apply Changes.

Configuring store settings for S3 API connection


If your Archive Node connects to an archival storage system through the S3 API, you can reset the
Store Failures count, which can be used to clear the ARVF (Store Failures) alarm.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Support > Grid Topology.

2. Select Archive Node > ARC > Store.

3. Click Configuration > Main.

4. Select Reset Store Failure Count.

5. Click Apply Changes.


The Store Failures attribute resets to zero.

Setting Custom alarms for the Archive Node


You should establish Custom alarms for the ARQL and ARRL attributes that are used to monitor the
speed and efficiency of object data retrieval from the archival storage system by the Archive Node.

• ARQL: Average Queue Length. The average time, in microseconds, that object data is queued for
retrieval from the archival storage system.

• ARRL: Average Request Latency. The average time, in microseconds, needed by the Archive
Node to retrieve object data from the archival storage system.

The acceptable values for these attributes depend on how the archival storage system is configured
and used. (Go to ARC > Retrieve > Overview > Main.) The values set for request timeouts and the
number of sessions made available for retrieve requests are particularly influential.

After integration is complete, monitor the Archive Node's object data retrievals to establish values for
normal retrieval times and queue lengths. Then, create Custom alarms for ARQL and ARRL that will
trigger if an abnormal operating condition arises.

Related tasks
Creating custom service or component alarms on page 70

What an Admin Node is


You perform most day-to-day activities using the Grid Manager, which resides on Admin Nodes.
Admin Nodes provide services for the web interface, system configuration, and audit logs. Each site
in a StorageGRID Webscale deployment can have one or more Admin Nodes.
Admin Nodes use the AMS service, the CMN service, and the NMS service.

What the AMS service is


The Audit Management System (AMS) service tracks system activity and events.

What the CMN service is


The Configuration Management Node (CMN) service manages system-wide configurations of
connectivity and protocol features needed by all services. In addition, the CMN service is used to run
and monitor grid tasks. There is only one CMN service per StorageGRID Webscale deployment. The
Admin Node that hosts the CMN service is known as the primary Admin Node.

What the NMS service is


The Network Management System (NMS) service powers the monitoring, reporting, and
configuration options that are displayed through the Grid Manager, the StorageGRID Webscale
system's browser-based interface.

Related tasks
Verifying an ILM policy with object metadata lookup on page 146

Admin Node redundancy


A StorageGRID Webscale system can include multiple Admin Nodes. This provides you with the
redundancy of multiple IP addresses from which you can sign in to the StorageGRID Webscale system
and perform various monitoring and configuration procedures.
Having multiple Admin Nodes provides you with the capability to continuously monitor and
configure your StorageGRID Webscale system in the event that an Admin Node fails. If an Admin
Node becomes unavailable, web clients can reconnect to any other available Admin Node and
continue to view and configure the system. Meanwhile, attribute processing continues, alarms are
still triggered, and related notifications are sent. However, multiple Admin Nodes do not provide
failover protection except for notifications and AutoSupport messages. Alarm acknowledgments
made from one Admin Node are not copied to other Admin Nodes.

Related concepts
Alarm acknowledgments on page 219

Alarm acknowledgments
Alarm acknowledgments made from one Admin Node are not copied to any other Admin Node.
Because acknowledgments are not copied to other Admin Nodes, it is possible that the Grid
Topology tree will not look the same for each Admin Node.
This difference can be useful when connecting web clients. Web clients can have different views of
the StorageGRID Webscale system based on administrator needs.

Note that notifications are sent from the Admin Node where the acknowledgment occurs.

Email notifications and AutoSupport messages


In a multi-site StorageGRID Webscale system, one Admin Node is configured as the preferred sender
of notifications and AutoSupport messages. This preferred sender can be any Admin Node. All other
Admin Nodes become “standby” senders.
Under normal system operations, only the preferred sender sends notifications and AutoSupport
messages. The standby sender monitors the preferred sender and if it detects a problem, the standby
sender switches to online status and assumes the task of sending notifications and AutoSupport
messages.

Preferred and standby senders


There are two scenarios in which both the preferred sender and the standby sender can send
notifications and AutoSupport messages:

• It is possible that while the StorageGRID Webscale system is running in this “switch-over”
scenario, where the standby sender assumes the task of sending notifications and AutoSupport
messages, the preferred sender will maintain the ability to send notifications and AutoSupport
messages. If this occurs, duplicate notifications and AutoSupport messages are sent: one from the
preferred sender and one from the standby sender. When the Admin Node configured as the
standby sender no longer detects errors on the preferred sender, it switches to “standby” status
and stops sending notifications and AutoSupport messages. Notifications and AutoSupport
messages are once again sent only by the preferred sender.

• If the standby sender cannot detect the preferred sender, the standby sender switches to online and
sends notifications and AutoSupport messages. In this scenario, the preferred sender and standby
senders are “islanded” from each other. Each sender (Admin Node) can be operating and
monitoring the system normally, but because the standby sender cannot detect the other Admin
Node of the preferred sender, both the preferred sender and the standby sender send notifications
and AutoSupport messages.

When you send a test email, all NMS services send a test email.

Related concepts
About alarms and email notifications on page 55
What AutoSupport is on page 79

Related tasks
Selecting a preferred sender on page 63

Managing networking
Because your StorageGRID Webscale system consists of a group of interconnected servers, you
might need to perform various updates to the system's networking as the system changes and
grows over time.
You can change the configuration of the Grid, Client, or Admin Networks, or you can add new Client
and Admin Networks. You can also update external NTP source IP addresses and DNS IP addresses
at any time.
Note: To use the Grid Network editor to modify or add a network for a grid node, see the recovery
and maintenance instructions. For more information about network topology, see Grid primer.

Grid Network
Required. The Grid Network is the communication link between grid nodes. All hosts on the Grid
Network must be able to talk to all other hosts. This network is used for all internal StorageGRID
Webscale system communications.

Admin Network
Optional. The Admin Network allows for restricted access to the StorageGRID Webscale system for
maintenance and administration.

Client Network
Optional. The Client Network can communicate with any subnet reachable through the local gateway.

Guidelines
• A StorageGRID Webscale grid node requires a dedicated network interface, IP address, subnet
mask, and gateway for each network it is assigned to.

• A grid node is not permitted to have more than one interface on a network.

• A single gateway, per network, per grid node is supported, and it must be on the same subnet as
the node. You can implement more complex routing in the gateway, if required.

• On each node, each network maps to a specific network interface.

Network              Interface name
Grid                 eth0
Admin (optional)     eth1
Client (optional)    eth2

• If the node is connected to a StorageGRID Webscale appliance, specific ports are used for each
network:

◦ Grid Network or eth0: hic2 and hic4 (10-GbE network ports)

◦ Admin Network or eth1: mtc1 (the leftmost 1-GbE port)

◦ Client Network or eth2: hic1 and hic3 (10-GbE network ports)

• The default route is generated automatically, per node. If eth2 is enabled, then 0.0.0.0/0 uses the
Client Network on eth2. If eth2 is not enabled, then 0.0.0.0/0 uses the Grid Network on eth0.
(A command-line check of these mappings is shown after these guidelines.)

• The Client Network does not become operational until the grid node has joined the grid.

• The Admin Network can be configured during VM deployment to allow access to the installation
UI before the grid is fully installed.
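As a quick way to confirm these mappings on a grid node, you can inspect the interfaces and the
default route from the command line. This is a sketch only; output formats vary by Linux
distribution, and older versions of the ip utility might not support the -brief option:

# Show a one-line summary of each interface and its addresses
ip -brief addr show

# Show the default route (0.0.0.0/0), which uses eth2 if the Client
# Network is enabled, and eth0 otherwise
ip route show default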

Related information
Recovery and maintenance
Grid primer
SG6000 appliance installation and maintenance
SG5700 appliance installation and maintenance
SG5600 appliance installation and maintenance

Viewing IP addresses
You can view the IP address for each grid node that makes up your StorageGRID Webscale system.
You can then use this IP address to log into the grid node at the command line and perform various
maintenance procedures.

Before you begin


You must be signed in to the Grid Manager using a supported browser.

About this task


For information on changing IP addresses, see the recovery and maintenance instructions.

Steps

1. Select Support > Grid Topology.

2. Select grid node > SSM > Resources.

3. Click Overview > Main.


IP addresses are listed in the Network Addresses table.

Example

For VM-based grid nodes, the IP address assigned to eth0 is the node's Grid Network IP address.
For StorageGRID Webscale appliance Storage Nodes, the IP address assigned to hic2 and hic4 is
the node's Grid Network IP address.
The Network Addresses table displays link-local IPv6 addresses beginning with fe80::, which
are automatically assigned by Linux.

Related information
Recovery and maintenance

Configuring SNMP monitoring


A Simple Network Management Protocol (SNMP) agent is available on Admin Nodes and can be
used to monitor system status.
The StorageGRID Webscale system supports version v2c (SNMPv2c) and version 3 (SNMPv3) of
the SNMP protocol.
The SNMP agent is disabled by default, and the IPv4 and IPv6 community strings are randomly
generated. When enabled, the SNMP agent provides system status by returning object identifier
(OID) data values to polling requests from a third-party monitoring server.
The StorageGRID Webscale system provides a custom management information base (MIB) file that
you can install on the monitoring server to translate OID data into a readable form.
For information about how to configure a third-party monitor for use with StorageGRID Webscale,
refer to documentation specific to your SNMP monitor.

Enabling the SNMP agent


The StorageGRID Webscale SNMP agent is disabled by default, and the IPv4 and IPv6 community
strings are randomly generated. You can enable the agent and set the community strings to strings of
your choosing.

Steps

1. From the service laptop, log in to the primary Admin Node:

a. Enter the following command: ssh admin@primary_Admin_Node_IP

b. Enter the password listed in the Passwords.txt file.

c. Enter the following command to switch to root: su -

d. Enter the password listed in the Passwords.txt file.


When you are logged in as root, the prompt changes from $ to #.

2. To enable the SNMP agent, edit /etc/default/snmpd to change SNMPDRUN=no to SNMPDRUN=yes.

3. To convert the randomly generated community strings to strings of your choice, edit
/etc/snmp/snmpd.conf and replace the randomly generated rocommunity and rocommunity6
community strings.

4. Restart the snmpd service: service snmpd restart
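For example, after editing, the community string lines in /etc/snmp/snmpd.conf would look
similar to the following sketch. The string public-example is a placeholder; substitute
community strings of your own:

# Read-only community strings for IPv4 and IPv6 polling
rocommunity public-example
rocommunity6 public-example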

Management Information Base file


A Management Information Base (MIB) file is needed by the monitor to translate SNMP data from
the StorageGRID Webscale system into readable text.
Copy the StorageGRID Webscale MIB file (BYCAST-STORAGEGRID-MIB.mib) to the monitor server.
The StorageGRID Webscale MIB file is available on all Admin Nodes at /usr/share/snmp/mibs/
BYCAST-STORAGEGRID-MIB.mib.

Detailed registry
The following OIDs can be polled by third-party monitoring servers.
This OID reports the overall system status of the StorageGRID Webscale system.

Element      Values
OID          1.3.6.1.4.1.28669.1.0.1.1.1
Hierarchy    iso.org.dod.internet.mgmt.private.enterprises.bycast.version1.common.nmsmi.system.status
Values       One of the following values is displayed:
             1 = unknown
             11 = adminDown
             21 = normal
             31 = notice
             41 = minor
             51 = major
             61 = critical
             The MIB contains this enumeration mapping. If the monitor uses SNMP
             GET, the textual value will appear instead of the numerical value.

This OID is the system label.

Element      Values
OID          1.3.6.1.4.1.28669.1.0.1.1.2
Hierarchy    iso.org.dod.internet.mgmt.private.enterprises.bycast.version1.common.nmsmi.system.label
Values       Text string of the system label.
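As an example of polling these OIDs, the following sketch uses the net-snmp command-line tools
from the monitoring server. The community string public-example and admin_node_IP are
placeholders; with the MIB file installed on the monitoring server, the textual status value (for
example, normal) is displayed instead of the number:

# Poll the overall system status
snmpget -v 2c -c public-example admin_node_IP 1.3.6.1.4.1.28669.1.0.1.1.1

# Poll the system label
snmpget -v 2c -c public-example admin_node_IP 1.3.6.1.4.1.28669.1.0.1.1.2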

What link costs are


Link costs refer to the relative costs of communicating between data center sites. Link costs are used
to determine which grid nodes should provide a requested service.
For example, link cost information is used to determine which LDR services are used to retrieve
objects. All else being equal, the service with the lowest link cost is preferred.
In the example shown below, if a client application at data center site two (DC2) retrieves an object
that is stored both at data center site one (DC1) and at data center site three, the LDR service at DC1
is responsible for sending the object because the link cost from DC1 to DC2 is 0, which is lower than
the link cost from the DC3 site to the DC2 site (25).

The table shows example link costs.

Link                                  Link cost      Notes
Between physical data center sites    25 (default)   Usually a high-speed WAN link exists between sites.
Between logical data center sites     0              Logical data centers at the same physical site,
                                                     connected by a LAN.

Updating link costs


You can update the link costs between data center sites.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Configuration > Link Cost.



2. Select a site under Link Source and enter a cost value between 0 and 100 under Link
Destination.
You cannot change the link cost if the source is the same as the destination.

To cancel changes, click Revert.

3. Click Apply Changes.

Changing network transfer encryption


The StorageGRID Webscale system uses Transport Layer Security (TLS) to protect internal control
traffic between grid nodes. The Network Transfer Encryption option sets the algorithm used by TLS
to encrypt control traffic between grid nodes. This setting does not affect data encryption.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


By default, network transfer encryption uses the AES256-SHA algorithm. Control traffic can also be
encrypted using the AES128-SHA algorithm.

Steps

1. Select Configuration > Grid Options.

2. From the Grid Options menu, select Configuration.



3. Change Network Transfer Encryption to AES256-SHA or AES128-SHA.

4. Click Apply Changes.

Configuring certificates
You can customize the certificates used by the StorageGRID Webscale system.
The StorageGRID Webscale system uses security certificates for two distinct purposes:

• Management Interface Server Certificates: Used to secure access to the Grid Manager and the
Grid Management API.

• Storage API Server Certificates: Used to secure access to the Storage Nodes and API Gateway
Nodes, which API client applications use to upload and download object data.

You can use the default certificates created during installation, or you can replace either or both of
these default types of certificates with your own custom certificates.

Configuring custom server certificates for the Grid Manager


You can replace the default Grid Manager server certificates with custom certificates that allow
users to access the interface without encountering security warnings.

About this task


You need to complete configuration on the server, and depending on the root Certificate Authority
(CA) you are using, users might also need to install the root CA certificate in the web browser they
will use to access the Grid Manager.

Steps

1. Select Configuration > Server Certificates.

2. In the Management Interface Server Certificate section, click Install Custom Certificate.

3. Upload the required server certificate files:



• Server Certificate: The custom server certificate file (.crt).

• Server Certificate Private Key: The custom server certificate private key file (.key).

• CA Bundle: A single file containing the certificates from each intermediate issuing Certificate
Authority (CA). The file should contain each of the PEM-encoded CA certificate files,
concatenated in certificate chain order (see the example after these steps).

4. Click Save.
The custom server certificates are used for all subsequent new client connections.

5. Refresh the page to ensure the web browser is updated.
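For example, if the server certificate was issued through intermediate CAs, you can assemble the
CA Bundle file by concatenating the PEM-encoded intermediate certificates. The file names below
are placeholders; order the certificates in chain order, starting with the intermediate that issued
the server certificate:

cat issuing-intermediate-ca.crt higher-intermediate-ca.crt > ca-bundle.crt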

Restoring the default server certificates for the Grid Manager


You can revert to using the default server certificates for the Grid Manager.

Steps

1. Select Configuration > Server Certificates.

2. In the Management Interface Server Certificate section, click Use Default Certificates.

3. Click OK in the confirmation dialog box.


When you restore the default server certificates, the custom server certificate files you configured
are deleted and cannot be recovered from the system. The default server certificates are used for
all subsequent new client connections.

4. Refresh the page to ensure the web browser is updated.

Configuring custom server certificates for storage API endpoints


You can replace the default object storage API service endpoint server certificates with a single
custom server certificate that is specific to your organization.

About this task


API service endpoints on Storage Nodes are secured and identified by X.509 server certificates. By
default, every Storage Node is issued a certificate signed by the grid CA. These CA signed
certificates can be replaced by a single common custom server certificate and corresponding private
key.
You need to complete configuration on the server, and depending on the root Certificate Authority
(CA) you are using, users might also need to install the root CA certificate in the API client
they will use to access the system.

Steps

1. Select Configuration > Server Certificates.

2. In the Object Storage API Service Endpoints Server Certificate section, click Install Custom
Certificate.

3. Upload the required server certificate files:

• Server Certificate: The custom server certificate file (.crt).

• Server Certificate Private Key: The custom server certificate private key file (.key).

• CA Bundle: A single file containing the certificates from each intermediate issuing Certificate
Authority (CA). The file should contain each of the PEM-encoded CA certificate files,
concatenated in certificate chain order.

4. Click Save.
The custom server certificates are used for all subsequent new API client connections.

5. Refresh the page to ensure the web browser is updated.

Restoring the default server certificates for storage API endpoints


You can revert to using the default server certificates for the storage API endpoints.

Steps

1. Select Configuration > Server Certificates.

2. In the Object Storage API Service Endpoints Server Certificate section, click Use Default
Certificates.

3. Click OK in the confirmation dialog box.


When you restore the default object storage API service endpoints server certificates, the custom
server certificate files you configured are deleted and cannot be recovered from the system. The
default server certificates are used for all subsequent new API client connections.

4. Refresh the page to ensure the web browser is updated.

Copying the StorageGRID Webscale system's CA certificate


You can copy the StorageGRID Webscale system's certificate authority (CA) certificate from the
StorageGRID Webscale system for client applications that require server verification. If a custom
server certificate has been configured, then client applications should verify the server using the root
CA certificate that issues the custom server certificate, rather than copy the CA certificate from the
StorageGRID Webscale system.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.

Steps

1. Select Configuration > Grid Options.

2. From the Grid Options menu, click Overview.

3. Under CA Certificates, expand CA Certificate.

4. Select the CA certificate.


Include the “-----BEGIN CERTIFICATE-----” and the “-----END CERTIFICATE-----” in your
selection.

5. Right-click the selected certificate, and then select Copy.
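For example, a client application could use the copied certificate to verify the server when
connecting. The following sketch assumes the certificate was saved as grid-ca.pem; the endpoint
FQDN and port are placeholders for your deployment:

curl --cacert grid-ca.pem https://endpoint_FQDN:port/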

Configuring StorageGRID Webscale certificates for ONTAP clients using FabricPool
You can use a script to generate a self-signed server certificate for S3 clients that perform strict
hostname validation and do not support disabling strict hostname validation, such as ONTAP clients
using FabricPool. In production environments, you should use a certificate that is signed by a known
Certificate Authority (CA). Certificates signed by a CA can be rotated non-disruptively. They are also
more secure because they provide better protection against man-in-the-middle attacks.

Before you begin

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You must have the Passwords.txt file.

Steps

1. Obtain the fully qualified domain name (FQDN) of each API Gateway Node.

2. Log in to the primary Admin Node:

a. Enter the following command: ssh admin@primary_Admin_Node_IP

b. Enter the password listed in the Passwords.txt file.

3. Configure StorageGRID Webscale with a new self-signed certificate:

$ sudo make-certificate --domains wildcard-gateway-node-fqdn --type storage

• For --domains, use wildcards to represent the fully qualified domain names of all API
Gateway Nodes. For example, *.sgws.foo.com uses the * wildcard to represent
gn1.sgws.foo.com and gn2.sgws.foo.com.

• Set --type to storage to configure the certificate used by S3 and Swift storage clients.

• By default, generated certificates are valid for one year (365 days) and must be recreated
before they expire. You can use the --days argument to override the default validity period.
Note: A certificate's validity period begins when make-certificate is run. You must
ensure the S3 client is synchronized to the same time source as StorageGRID Webscale;
otherwise, the client might reject the certificate.

Example

$ sudo make-certificate --domains *.s3.example.com --type storage --days 730

The resulting output contains the public certificate needed by your S3 client.

4. Select and copy the certificate.


Include the BEGIN and the END tags in your selection.

5. Log out of the command shell: exit

6. Confirm the certificate was configured:

a. Access the Grid Manager.

b. Select Configuration > Server Certificates > Object Storage API Service Endpoints
Server Certificate.

7. Configure your S3 client to use the public certificate you copied. Include the BEGIN and END
tags.
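As an optional check, you can confirm which certificate a node is serving by using openssl from a
client machine. The following is a sketch; gn1.sgws.foo.com is the example gateway FQDN from
step 3, and port is a placeholder for the S3 port in use in your deployment:

openssl s_client -connect gn1.sgws.foo.com:port -servername gn1.sgws.foo.com </dev/null | openssl x509 -noout -subject -dates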

Configuring S3 API endpoint domain names


To support S3 virtual hosted-style requests, you must configure the list of endpoint domain names
that S3 clients will be connecting to.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• You must have specific access permissions. For details, see information about controlling system
access with administration user accounts and groups.

• You must have confirmed that a grid upgrade is not in progress.


Warning: Do not make any changes to the domain name configuration when a grid upgrade is
in progress.

About this task


You configure API endpoint domain names after you create the fully-qualified domain names on the
DNS server, depending on the grid nodes that S3 clients will be connecting to:
• If S3 clients are connecting to one or more API Gateway Nodes, you must include the domain
name of each API Gateway Node.

• If S3 clients are connecting to one or more Storage Nodes, you must include the domain name of
each Storage Node.

• If S3 clients are connecting through an external load balancer, you must include the domain name
of the load balancer.

Steps

1. Select Configuration > Domain Names.


The Endpoint Domain Names page appears.

2. Using the (+) icon to add additional fields, enter the list of S3 API endpoint domain names in the
Endpoint fields.
If this list is empty, support for S3 virtual hosted-style requests is disabled.

3. Click Save.

4. Obtain a custom server certificate with the wildcard Subject Alternative Name (SAN) for the
endpoint domain name, the endpoint domain name, and any other domain names that must be
supported.
This step is required to validate the SSL certificate and to verify the hostname when API client
applications connect to the endpoint.

Example
If the endpoint is s3.company.com, obtain a custom server certificate that includes the
s3.company.com endpoint and the endpoint's wildcard SAN: *.s3.company.com.

5. Select Configuration > Server Certificates. Then, install the custom certificate in the Object
Storage API Service Endpoints Server Certificate section.

6. Confirm that the DNS server also supports the endpoint and the wildcard SAN.
Now, when the endpoint bucket.s3.company.com is used, the DNS server resolves to the
correct endpoint and the certificate authenticates the endpoint as expected.
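As a quick check of steps 4 through 6, you can confirm the wildcard DNS record and the
certificate from a client machine. A sketch, using the s3.company.com example above; the CA
file name and port are placeholders:

# Confirm that a bucket subdomain resolves through the wildcard DNS record
dig +short bucket.s3.company.com

# Issue a virtual hosted-style request; the certificate must match the
# wildcard SAN *.s3.company.com for the TLS handshake to succeed
curl --cacert root-ca.pem https://bucket.s3.company.com:port/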

Related tasks
Configuring custom server certificates for storage API endpoints on page 228

Related information
Implementing S3 client applications

Configuring audit client access


The Admin Node, through the Audit Management System (AMS) service, logs all audited system
events to a log file available through the audit share, which is added to each Admin Node at
installation. For easy access to audit logs, you can configure client access to audit shares for both
CIFS and NFS.
The StorageGRID Webscale system uses positive acknowledgment to prevent loss of audit messages
before they are written to the log file. A message remains queued at a service until the AMS service
or an intermediate audit relay service has acknowledged control of it.
For more information, see the instructions for understanding audit messages.
Note: Audit export through CIFS/Samba is deprecated in StorageGRID Webscale 11.1, and will be
removed in a future StorageGRID Webscale release. See information for upgrading StorageGRID
Webscale for more details.

Related concepts
What an Admin Node is on page 218

Related information
Understanding audit messages
Upgrading StorageGRID Webscale

Configuring audit clients for CIFS


The procedure used to configure an audit client depends on the authentication method: Windows
Workgroup or Windows Active Directory (AD). When added, the audit share is automatically
enabled as a read-only share.
Note: Audit export through CIFS/Samba is deprecated in StorageGRID Webscale 11.1, and will be
removed in a future StorageGRID Webscale release. See information for upgrading StorageGRID
Webscale for more details.

Related information
Upgrading StorageGRID Webscale

Configuring audit clients for Workgroup


Perform this procedure for each Admin Node in a StorageGRID Webscale deployment from which
you want to retrieve audit messages.

Before you begin

• You must have the Passwords.txt file with the root/admin account password (available in the
SAID package).

• You must have the Configuration.txt file (available in the SAID package).

About this task


Audit export through CIFS/Samba is deprecated in StorageGRID Webscale 11.1, and will be
removed in a future StorageGRID Webscale release. See information for upgrading StorageGRID
Webscale for more details.

Steps

1. From the service laptop, log in to the primary Admin Node:

a. Enter the following command: ssh admin@primary_Admin_Node_IP

b. Enter the password listed in the Passwords.txt file.

c. Enter the following command to switch to root: su -

d. Enter the password listed in the Passwords.txt file.


When you are logged in as root, the prompt changes from $ to #.

2. Confirm that all services have a state of Running or Verified: storagegrid-status


If any services are not Running or Verified, resolve issues before continuing.

3. To return to the command line, press Ctrl+C.

4. Start the CIFS configuration utility: config_cifs.rb

---------------------------------------------------------------------
| Shares | Authentication | Config |
---------------------------------------------------------------------
| add-audit-share | set-authentication | validate-config |
| enable-disable-share | set-netbios-name | help |
| add-user-to-share | join-domain | exit |
| remove-user-from-share | add-password-server | |
| modify-group | remove-password-server | |
| | add-wins-server | |
| | remove-wins-server | |
---------------------------------------------------------------------

5. Set the authentication for the Windows Workgroup:


If authentication has already been set, an advisory message appears; go to step 6.

a. Enter: set-authentication

b. When prompted for Windows Workgroup or Active Directory installation, enter: workgroup

c. When prompted, enter the name of the Workgroup: workgroup_name

d. When prompted, create a meaningful NetBIOS name: netbios_name
or
Press Enter to use the Admin Node’s host name as the NetBIOS name.
The script restarts the Samba server and changes are applied. This should take less than one
minute. After setting authentication, add an audit client.

e. When prompted, press Enter.


The CIFS configuration utility is displayed.

6. Add an audit client:

a. Enter: add-audit-share
Note: The share is automatically added as read-only.

b. When prompted, add a user or group: user



c. When prompted, enter the audit user name: audit_user_name

d. When prompted, enter a password for the audit user: password

e. When prompted, re-enter the same password to confirm it: password

f. When prompted, press Enter.


The CIFS configuration utility is displayed.

Note: There is no need to enter a directory. The audit directory name is predefined.

7. If more than one user or group is permitted to access the audit share, add the additional users:

a. Enter: add-user-to-share
A numbered list of enabled shares is displayed.

b. When prompted, enter the number of the audit-export share: share_number

c. When prompted, add a user or group:


user
or
group

d. When prompted, enter the name of the audit user or group: audit_user or audit_group

e. When prompted, press Enter.


The CIFS configuration utility is displayed.

f. Repeat step 7 for each additional user or group that has access to the audit share.

8. Optionally, verify your configuration: validate-config


The services are checked and displayed. You can safely ignore the following messages:

Can't find include file /etc/samba/includes/cifs-interfaces.inc
Can't find include file /etc/samba/includes/cifs-filesystem.inc
Can't find include file /etc/samba/includes/cifs-custom-config.inc
Can't find include file /etc/samba/includes/cifs-shares.inc
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)

a. When prompted, press Enter.


The audit client configuration is displayed.

b. When prompted, press Enter.


The CIFS configuration utility is displayed.

9. Close the CIFS configuration utility: exit

10. If the StorageGRID Webscale deployment is a single site, go to step 11.
Otherwise, if the StorageGRID Webscale deployment includes Admin Nodes at other sites,
enable these audit shares as required:

a. Remotely log in to a site’s Admin Node:

i. Enter the following command: ssh admin@grid_node_IP

ii. Enter the password listed in the Passwords.txt file.

iii. Enter the following command to switch to root: su -



iv. Enter the password listed in the Passwords.txt file.

b. Repeat steps 4 through 9 to configure the audit share for each additional Admin Node.

c. Close the remote secure shell login to the remote Admin Node: exit

11. Log out of the command shell: exit
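Before connecting from Windows, you can optionally confirm from a Linux client that the audit
share is being exported. A sketch using smbclient, which might need to be installed on the client;
admin_node_IP and audit_user_name are placeholders:

# List the shares exported by the Admin Node; audit-export should appear
smbclient -L //admin_node_IP -U audit_user_name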

Related information
Upgrading StorageGRID Webscale

Configuring audit clients for Active Directory

Before you begin

• You must have the Passwords.txt file with the root/admin account password (available in the
SAID package).

• You must have the CIFS Active Directory username and password.

• You must have the Configuration.txt file (available in the SAID package).

About this task


Perform this procedure for each Admin Node in a StorageGRID Webscale deployment from which
you want to retrieve audit messages.
Note: Audit export through CIFS/Samba is deprecated in StorageGRID Webscale 11.1, and will be
removed in a future StorageGRID Webscale release. See information for upgrading StorageGRID
Webscale for more details.

Steps

1. From the service laptop, log in to the primary Admin Node:

a. Enter the following command: ssh admin@primary_Admin_Node_IP

b. Enter the password listed in the Passwords.txt file.

c. Enter the following command to switch to root: su -

d. Enter the password listed in the Passwords.txt file.


When you are logged in as root, the prompt changes from $ to #.

2. Confirm that all services have a state of Running or Verified: storagegrid-status


If any services are not Running or Verified, resolve issues before continuing.

3. To return to the command line, press Ctrl+C.

4. Start the CIFS configuration utility: config_cifs.rb

---------------------------------------------------------------------
| Shares | Authentication | Config |
---------------------------------------------------------------------
| add-audit-share | set-authentication | validate-config |
| enable-disable-share | set-netbios-name | help |
| add-user-to-share | join-domain | exit |
| remove-user-from-share | add-password-server | |
| modify-group | remove-password-server | |
| | add-wins-server | |
| | remove-wins-server | |
---------------------------------------------------------------------

5. Set the authentication for Active Directory: set-authentication


In most deployments, you must set the authentication before adding the audit client. If
authentication has already been set, an advisory message appears; go to step 6.
a. When prompted for Windows Workgroup or Active Directory installation, enter: ad

b. When prompted, enter the name of the AD domain (short domain name).

c. When prompted, enter the domain controller’s IP address or DNS host name.

d. When prompted, enter the full domain realm name.


Use uppercase letters.

e. When prompted to enable winbind support, type y.


Winbind is used to resolve user and group information from AD servers.

f. When prompted, enter the NetBIOS name.

g. When prompted, press Enter.


The CIFS configuration utility is displayed.

6. Join the domain:

a. If not already started, start the CIFS configuration utility: config_cifs.rb

b. Join the domain: join-domain

c. You are prompted to test if the Admin Node is currently a valid member of the domain. If this
Admin Node has not previously joined the domain, enter: no

d. When prompted, provide the Administrator’s username: administrator_username


where administrator_username is the CIFS Active Directory username, not the
StorageGRID Webscale username.

e. When prompted, provide the Administrator’s password: administrator_password


where administrator_password is the CIFS Active Directory password, not the
StorageGRID Webscale password.

f. When prompted, press Enter.


The CIFS configuration utility is displayed.

7. Verify that you have correctly joined the domain:

a. Join the domain: join-domain

b. When prompted to test if the server is currently a valid member of the domain, enter: y
If you receive the message “Join is OK,” you have successfully joined the domain. If you do
not get this response, try setting authentication and joining the domain again.

c. When prompted, press Enter.


The CIFS configuration utility is displayed.

8. Add an audit client: add-audit-share

a. When prompted to add a user or group, enter: user



b. When prompted to enter the audit user name, enter the audit user name.

c. When prompted, press Enter.


The CIFS configuration utility is displayed.

9. If more than one user or group is permitted to access the audit share, add additional users:
add-user-to-share

A numbered list of enabled shares is displayed.

a. Enter the number of the audit-export share.

b. When prompted to add a user or group, enter: group


You are prompted for the audit group name.

c. When prompted for the audit group name, enter the name of the audit user group.

d. When prompted, press Enter.


The CIFS configuration utility is displayed.

e. Repeat step 9 for each additional user or group that has access to the audit share.

10. Optionally, verify your configuration: validate-config


The services are checked and displayed. You can safely ignore the following messages:

• Can't find include file /etc/samba/includes/cifs-interfaces.inc

• Can't find include file /etc/samba/includes/cifs-filesystem.inc

• Can't find include file /etc/samba/includes/cifs-custom-config.inc

• Can't find include file /etc/samba/includes/cifs-shares.inc

• rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)


Attention: Do not combine the setting 'security=ads' with the 'password server' parameter.
(By default, Samba discovers the correct DC to contact automatically.)

a. When prompted, press Enter to display the audit client configuration.

b. When prompted, press Enter.


The CIFS configuration utility is displayed.

11. Close the CIFS configuration utility: exit

12. If the StorageGRID Webscale deployment is a single site, go to step 13.
Otherwise, if the StorageGRID Webscale deployment includes Admin Nodes at other sites,
enable these audit shares as required:

a. Remotely log in to a site’s Admin Node:

i. Enter the following command: ssh admin@grid_node_IP

ii. Enter the password listed in the Passwords.txt file.

iii. Enter the following command to switch to root: su -

iv. Enter the password listed in the Passwords.txt file.

b. Repeat steps 4 through 11 to configure the audit shares for each Admin Node.

c. Close the remote secure shell login to the Admin Node: exit

13. Log out of the command shell: exit

Related information
Upgrading StorageGRID Webscale

Adding a user or group to a CIFS audit share


You can add a user or group to a CIFS audit share that is integrated with AD authentication.

Before you begin

• You must have the Passwords.txt file with the root/admin account password (available in the
SAID package).

• You must have the Configuration.txt file (available in the SAID package).

About this task


The following procedure is for an audit share integrated with AD authentication.
Note: Audit export through CIFS/Samba is deprecated in StorageGRID Webscale 11.1, and will be
removed in a future StorageGRID Webscale release. See information for upgrading StorageGRID
Webscale for more details.

Steps

1. From the service laptop, log in to the primary Admin Node:

a. Enter the following command: ssh admin@primary_Admin_Node_IP

b. Enter the password listed in the Passwords.txt file.

c. Enter the following command to switch to root: su -

d. Enter the password listed in the Passwords.txt file.


When you are logged in as root, the prompt changes from $ to #.

2. Confirm that all services have a state of Running or Verified. Enter: storagegrid-status
If any services are not Running or Verified, resolve issues before continuing.

3. To return to the command line, press Ctrl+C.

4. Start the CIFS configuration utility: config_cifs.rb

---------------------------------------------------------------------
| Shares | Authentication | Config |
---------------------------------------------------------------------
| add-audit-share | set-authentication | validate-config |
| enable-disable-share | set-netbios-name | help |
| add-user-to-share | join-domain | exit |
| remove-user-from-share | add-password-server | |
| modify-group | remove-password-server | |
| | add-wins-server | |
| | remove-wins-server | |
---------------------------------------------------------------------

5. Start adding a user or group: add-user-to-share



A numbered list of audit shares that have been configured is displayed.

6. When prompted, enter the number for the audit share (audit-export): audit_share_number
You are asked if you would like to give a user or a group access to this audit share.

7. When prompted, add a user or group: user or group

8. When prompted for the user or group name for this AD audit share, enter the name.
The user or group is added as read-only for the audit share both in the server’s operating system
and in the CIFS service. The Samba configuration is reloaded to enable the user or group to
access the audit client share.

9. When prompted, press Enter.


The CIFS configuration utility is displayed.

10. Repeat steps 5 to 8 for each user or group that has access to the audit share.

11. Optionally, verify your configuration: validate-config


The services are checked and displayed. You can safely ignore the following messages:

• Can't find include file /etc/samba/includes/cifs-interfaces.inc

• Can't find include file /etc/samba/includes/cifs-filesystem.inc

• Can't find include file /etc/samba/includes/cifs-custom-config.inc

• Can't find include file /etc/samba/includes/cifs-shares.inc

a. When prompted, press Enter to display the audit client configuration.

b. When prompted, press Enter.

12. Close the CIFS configuration utility: exit

13. Determine if you need to enable additional audit shares, as follows:

• If the StorageGRID Webscale deployment is a single site, go to step 14.


• If the StorageGRID Webscale deployment includes Admin Nodes at other sites, enable these
audit shares as required:

a. Remotely log in to a site’s Admin Node:

i. Enter the following command: ssh admin@grid_node_IP

ii. Enter the password listed in the Passwords.txt file.

iii. Enter the following command to switch to root: su -

iv. Enter the password listed in the Passwords.txt file.

b. Repeat steps 4 through 12 to configure the audit shares for each Admin Node.

c. Close the remote secure shell login to the remote Admin Node: exit

14. Log out of the command shell: exit



Removing a user or group from a CIFS audit share


You can remove a user or group from a CIFS audit share. However, you cannot remove the last user or group permitted to access the audit share.

Before you begin

• You must have the Passwords.txt file with the root account passwords (available in the SAID
package).

• You must have the Configuration.txt file (available in the SAID package).

About this task


Audit export through CIFS/Samba is deprecated in StorageGRID Webscale 11.1, and will be
removed in a future StorageGRID Webscale release. See information for upgrading StorageGRID
Webscale for more details.

Steps

1. From the service laptop, log in to the primary Admin Node:


a. Enter the following command: ssh admin@primary_Admin_Node_IP

b. Enter the password listed in the Passwords.txt file.

c. Enter the following command to switch to root: su -

d. Enter the password listed in the Passwords.txt file.


When you are logged in as root, the prompt changes from $ to #.

2. Start the CIFS configuration utility: config_cifs.rb

---------------------------------------------------------------------
| Shares | Authentication | Config |
---------------------------------------------------------------------
| add-audit-share | set-authentication | validate-config |
| enable-disable-share | set-netbios-name | help |
| add-user-to-share | join-domain | exit |
| remove-user-from-share | add-password-server | |
| modify-group | remove-password-server | |
| | add-wins-server | |
| | remove-wins-server | |
---------------------------------------------------------------------

3. Start removing a user or group: remove-user-from-share


A numbered list of available audit shares for the Admin Node is displayed. The audit share is
labeled audit-export.

4. Enter the number of the audit share: audit_share_number

5. When prompted to remove a user or a group:


user
or group
A numbered list of users or groups for the audit share is displayed.

6. Enter the number corresponding to the user or group you want to remove: number
The audit share is updated, and the user or group is no longer permitted access to the audit share.
For example:

Enabled shares
1. audit-export
Select the share to change: 1
Remove user or group? [User/group]: User
Valid users for this share
1. audituser
2. newaudituser
Select the user to remove: 1

Removed user "audituser" from share "audit-export".

Press return to continue.

7. Close the CIFS configuration utility: exit

8. If the StorageGRID Webscale deployment includes Admin Nodes at other sites, disable the audit
share at each site as required.

9. Log out of each command shell when configuration is complete: exit

Related information
Upgrading StorageGRID Webscale

Changing a CIFS audit share user or group name


You can change the name of a user or a group for a CIFS audit share by adding a new user or group
and then deleting the old one.

About this task


Audit export through CIFS/Samba is deprecated in StorageGRID Webscale 11.1, and will be
removed in a future StorageGRID Webscale release. See information for upgrading StorageGRID
Webscale for more details.

Steps

1. Add a new user or group with the updated name to the audit share.

2. Delete the old user or group name.

Related tasks
Adding a user or group to a CIFS audit share on page 240
Removing a user or group from a CIFS audit share on page 242

Related information
Upgrading StorageGRID Webscale

Verifying CIFS audit integration


The audit share is read-only. Log files are intended to be read by computer applications and
verification does not include opening a file. It is considered sufficient verification that the audit log
files appear in a Windows Explorer window. Following connection verification, close all windows.
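For example, from a Windows client you can map the audit share with the net use command and
then list its contents. A sketch, assuming Workgroup authentication; admin_node_IP and
audit_user_name are placeholders:

net use Z: \\admin_node_IP\audit-export /user:audit_user_name
dir Z: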

Configuring the audit client for NFS


The audit share is automatically enabled as a read-only share.

Before you begin

• You must have the Passwords.txt file with the root/admin password (available in the SAID
package).
• You must have the Configuration.txt file (available in the SAID package).

• The audit client must be using NFS Version 3 (NFSv3).

About this task


Perform this procedure for each Admin Node in a StorageGRID Webscale deployment from which
you want to retrieve audit messages.

Steps

1. From the service laptop, log in to the primary Admin Node:


a. Enter the following command: ssh admin@primary_Admin_Node_IP

b. Enter the password listed in the Passwords.txt file.

c. Enter the following command to switch to root: su -

d. Enter the password listed in the Passwords.txt file.


When you are logged in as root, the prompt changes from $ to #.

2. Confirm that all services have a state of Running or Verified. Enter: storagegrid-status
If any services are not listed as Running or Verified, resolve issues before continuing.

3. To return to the command line, press Ctrl+C.

4. Start the NFS configuration utility. Enter: config_nfs.rb

-----------------------------------------------------------------
| Shares | Clients | Config |
-----------------------------------------------------------------
| add-audit-share | add-ip-to-share | validate-config |
| enable-disable-share | remove-ip-from-share | refresh-config |
| | | help |
| | | exit |
-----------------------------------------------------------------

5. Add the audit client: add-audit-share

a. When prompted, enter the audit client’s IP address or IP address range for the audit share:
client_IP_address

b. When prompted, press Enter.

6. If more than one audit client is permitted to access the audit share, add the IP address of each
additional audit client: add-ip-to-share

a. Enter the number of the audit share: audit_share_number



b. When prompted, enter the audit client’s IP address or IP address range for the audit share:
client_IP_address

c. When prompted, press Enter.


The NFS configuration utility is displayed.

d. Repeat step 6 for each additional audit client that has access to the audit share.

7. Optionally, verify your configuration.

a. Enter the following: validate-config


The services are checked and displayed.

b. When prompted, press Enter.


The NFS configuration utility is displayed.

c. Close the NFS configuration utility: exit

8. Determine if you must enable audit shares at other sites.


• If the StorageGRID Webscale deployment is a single site, go to step 9.

• If the StorageGRID Webscale deployment includes Admin Nodes at other sites, enable these
audit shares as required:

a. Remotely log in to the site’s Admin Node:

i. Enter the following command: ssh admin@grid_node_IP

ii. Enter the password listed in the Passwords.txt file.

iii. Enter the following command to switch to root: su -

iv. Enter the password listed in the Passwords.txt file.

b. Repeat steps 4 through 7.c to configure the audit shares for each additional Admin Node.

c. Close the remote secure shell login to the remote Admin Node. Enter: exit

9. Log out of the command shell: exit


NFS audit clients are granted access to an audit share based on their IP address. Grant access to
the audit share to a new NFS audit client by adding its IP address to the share, or remove an
existing audit client by removing its IP address.

Adding an NFS audit client to an audit share


NFS audit clients are granted access to an audit share based on their IP address. Grant access to the
audit share to a new NFS audit client by adding its IP address to the audit share.

Before you begin

• You must have the Passwords.txt file with the root/admin account password (available in the
SAID package).

• You must have the Configuration.txt file (available in the SAID package).

• The audit client must be using NFS Version 3 (NFSv3).

Steps

1. From the service laptop, log in to the primary Admin Node:



a. Enter the following command: ssh admin@primary_Admin_Node_IP

b. Enter the password listed in the Passwords.txt file.

c. Enter the following command to switch to root: su -

d. Enter the password listed in the Passwords.txt file.


When you are logged in as root, the prompt changes from $ to #.

2. Start the NFS configuration utility: config_nfs.rb

-----------------------------------------------------------------
| Shares | Clients | Config |
-----------------------------------------------------------------
| add-audit-share | add-ip-to-share | validate-config |
| enable-disable-share | remove-ip-from-share | refresh-config |
| | | help |
| | | exit |
-----------------------------------------------------------------

3. Enter: add-ip-to-share
A list of NFS audit shares enabled on the Admin Node is displayed. The audit share is listed
as: /var/local/audit/export

4. Enter the number of the audit share: audit_share_number

5. When prompted, enter the audit client’s IP address or IP address range for the audit share:
client_IP_address

The audit client is added to the audit share.

6. When prompted, press Enter.


The NFS configuration utility is displayed.

7. Repeat from step 3 for each audit client that should be added to the audit share.

8. Optionally, verify your configuration: validate-config


The services are checked and displayed.

a. When prompted, press Enter.


The NFS configuration utility is displayed.

9. Close the NFS configuration utility: exit

10. If the StorageGRID Webscale deployment is a single site, go to step 11.
Otherwise, if the StorageGRID Webscale deployment includes Admin Nodes at other sites,
enable these audit shares as required:

a. Remotely log in to a site’s Admin Node:

i. Enter the following command: ssh admin@grid_node_IP

ii. Enter the password listed in the Passwords.txt file.

iii. Enter the following command to switch to root: su -

iv. Enter the password listed in the Passwords.txt file.

b. Repeat steps 2 through 9 to configure the audit shares for each Admin Node.

c. Close the remote secure shell login to the remote Admin Node: exit

11. Log out of the command shell: exit

Verifying NFS audit integration


After you configure an audit share and add an NFS audit client, you can mount the audit client share
and verify that the files are available from the audit share.

Steps

1. Verify connectivity using the client-side IP address of the Admin Node hosting the AMS service
(or use the equivalent command for the client operating system). Enter: ping IP_address
Verify that the server responds, indicating connectivity.

2. Mount the audit read-only share using a command appropriate to the client operating system. A
sample Linux command is (enter on one line):
mount -t nfs -o hard,intr Admin_Node_IP_address:/var/local/audit/export myAudit

Use the IP address of the Admin Node hosting the AMS service and the predefined share name
for the audit system. The mount point can be any name selected by the client (for example,
myAudit in the previous command).

3. Verify that the files are available from the audit share. Enter: ls myAudit/*
   where myAudit is the mount point of the audit share. There should be at least one log file listed.
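
To mount the audit share automatically when the client restarts, you can add an entry to the
client's /etc/fstab file. The following line is a sketch only; the mount point /mnt/myAudit is an
assumed example, and you must substitute the IP address of the Admin Node hosting the AMS
service:

Admin_Node_IP_address:/var/local/audit/export /mnt/myAudit nfs ro,hard,intr 0 0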

Removing an NFS audit client from the audit share


NFS audit clients are granted access to an audit share based on their IP address. You can remove an
existing audit client by removing its IP address.

Before you begin

• You must have the Passwords.txt file with the root/admin account password (available in the
SAID package).

• You must have the Configuration.txt file (available in the SAID package).

About this task


You cannot remove the last IP address permitted to access the audit share.

Steps

1. From the service laptop, log in to the primary Admin Node:

a. Enter the following command: ssh admin@primary_Admin_Node_IP

b. Enter the password listed in the Passwords.txt file.

c. Enter the following command to switch to root: su -

d. Enter the password listed in the Passwords.txt file.


When you are logged in as root, the prompt changes from $ to #.

2. Start the NFS configuration utility: config_nfs.rb



-----------------------------------------------------------------
| Shares | Clients | Config |
-----------------------------------------------------------------
| add-audit-share | add-ip-to-share | validate-config |
| enable-disable-share | remove-ip-from-share | refresh-config |
| | | help |
| | | exit |
-----------------------------------------------------------------

3. Remove the IP address from the audit share: remove-ip-from-share


A numbered list of audit shares configured on the server is displayed. The audit share is listed
as: /var/local/audit/export

4. Enter the number corresponding to the audit share: audit_share_number


A numbered list of IP addresses permitted to access the audit share is displayed.

5. Enter the number corresponding to the IP address you want to remove.


The audit share is updated, and access is no longer permitted from any audit client with this IP
address.

6. When prompted, press Enter.


The NFS configuration utility is displayed.

7. Close the NFS configuration utility: exit

8. If your StorageGRID Webscale deployment is a multi-site deployment with additional Admin
Nodes at the other sites, remove the audit client from these audit shares as required:

a. Remotely log in to each site’s Admin Node:

i. Enter the following command: ssh admin@grid_node_IP

ii. Enter the password listed in the Passwords.txt file.

iii. Enter the following command to switch to root: su -

iv. Enter the password listed in the Passwords.txt file.

b. Repeat steps 2 through 7 to update the audit shares on each additional Admin Node.

c. Close the remote secure shell login to the remote Admin Node: exit

9. Log out of the command shell: exit

Changing the IP address of an NFS audit client

If you need to change the IP address of an NFS audit client, add the new IP address to the audit
share first, and then remove the original IP address.

Steps

1. Add the new IP address to the existing NFS audit share.

2. Remove the original IP address.

Related tasks
Adding an NFS audit client to an audit share on page 245
Removing an NFS audit client from the audit share on page 247

Controlling system access with administration user accounts and groups
By managing administration user accounts and administration groups, you can control access to the
StorageGRID Webscale system. Each administration group account is assigned permissions that
control access to StorageGRID Webscale features and functionality. You then add administration
users to one or more administration group accounts to control individual user access.
You can perform the following tasks related to users and groups:

• Configure a federated identity source (such as Active Directory or OpenLDAP) so you can import
administration groups and users.

• Create, edit, clone, and remove local and federated groups.

• Create, edit, clone, and remove local users.

• Change local users' passwords.

Additionally, local users can change their own passwords.

Configuring identity federation


You can use identity federation to import admin groups and users. Using identity federation makes
setting up groups and users faster, and it allows users to sign in to their accounts using familiar
credentials.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


The identity source you configure for the Grid Manager allows you to import the following types of
federated groups:

• Administration (or “admin”) groups. The users in these groups can sign in to the Grid Manager
and perform tasks, based on the management permissions assigned to the group. See “About
administration user groups.”

• Tenant account groups, assuming that the tenant is not using its own identity source (that is,
assuming the Uses Own Identity Source checkbox is unchecked for the tenant account). Users in
tenant account groups can sign in to the Tenant Manager and perform tasks, based on the
permissions assigned to the group. See the information about using tenant accounts.

Note: When using identity federation, be aware that users who only belong to a primary group on
Active Directory are not allowed to sign in to the Grid Manager or the Tenant Manager. To allow
these users to sign in, grant them membership in a user-created group.

Steps

1. Select Configuration > Identity Federation.

2. Select Enable Identity Federation.



LDAP service configuration information appears.

3. Select the type of LDAP service you want to configure from the LDAP Service Type drop-down
list.
You can select Active Directory, OpenLDAP, or Other.
Note: If you select OpenLDAP, you must configure the OpenLDAP server. See “Guidelines
for configuring an OpenLDAP server.”

4. If you selected Other, complete the fields in the LDAP Attributes section.

• Unique User Name: The name of the attribute that contains the unique identifier of an LDAP
user. This attribute is equivalent to sAMAccountName for Active Directory and uid for
OpenLDAP.

• User UUID: The name of the attribute that contains the permanent unique identifier of an
LDAP user. This attribute is equivalent to objectGUID for Active Directory and entryUUID
for OpenLDAP.

• Group Unique Name: The name of the attribute that contains the unique identifier of an
LDAP group. This attribute is equivalent to sAMAccountName for Active Directory and cn
for OpenLDAP.

• Group UUID: The name of the attribute that contains the permanent unique identifier of an
LDAP group. This attribute is equivalent to objectGUID for Active Directory and
entryUUID for OpenLDAP.

5. Enter the required LDAP server and network connection information:

• Hostname: The host name or IP address of the LDAP server.

• Port: The port used to connect to the LDAP server. This is typically 389.

• Username: The username used to access the LDAP server, including the domain.
The specified user must have permission to list groups and users and to access the following
attributes:

◦ cn

◦ sAMAccountName or uid

◦ objectGUID or entryUUID

◦ memberOf

• Password: The password associated with the username.

• Group Base DN: The fully qualified Distinguished Name (DN) of an LDAP subtree you want
to search for groups. For example, if the Group Base DN is
DC=storagegrid,DC=example,DC=com, all groups whose Distinguished Name is relative to
that base DN can be used as federated groups.
Note: The Group Unique Name values must be unique within the Group Base DN they
belong to.

• User Base DN: The fully qualified Distinguished Name (DN) of an LDAP subtree you want
to search for users.
Note: The Unique User Name values must be unique within the User Base DN they belong
to.
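
Before saving, you can optionally confirm from a command line that the specified account can
bind to the LDAP server and read the required attributes. The following ldapsearch command is
a sketch only; the hostname, bind identity, and base DN are placeholder assumptions modeled on
the Active Directory example values in this task:

ldapsearch -x -H ldap://ldap.example.com:389 \
    -D "EXAMPLE\svc.storagegrid" -W \
    -b "DC=storagegrid,DC=example,DC=com" \
    "(sAMAccountName=*)" cn sAMAccountName objectGUID memberOf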

6. Select a security setting from the Transport Layer Security (TLS) drop-down list to specify if
TLS is used to secure communications with the LDAP server.

• Use operating system CA certificate: Use the default CA certificate installed on the
operating system to secure connections.

• Use custom CA certificate: Use a custom security certificate.


If you select this setting, copy and paste the custom security certificate in the CA Certificate
text box.

• Do not use TLS: The network traffic between the StorageGRID Webscale system and the
LDAP server will not be secured.

Example
The following screenshot shows example configuration values for an LDAP server that uses
Active Directory.

7. Optionally, click Test Connection to validate your connection settings for the LDAP server.

8. Click Save.

Related concepts
Admin group permissions on page 253

Related tasks
Creating a tenant account on page 93

Related information
Using tenant accounts

Guidelines for configuring an OpenLDAP server


If you want to use an OpenLDAP server for identity federation, you must configure specific settings
on the OpenLDAP server.

Memberof and refint overlays


The memberof and refint overlays should be enabled. For more information, see the instructions for
reverse group membership maintenance in the administrator’s guide for OpenLDAP.

Indexing
You must configure the following OpenLDAP attributes with the specified index keywords:
olcDbIndex: objectClass eq
olcDbIndex: uid eq,pres,sub
olcDbIndex: cn eq,pres,sub
olcDbIndex: entryUUID eq
In addition, ensure the fields mentioned in the help for Username are indexed for optimal
performance.
See the information about reverse group membership maintenance in the administrator’s guide for
OpenLDAP.
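
If your OpenLDAP server uses online configuration (cn=config), the following ldapmodify
command is a sketch only of how one of these indexes might be added. The database DN
olcDatabase={1}mdb is an assumed example and can differ on your server; run the command as
root on the OpenLDAP server itself:

ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: entryUUID eq
EOF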

Related information
OpenLDAP documentation: Version 2.4 Administrator's Guide

Forcing synchronization with the identity source


The StorageGRID Webscale system periodically synchronizes federated groups and users from the
identity source. You can force synchronization to start if you want to enable or restrict user
permissions as quickly as possible.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• The identity source must be enabled.

Steps

1. Select Configuration > Identity Federation.

2. Click Synchronize.
A confirmation message is displayed indicating that synchronization started successfully.

Disabling identity federation


You can temporarily or permanently disable identity federation for groups and users. When identity
federation is disabled, there is no communication between the StorageGRID Webscale system and
the identity source. However, any settings you have configured are retained, allowing you to easily
re-enable identity federation in the future.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


Before you disable identity federation, you should be aware of the following:

• Federated users will be unable to sign in.

• Federated users who are currently signed in will retain access to the StorageGRID Webscale
system until their session expires, but they will be unable to sign in after their session expires.

• Synchronization between the StorageGRID Webscale system and the identity source will not
occur, and alarms will not be raised for accounts that have not been synchronized.

Steps

1. Select Configuration > Identity Federation.

2. Deselect the Enable Identity Federation checkbox.

3. Click Save.

Admin group permissions


When creating administration user groups, you select one or more permissions to control access to
specific features of the StorageGRID Webscale system. You can then assign each user to one or more
of these admin groups to determine which tasks that user can perform.
You must assign at least one permission to each group; otherwise, users belonging to that group will
not be able to sign in to the StorageGRID Webscale system.
By default, any user who belongs to a group that has at least one permission can perform the
following tasks:

• Sign in to the StorageGRID Webscale system

• View the Dashboard

• Monitor grid topology

• Monitor alarms

• Change their own password

• View certain information on the Configuration and Maintenance pages



The following list describes the permissions you can assign when creating or editing an admin
group for StorageGRID Webscale access. Any functionality not explicitly mentioned in the list
requires the Root Access permission.
Note: You can use the Grid Management API to completely deactivate certain features. When a
feature has been deactivated, the corresponding management permission no longer appears on the
Groups page.

• Root Access: Provides access to all grid administration features.

• Acknowledge Alarms: Provides access to acknowledge and respond to alarms. All signed-in
  users can monitor alarms. If you want a user to monitor grid topology and acknowledge alarms
  only, you should assign this permission.

• Change Tenant Root Password: Provides access to the Change Root Password button on the
  Tenant Accounts page, allowing you to control who can change the password for the tenant
  account's root user. Users who do not have this permission cannot see the Change Root
  Password button.
  Note: You must assign the Tenant Accounts permission to the group before you can assign this
  permission.

• Grid Topology Page Configuration: Provides access to the Configuration tabs in Grid
  Topology.

• ILM: Provides access to the following menu options:
  ◦ ILM > Rules
  ◦ ILM > Policies
  ◦ ILM > Erasure Coding
  ◦ ILM > Regions
  Note: Access to the ILM > Storage Pools and ILM > Storage Grades menu options is
  controlled by the Other Grid Configuration and Grid Topology Page Configuration
  permissions.

• Maintenance: Provides access to the following menu options:
  ◦ Maintenance > Maintenance Tasks: Expansion, Decommission, Recovery
  ◦ Maintenance > Network: Grid Network*, DNS Servers*, NTP Servers*
  ◦ Maintenance > System: License*, Recovery Package, Software Upgrade
  ◦ Configuration > System Settings: Domain Names*, Server Certificates*
  ◦ Configuration > Monitoring: Audit*
  * Users who do not have the Maintenance permission can view, but not edit, the pages marked
  with an asterisk.

• Metrics Query: Provides access to custom Prometheus metrics queries using the Metrics
  section of the Management API.

• Object Metadata Lookup: Provides access to the ILM > Object Metadata Lookup menu
  option.

• Other Grid Configuration: Provides access to the following grid configuration options:
  ◦ Configuration > System Settings: Grid Options, Link Cost, Storage Options, Display Options
  ◦ Configuration > Monitoring: Global Alarms, Notifications, Email Setup, AutoSupport, Events
  ◦ ILM: Storage Pools, Storage Grades
  Note: Access to the ILM > Storage Pools and ILM > Storage Grades options also requires
  the Grid Topology Page Configuration permission.

• Tenant Accounts: Provides access to the Tenant Accounts page from the Tenants option. Users
  who have this permission can add, edit, or remove tenant accounts. Users with this permission
  can also set the initial root password for the tenant and can sign in to the tenant until the root
  password is changed. Users who do not have this permission do not see the Tenants option in
  the menu.
  Note: Version 1 of the Grid Management API (which has been deprecated) uses this permission
  to manage tenant group policies, reset Swift admin passwords, and manage root user S3 access
  keys.

Related tasks
Deactivating features from the Grid Management API on page 257

Deactivating features from the Grid Management API


You can use the Grid Management API to completely deactivate certain features in the StorageGRID
Webscale system. When a feature is deactivated, no one can be assigned permissions to perform the
tasks related to that feature.

About this task


The Deactivated Features system allows you to prevent access to certain features in the StorageGRID
Webscale system. Deactivating a feature is the only way to prevent the root user or users who belong
to admin groups with the Root Access permission from being able to use that feature.
To understand how this functionality might be useful, consider the following scenario:
Company A is a service provider who leases the storage capacity of their StorageGRID Webscale
system by creating tenant accounts. To protect the security of their leaseholders' objects, Company A
wants to ensure that its own employees can never access any tenant account after the account has
been deployed.
Company A can accomplish this goal by using the Deactivate Features system in the Grid
Management API. By completely deactivating the Change Tenant Root Password feature in the Grid
Manager (both the UI and the API), Company A can ensure that no Admin user—including the root
user and users belonging to groups with the Root Access permission—can change the password for
any tenant account's root user.
Reactivating deactivated features
By default, you can use the Grid Management API to reactivate a feature that has been deactivated.
However, if you want to prevent deactivated features from ever being reactivated, you can deactivate
the activateFeatures feature itself.
Caution: The activateFeatures feature cannot be reactivated. If you decide to deactivate this
feature, be aware that you will permanently lose the ability to reactivate any other deactivated
features. You must contact technical support to restore any lost functionality.

For details, see the instructions for implementing S3 or Swift client applications.

Steps

1. Access the Swagger documentation for the Grid Management API.

2. Locate the Deactivate Features endpoint.

3. To deactivate a feature, such as Change Tenant Root Password, send a body to the API like this:

{ "grid": {"changeTenantRootPassword": true} }

When the request is complete, the Change Tenant Root Password feature is disabled. The Change
Tenant Root Password management permission no longer appears in the user interface, and any
API request that attempts to change the root password for a tenant will fail with “403 Forbidden.”

4. To reactivate all features, send a body to the API like this:

{ "grid": null }

When this request is complete, all features, including the Change Tenant Root Password feature,
are reactivated. The Change Tenant Root Password management permission now appears in the
user interface, and any API request that attempts to change the root password for a tenant will
succeed, assuming the user has the Root Access or Change Tenant Root Password management
permission.

Note: The previous example causes all deactivated features to be reactivated. If other features
have been deactivated that should remain deactivated, you must explicitly specify them in the
PUT request. For example, to reactivate the Change Tenant Root Password feature and
continue to deactivate the Alarm Acknowledgment feature, send this PUT request:

{ "grid": { "alarmAcknowledgment": true } }
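
The following curl commands are a sketch only of how such a request might be sent from the
command line. They assume that an authentication token is obtained from the /api/v2/authorize
endpoint and that the Deactivated Features endpoint is /api/v2/grid/deactivated-features; confirm
the exact paths and API version in the Swagger documentation before use:

# Obtain an authentication token (hostname and credentials are placeholders).
TOKEN=$(curl -s -X POST "https://admin_node/api/v2/authorize" \
    -H "Content-Type: application/json" \
    -d '{"username": "root", "password": "your_password", "cookie": false}' \
    | python3 -c "import sys, json; print(json.load(sys.stdin)['data'])")

# Deactivate the Change Tenant Root Password feature.
curl -s -X PUT "https://admin_node/api/v2/grid/deactivated-features" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"grid": {"changeTenantRootPassword": true}}'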

Related concepts
Understanding the Grid Management API on page 12

Admin user accounts


You can manage admin user accounts in the StorageGRID Webscale system and also add them to one
or more admin groups that govern access to system features. The StorageGRID Webscale system
includes one predefined local user, named “root.”
StorageGRID Webscale can be accessed by local and federated users:

• Local users: You can create admin user accounts that are local to the StorageGRID Webscale
system and add these users to StorageGRID Webscale local admin groups.

• Federated users: You can use a federated identity source (such as Active Directory or
OpenLDAP) to import administration groups and users. The identity source manages the groups
to which users belong, so you cannot add federated users to local groups. Also, you cannot edit
federated user information; this information is synchronized with the external identity source.

Although you can add and delete local users, you cannot delete the root user. After creating groups,
you assign users to one or more local groups.

Creating admin groups


You can create admin groups to manage the security permissions for a group of admin user accounts.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Configuration > Admin Groups.



2. Click Add.

3. Select either Local or Federated as the type of group.

4. For local groups, enter the group's name that will appear to users, for example, “Development
US”.

5. Enter a unique name without spaces for the group, for example, “Dev_US”.

6. Select a set of permissions. See information about admin group permissions.

7. Click Save.
A new group is created and added to the list of group names available for user accounts. User
accounts can now be associated with the new group.

Related concepts
Admin group permissions on page 253

Related tasks
Creating an admin user account on page 260
Modifying a local user's account on page 261

Modifying an admin group


You can modify an admin group to update the display name or permissions associated with the group.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Configuration > Admin Groups.

2. Select the group.


If your system includes more than 20 items, you can specify how many rows are shown on each
page at one time. You can then use your browser's find feature to search for a specific item in the
currently displayed rows.

3. Click Edit.

4. For local groups, enter the group's name that will appear to users, for example, “Development
US”.
You cannot change the unique name, which is the internal group name.

5. Select a set of permissions.


See information about admin group permissions.

6. Click Save.

Related concepts
Admin group permissions on page 253

Deleting an admin group


You can delete an admin group when you want to remove the group from the system, and remove all
permissions associated with the group. Deleting an admin group removes any admin users from the
group, but does not delete the admin users.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


When you delete a group, users assigned to that group will lose all access privileges to the
StorageGRID Webscale system, unless they are granted privileges by a different group.

Steps

1. Select Configuration > Admin Groups.

2. Select the name of the group.


If your system includes more than 20 items, you can specify how many rows are shown on each
page at one time. You can then use your browser's find feature to search for a specific item in the
currently displayed rows.

3. Click Remove.

4. Click OK.

Creating an admin user account


You can create a new local user and assign the user to a defined admin group with permissions that
govern access to system features. If an admin group with the necessary permission settings does not
exist, you must first create the admin group.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


You can create only local users. Federated user details are automatically synchronized with the
external identity source, for example, the LDAP server.

Steps

1. Select Configuration > Admin Users.

2. Click Create.
The list of group names is generated from the Groups table.

3. Enter the user's display name, unique name, and password.

4. Assign the user to one or more groups that govern the access permissions.

5. Click Save.

Related tasks
Creating admin groups on page 258

Modifying a local user's account


You can modify a local admin user's account to update the full display name or the group
membership that governs access to system features. You can also temporarily prevent a user from
accessing the system.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


You can edit only local users. Federated user details are automatically synchronized with the external
identity source, for example, the LDAP server.

Steps

1. Select Configuration > Admin Users.

2. Select the user you want to edit.


If your system includes more than 20 items, you can specify how many rows are shown on each
page at one time. You can then use your browser's find feature to search for a specific item in the
currently displayed rows.

3. Click Edit.

4. Optionally, make changes to the name or group membership.

5. Optionally, to prevent the user from accessing the system temporarily, check Deny Access.

6. Click Save.
The new settings are applied the next time the user signs out and then signs back in to the
StorageGRID Webscale system.

Deleting a local user's account


You can delete accounts for local users that no longer require access to the StorageGRID Webscale
system.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.


• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

Steps

1. Select Configuration > Admin Users.

2. Select the user you want to delete.


Note: You cannot delete the StorageGRID Webscale system's built-in root user.

If your system includes more than 20 items, you can specify how many rows are shown on each
page at one time. You can then use your browser's find feature to search for a specific item in the
currently displayed rows.

3. Click Remove.

4. Click OK to confirm the deletion.

Changing a local user's password


While local users can change their own passwords using the Change Password option in the
StorageGRID Webscale header, users with access to the Admin Users page can change passwords for
any local user.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

About this task


You can change passwords only for local users. Federated users must change their own passwords in
the external identity source (such as Active Directory or OpenLDAP).

Steps

1. Select Configuration > Admin Users.

2. From the Users page, select the user.


If your system includes more than 20 items, you can specify how many rows are shown on each
page at one time. You can then use your browser's find feature to search for a specific item in the
currently displayed rows.

3. Click Change Password.



4. Enter and confirm the password, and click Save.



What data migration is


You can migrate large amounts of data to the StorageGRID Webscale system while simultaneously
using the StorageGRID Webscale system for day-to-day operations.
The following section is a guide to understanding and planning a migration of large amounts of data
into the StorageGRID Webscale system. It is not a general guide to data migration, and it does not
include detailed steps for performing a migration. Follow the guidelines and instructions in this
section to ensure that data is migrated efficiently into the StorageGRID Webscale system without
interfering with its day-to-day operations, and that the migrated data is handled appropriately by the
StorageGRID Webscale system.

Confirming capacity of the StorageGRID Webscale system


Before migrating large amounts of data into the StorageGRID Webscale system, confirm that the
StorageGRID Webscale system has the disk capacity to handle the anticipated volume.
If the StorageGRID Webscale system includes an Archive Node and a copy of migrated objects has
been saved to nearline storage (such as tape), ensure that the Archive Node’s storage has sufficient
capacity for the anticipated volume of migrated data.
As part of the capacity assessment, look at the data profile of the objects you plan to migrate and
calculate the amount of disk capacity required. For details about monitoring the disk capacity of your
StorageGRID Webscale system, see the instructions for monitoring storage capacity and managing
disk storage.

Related concepts
Managing disk storage on page 178

Related tasks
Monitoring storage capacity for the entire grid on page 30

Determining the ILM policy for migrated data


The StorageGRID Webscale system’s ILM policy determines how many copies are made, the
locations to which copies are stored, and for how long these copies are retained. An ILM policy
consists of a set of ILM rules that describe how to filter objects and manage object data over time.
Depending on how migrated data is used and your requirements for migrated data, you might want to
define unique ILM rules for migrated data that are different from the ILM rules used for day-to-day
operations. For example, if there are different regulatory requirements for day-to-day data
management than there are for the data that is included in the migration, you might want a different
number of copies of the migrated data on a different grade of storage.
You can configure rules that apply exclusively to migrated data if it is possible to uniquely
distinguish between migrated data and object data saved from day-to-day operations.
If you can reliably distinguish between the types of data using one of the metadata criteria, you can
use that criterion to define an ILM rule that applies only to migrated data.
Before beginning data migration, ensure that you understand the StorageGRID Webscale system’s
ILM policy and how it will apply to migrated data, and that you have made and tested any changes to
the ILM policy.

Caution: An ILM policy that has been incorrectly specified can cause unrecoverable data loss.
Carefully review all changes you make to an ILM policy before activating it to make sure the
policy will work as intended.

Related concepts
What an information lifecycle management policy is on page 106

Related tasks
Configuring ILM rules on page 113

Impact of migration on operations


A StorageGRID Webscale system is designed to provide efficient operation for object storage and
retrieval, and to provide excellent protection against data loss through the seamless creation of
redundant copies of object data and metadata.
However, data migration must be carefully managed according to the instructions in this chapter to
avoid having an impact on day-to-day system operations, or, in extreme cases, placing data at risk of
loss in case of a failure in the StorageGRID Webscale system.
Migration of large quantities of data places additional load on the system. When the StorageGRID
Webscale system is heavily loaded, it responds more slowly to requests to store and retrieve objects.
This can interfere with store and retrieve requests that are integral to day-to-day operations.
Migration can also cause other operational issues. For example, when a Storage Node is nearing
capacity, the heavy intermittent load due to batch ingest can cause the Storage Node to cycle between
read-only and read-write, generating notifications.
If the heavy loading persists, queues can develop for various operations that the StorageGRID
Webscale system must perform to ensure full redundancy of object data and metadata.
Data migration must be carefully managed according to the guidelines in this document to ensure
safe and efficient operation of the StorageGRID Webscale system during migration. When migrating
data, ingest objects in batches or continuously throttle ingest. Then, continuously monitor the
StorageGRID Webscale system to ensure that various attribute values are not exceeded. Controlling
the rate of migration of data into the system is outside of the scope of StorageGRID Webscale
functionality.

Scheduling data migration


Avoid migrating data during core operational hours. Limit data migration to evenings, weekends, and
other times when system usage is low.
If possible, do not schedule data migration during periods of high activity. However, if it is not
practical to completely avoid the high activity period, it is safe to proceed as long as you closely
monitor the relevant attributes and take action if they exceed acceptable values.

Related concepts
Monitoring data migration on page 266

Monitoring data migration


Data migration must be monitored and adjusted as necessary to ensure data is placed according to the
ILM policy within the required timeframe.
The following list describes the attributes you must monitor during data migration, and the issues
that they represent.

Number of objects waiting for ILM evaluation:

1. Select Support > Grid Topology.

2. Select deployment > Overview > Main.

3. In the ILM Activity section, monitor the number of objects shown for the following attributes:

   • Awaiting - All (XQUZ): The total number of objects awaiting ILM evaluation.

   • Awaiting - Client (XCQZ): The total number of objects awaiting ILM evaluation from
     client operations (for example, ingest).

4. If the number of objects shown for either of these attributes exceeds 100,000, throttle the
   ingest rate of objects to reduce the load on the StorageGRID Webscale system.

Targeted archival system's storage capacity: If the ILM policy saves a copy of the migrated data
to a targeted archival storage system (tape or the cloud), monitor the capacity of the targeted
archival storage system to ensure that there is sufficient capacity for the migrated data.

Archive Node > ARC > Store > Store Failures (ARVF): If an alarm for this attribute is triggered,
the targeted archival storage system might have reached capacity. Check the targeted archival
storage system and resolve any issues that triggered an alarm.

Creating custom notifications for migration alarms


You might want to configure the StorageGRID Webscale system to send a notification email to the
system administrator responsible for monitoring migration if the attribute values exceed their
recommended maximum values.

Before you begin

• You must be signed in to the Grid Manager using a supported browser.

• To perform this task, you need specific access permissions. For details, see information about
controlling system access with administration user accounts and groups.

• You must have configured email settings.

• You must have a mailing list.

Steps

1. Create an email list that includes all administrators responsible for monitoring the data migration.

Optionally, you can create a template to customize the subject line, header, and footer of data
migration notification emails.

2. Create a Global Custom alarm for each attribute you need to monitor during data migration.

a. Select Configuration > Global Alarms.

b. Under Default Alarms, search for the Default alarm for the first attribute. Under Filter by,
select Attribute Code, then type the four-letter code for the attribute (for example, ARVF).

c. Click Submit.

d. In the results list, click Copy next to the alarm you want to modify.
The alarm moves to the Global Custom Alarms table.

e. Under Global Custom Alarms, in the Mailing List column for the copied attribute, add the
mailing list.

f. Repeat for each remaining attribute.

g. When finished creating Global Custom alarms, click Apply Changes.

After you finish


Administrators responsible for monitoring data migration now receive an email notification if the
values of key attributes exceed their maximum acceptable levels during migration.
Remember to disable these notifications after data migration is complete. Note that Global Custom
alarms override Default alarms, and that Global Custom alarms cannot be triggered if Custom
alarms are enabled at the grid node level.

Related tasks
Configuring email server settings on page 57
Creating mailing lists on page 59

Integrating Tivoli Storage Manager


This section includes best practices and set-up information for integrating an Archive Node with a
Tivoli Storage Manager (TSM) server, including Archive Node operational details that impact the
configuration of the TSM server.

Archive Node configuration and operation


Your StorageGRID Webscale system manages the Archive Node as a location where objects are
stored indefinitely and are always accessible.
When an object is ingested, copies are made to all required locations, including Archive Nodes,
based on the Information Lifecycle Management (ILM) rules defined for your StorageGRID
Webscale system. The Archive Node acts as a client to a TSM server, and the TSM client libraries are
installed on the Archive Node by the StorageGRID Webscale software installation process. Object
data directed to the Archive Node for storage is saved directly to the TSM server as it is received. The
Archive Node does not stage object data before saving it to the TSM server, nor does it perform
object aggregation. However, the Archive Node can submit multiple copies to the TSM server in a
single transaction when data rates warrant.
After the Archive Node saves object data to the TSM server, the object data is managed by the TSM
server using its lifecycle/retention policies. These retention policies must be defined to be compatible
with the operation of the Archive Node. That is, object data saved by the Archive Node must be
stored indefinitely and must always be accessible by the Archive Node, unless it is deleted by the
Archive Node.
There is no connection between the StorageGRID Webscale system’s ILM rules and the TSM
server’s lifecycle/retention policies. Each operates independently of the other; however, as each
object is ingested into the StorageGRID Webscale system, you can assign it a TSM management
class. This management class is passed to the TSM server along with object data. Assigning different
management classes to different object types permits you to configure the TSM server to place object
data in different storage pools, or to apply different migration or retention policies as required. For
example, objects identified as database backups (temporary content than can be overwritten with
newer data) might be treated differently than application data (fixed content that must be retained
indefinitely).
The Archive Node can be integrated with a new or an existing TSM server; it does not require a
dedicated TSM server. TSM servers can be shared with other clients, provided that the TSM server is
sized appropriately for the maximum expected load. TSM must be installed on a server or virtual
machine separate from the Archive Node.
It is possible to configure more than one Archive Node to write to the same TSM server; however,
this configuration is only recommended if the Archive Nodes write different sets of data to the TSM
server. Configuring more than one Archive Node to write to the same TSM server is not
recommended when each Archive Node writes copies of the same object data to the archive. In the
latter scenario, both copies are subject to a single point of failure (the TSM server) for what are
supposed to be independent, redundant copies of object data.
Archive Nodes do not make use of the Hierarchical Storage Management (HSM) component of TSM.

Configuration best practices


When you are sizing and configuring your TSM server there are best practices you should apply to
optimize it to work with the Archive Node.
When sizing and configuring the TSM server, you should consider the following factors:

• Because the Archive Node does not aggregate objects before saving them to the TSM server, the
TSM database must be sized to hold references to all objects that will be written to the Archive
Node.

• Archive Node software cannot tolerate the latency involved in writing objects directly to tape or
other removable media. Therefore, the TSM server must be configured with a disk storage pool
for the initial storage of data saved by the Archive Node whenever removable media are used.

• You must configure TSM retention policies to use event-based retention. The Archive Node does
not support creation-based TSM retention policies. Use the recommended settings of
retmin=0 and retver=0 in the retention policy, which indicates that retention begins when the
Archive Node triggers a retention event and that content is retained for 0 days after that. (These
values for retmin and retver are optional.)

The disk pool must be configured to migrate data to the tape pool (that is, the tape pool must be the
NXTSTGPOOL of the disk pool). The tape pool must not be configured as a copy pool of the disk
pool with simultaneous write to both pools (that is, the tape pool cannot be a COPYSTGPOOL for
the disk pool). To create offline copies of the tapes containing Archive Node data, configure the TSM
server with a second tape pool that is a copy pool of the tape pool used for Archive Node data.
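
The following dsmadmc commands are a sketch only of how such an offline copy pool might be
defined and kept current; the pool name SGWSTapeCopyPool is an assumed example, and
DeviceClassName is the device class defined for your tape library:

define stgpool SGWSTapeCopyPool DeviceClassName pooltype=copy maxscratch=XX
backup stgpool SGWSTapePool SGWSTapeCopyPool

Run the backup stgpool command regularly (for example, from an administrative schedule) so
that the copy pool stays synchronized with the primary tape pool.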

Completing the Archive Node setup


The Archive Node is not functional after you complete the installation process. Before the
StorageGRID Webscale system can save objects to the TSM Archive Node, you must complete the
installation and configuration of the TSM server and configure the Archive Node to communicate
with the TSM server.
For more information about optimizing TSM retrieval and store sessions, see information about
managing archival storage.
Refer to the following IBM documentation, as necessary, as you prepare your TSM server for
integration with the Archive Node in a StorageGRID Webscale system:

• IBM Tape Device Drivers Installation and User’s Guide
http://www.ibm.com/support/docview.wss?rs=577&uid=ssg1S7002972

• IBM Tape Device Drivers Programming Reference
http://www.ibm.com/support/docview.wss?rs=577&uid=ssg1S7003032

Related concepts
Managing archival storage on page 203

Installing a new TSM server


You can integrate the Archive Node with either a new or an existing TSM server. If you are installing
a new TSM server, follow the instructions in your TSM documentation to complete the installation.
Note: An Archive Node cannot be co-hosted with a TSM server.

Configuring the TSM server


This section includes sample instructions for preparing a TSM server following TSM best practices.
The following instructions guide you through the process of:

• Defining a disk storage pool, and a tape storage pool (if required) on the TSM server

• Defining a domain policy that uses the TSM management class for the data saved from the
Archive Node, and registering a node to use this domain policy

These instructions are provided for your guidance only; they are not intended to replace TSM
documentation, or to provide complete and comprehensive instructions suitable for all configurations.
Deployment specific instructions should be provided by a TSM administrator who is familiar both
with your detailed requirements, and with the complete set of TSM Server documentation.

Defining TSM tape and disk storage pools


The Archive Node writes to a disk storage pool. To archive content to tape, you must configure the
disk storage pool to move content to a tape storage pool.

About this task


For a TSM server, you must define a tape storage pool and a disk storage pool within Tivoli Storage
Manager. After the disk pool is defined, create a disk volume and assign it to the disk pool. A tape
pool is not required if your TSM server uses disk‐only storage.
You must complete a number of steps on your TSM server before you can create a tape storage pool.
(Create a tape library and at least one drive in the tape library. Define a path from the server to the
library and from the server to the drives, and then define a device class for the drives.) The details of
these steps can vary depending upon the hardware configuration and storage requirements of the site.
For more information, see the TSM documentation.
The following set of instructions illustrates the process. Be aware that the requirements for your
site might differ depending on your deployment. For configuration details and instructions, see the
TSM documentation.
Note: You must log on to the server with administrative privileges and use the dsmadmc tool to
execute the following commands.

Steps

1. Create a tape library.


define library tapelibrary libtype=scsi

Where tapelibrary is an arbitrary name chosen for the tape library, and the value of libtype
can vary depending upon the type of tape library.

2. Define a path from the server to the tape library.


define path servername tapelibrary srctype=server desttype=library
device=lib-devicename

• servername is the name of the TSM server

• tapelibrary is the tape library name you defined

• lib-devicename is the device name for the tape library

3. Define a drive for the library.


define drive tapelibrary drivename

• drivename is the name you want to specify for the drive

• tapelibrary is the tape library name you defined

You might want to configure an additional drive or drives, depending upon your hardware
configuration. (For example, if the TSM server is connected to a Fibre Channel switch that has
two inputs from a tape library, you might want to define a drive for each input.)

4. Define a path from the server to the drive you defined.


define path servername drivename srctype=server desttype=drive
library=tapelibrary device=drive-dname

• drive-dname is the device name for the drive

• tapelibrary is the tape library name you defined

Repeat for each drive that you have defined for the tape library, using a separate drivename and
drive-dname for each drive.

5. Define a device class for the drives.


define devclass DeviceClassName devtype=lto library=tapelibrary
format=tapetype

• DeviceClassName is the name of the device class

• lto is the type of drive connected to the server

• tapelibrary is the tape library name you defined

• tapetype is the tape type; for example, ultrium3

6. Add tape volumes to the inventory for the library.


checkin libvolume tapelibrary

tapelibrary is the tape library name you defined.

7. Create the primary tape storage pool.


define stgpool SGWSTapePool DeviceClassName description=description
collocate=filespace maxscratch=XX

• SGWSTapePool is the name of the Archive Node’s tape storage pool. You can select any name
for the tape storage pool (as long as the name uses the syntax conventions expected by the
TSM server).

• DeviceClassName is the name of the device class name for the tape library.

• description is a description of the storage pool that can be displayed on the TSM server
using the query stgpool command. For example: “Tape storage pool for the Archive
Node.”

• collocate=filespace specifies that the TSM server should write objects from the same
file space into a single tape.

• XX is one of the following:

◦ The number of empty tapes in the tape library (in the case that the Archive Node is the
only application using the library).

◦ The number of tapes allocated for use by the StorageGRID Webscale system (in instances
where the tape library is shared).

8. On a TSM server, create a disk storage pool. At the TSM server’s administrative console, enter
define stgpool SGWSDiskPool disk description=description
maxsize=maximum_file_size nextstgpool=SGWSTapePool highmig=percent_high
lowmig=percent_low

• SGWSDiskPool is the name of the Archive Node’s disk pool. You can select any name for the
disk storage pool (as long as the name uses the syntax conventions expected by the TSM).

• description is a description of the storage pool that can be displayed on the TSM server
using the query stgpool command. For example, “Disk storage pool for the Archive
Node.”

• maximum_file_size forces objects larger than this size to be written directly to tape, rather
than being cached in the disk pool. It is recommended to set maximum_file_size to 10 GB.

• nextstgpool=SGWSTapePool refers the disk storage pool to the tape storage pool defined
for the Archive Node.

• percent_high sets the value at which the disk pool begins to migrate its contents to the tape
pool. It is recommended to set percent_high to 0 so that data migration begins immediately.

• percent_low sets the value at which migration to the tape pool stops. It is recommended to
set percent_low to 0 to clear out the disk pool.

9. On a TSM server, create a disk volume (or volumes) and assign it to the disk pool.
define volume SGWSDiskPool volume_name formatsize=size

• SGWSDiskPool is the disk pool name.

• volume_name is the full path to the location of the volume (for example, /var/local/arc/
stage6.dsm) on the TSM server where it writes the contents of the disk pool in preparation
for transfer to tape.

• size is the size, in MB, of the disk volume.

For example, to create a single disk volume such that the contents of a disk pool fill a single tape,
set the value of size to 200000 when the tape volume has a capacity of 200 GB.
However, it might be desirable to create multiple disk volumes of a smaller size, as the TSM
server can write to each volume in the disk pool. For example, if the tape size is 250 GB, create
25 disk volumes with a size of 10 GB (10000) each.
The TSM server preallocates space in the directory for the disk volume. This can take some time
to complete (more than three hours for a 200 GB disk volume).
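
If you create many small volumes, defining each one manually can be tedious. The following
shell loop is a sketch only; the administrative credentials and volume paths are assumed
examples. It shows how 25 volumes of 10 GB each might be defined through the dsmadmc
command-line client:

for i in $(seq 1 25); do
    dsmadmc -id=admin -password=admin_password \
        "define volume SGWSDiskPool /var/local/arc/stage$i.dsm formatsize=10000"
done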

Defining a domain policy and registering a node


You need to define a domain policy that uses the TSM management class for the data saved from the
Archive Node, and then register a node to use this domain policy.
Note: Archive Node processes can leak memory if the client password for the Archive Node in
Tivoli Storage Manager (TSM) expires. Ensure that the TSM server is configured so the client
username/password for the Archive Node never expires.

When registering a node on the TSM server for the use of the Archive Node (or updating an existing
node), you must specify the number of mount points that the node can use for write operations by
specifying the MAXNUMMP parameter to the REGISTER NODE command. The number of mount
points is typically equivalent to the number of tape drive heads allocated to the Archive Node. The
number specified for MAXNUMMP on the TSM server must be at least as large as the value set for
the ARC > Target > Configuration > Main > Maximum Store Sessions for the Archive Node,
which is set to a value of 0 or 1, as concurrent store sessions are not supported by the Archive Node.
The value of MAXSESSIONS set for the TSM server controls the maximum number of sessions that
can be opened to the TSM server by all client applications. The value of MAXSESSIONS specified
on the TSM server must be at least as large as the value specified for ARC > Target > Configuration >
Main > Number of Sessions in the Grid Manager for the Archive Node. The Archive Node
concurrently creates at most one session per mount point plus a small number (< 5) of additional
sessions.
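
The following dsmadmc commands are a sketch only of how these parameters might be adjusted
on an existing configuration; the node name and values are assumed examples:

update node arc-user maxnummp=1
setopt MaxSessions 50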
The TSM node assigned to the Archive Node uses a custom domain policy tsm-domain. The tsm-
domain domain policy is a modified version of the “standard” domain policy, configured to write to
tape and with the archive destination set to be the StorageGRID Webscale system’s storage pool
(SGWSDiskPool).

Note: You must log in to the TSM server with administrative privileges and use the dsmadmc tool
to create and activate the domain policy.

Creating and activating the domain policy


You must create a domain policy and then activate it to configure the TSM server to save data sent
from the Archive Node.

Steps

1. Create a domain policy.


copy domain standard tsm-domain

2. If you are not using an existing management class, enter the following commands:
define policyset tsm-domain standard
define mgmtclass tsm-domain standard default

default is the default management class for the deployment.

3. Create a copygroup to the appropriate storage pool. Enter (on one line):
define copygroup tsm-domain standard default type=archive
destination=SGWSDiskPool retinit=event retmin=0 retver=0

default is the default management class for the Archive Node. The values of retinit, retmin,
and retver have been chosen to reflect the retention behavior currently used by the Archive Node.
Note: Do not set retinit=create. Setting retinit=create blocks the Archive Node from
deleting content, because retention events are used to remove content from the TSM server.

4. Assign the management class to be the default.


assign defmgmtclass tsm-domain standard default

5. Set the new policy set as active.


activate policyset tsm-domain standard

Ignore the “no backup copy group” warning that appears when you enter the activate command.

6. Register a node to use the new policy set on the TSM server. On the TSM server, enter (on one
line):
register node arc-user arc-password passexp=0 domain=tsm-domain
MAXNUMMP=number-of-sessions

arc-user and arc-password are the same client node name and password that you define on the
Archive Node, and the value of MAXNUMMP is set to the number of tape drives reserved for
Archive Node store sessions.
Note: By default, registering a node creates an administrative user ID with client owner
authority, with the password defined for the node.

Copyright information
Copyright © 2018 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means—
graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
Data contained herein pertains to a commercial item (as defined in FAR 2.101) and is proprietary to
NetApp, Inc. The U.S. Government has a non-exclusive, non-transferrable, non-sublicensable,
worldwide, limited irrevocable license to use the Data only in connection with and in support of the
U.S. Government contract under which the Data was delivered. Except as provided herein, the Data
may not be used, disclosed, reproduced, modified, performed, or displayed without the prior written
approval of NetApp, Inc. United States Government license rights for the Department of Defense are
limited to those rights identified in DFARS clause 252.227-7015(b).

Trademark information
NETAPP, the NETAPP logo, and the marks listed on the NetApp Trademarks page
(http://www.netapp.com/us/legal/netapptmlist.aspx) are trademarks of NetApp, Inc. Other
company and product names may be trademarks of their respective owners.

How to send comments about documentation and receive update notifications
You can help us to improve the quality of our documentation by sending us your feedback. You can
receive automatic notification when production-level (GA/FCS) documentation is initially released or
important changes are made to existing production-level documents.
If you have suggestions for improving this document, send us your comments by email.
doccomments@netapp.com
To help us direct your comments to the correct division, include in the subject line the product name,
version, and operating system.
If you want to be notified automatically when production-level documentation is released or
important changes are made to existing production-level documents, follow the Twitter account
@NetAppDoc.
You can also contact us in the following ways:

• NetApp, Inc., 1395 Crossman Ave., Sunnyvale, CA 94089 U.S.

• Telephone: +1 (408) 822-6000

• Fax: +1 (408) 822-4501

• Support telephone: +1 (888) 463-8277

